
WO2025194253A1 - Size estimation of particular features for acoustic inspection - Google Patents

Size estimation of particular features for acoustic inspection

Info

Publication number
WO2025194253A1
WO2025194253A1 (PCT/CA2025/050364)
Authority
WO
WIPO (PCT)
Prior art keywords
feature
outline
imaging data
sizing
machine learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CA2025/050364
Other languages
English (en)
Inventor
Angélique BOUCHARD
Ivan C. Kraljic
Guillaume Painchaud-April
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Evident Canada Inc
Original Assignee
Evident Canada Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Evident Canada Inc filed Critical Evident Canada Inc
Publication of WO2025194253A1
Current legal status: Pending


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 29/00: Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N 29/44: Processing the detected response signal, e.g. electronic circuits specially adapted therefor
    • G01N 29/4481: Neural networks
    • G01N 29/04: Analysing solids
    • G01N 29/043: Analysing solids in the interior, e.g. by shear waves
    • G01N 29/06: Visualisation of the interior, e.g. acoustic microscopy
    • G01N 29/0654: Imaging
    • G01N 29/069: Defect imaging, localisation and sizing using, e.g. time of flight diffraction [TOFD], synthetic aperture focusing technique [SAFT], Amplituden-Laufzeit-Ortskurven [ALOK] technique
    • G01N 29/22: Details, e.g. general constructional or apparatus details
    • G01N 29/26: Arrangements for orientation or scanning by relative movement of the head and the sensor
    • G01N 29/262: Arrangements for orientation or scanning by electronic orientation or focusing, e.g. with phased arrays
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00: Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/52: Details of systems according to group G01S 15/00
    • G01N 2291/00: Indexing codes associated with group G01N 29/00
    • G01N 2291/04: Wave modes and trajectories
    • G01N 2291/044: Internal reflections (echoes), e.g. on walls or defects

Definitions

  • This document pertains generally, but not by way of limitation, to imaging techniques for use in Non-Destructive Testing (NDT), and more particularly, to analysis of phased array (PA) acoustic inspection images, where such analysis may include identification or feature size estimation (or both) performed using a machine learning technique.
  • Another approach for NDT may include use of an acoustic inspection technique, such as where one or more electroacoustic transducers are used to insonify a region on or within the object under test, and acoustic energy that is scattered or reflected may be detected and processed. Such scattered or reflected energy may be referred to as an acoustic echo signal.
  • an acoustic inspection scheme involves use of acoustic frequencies in an ultrasonic range of frequencies, such as including pulses having energy in a specified range of values from, for example, a few hundred kilohertz to tens of megahertz, as an illustrative example.
  • This disclosure describes techniques for detecting and sizing a feature in a nondestructive testing (NDT) image of an object under inspection.
  • this disclosure describes techniques that introduce a two-step approach that combines machine learning with a precise sizing algorithm. For example, a machine learning model is initially trained to perform a rough detection of features within the imaging data. This model is capable of identifying potential areas of interest, thereby providing a preliminary outline of the features. Subsequently, a deterministic sizing algorithm is applied to refine these outlines, ensuring accurate measurement of the feature dimensions. This approach not only enhances the precision of feature sizing but also allows for greater flexibility in adapting to various imaging scenarios. By leveraging the strengths of both machine learning and deterministic algorithms, the described method offers a comprehensive solution for improving the accuracy and reliability of feature detection and sizing in NDT images.
  • this disclosure is directed to a method of training processing circuitry using machine learning for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection, the method comprising: acquiring acoustic imaging data of a first object having a first feature; outlining the first feature in the acoustic imaging data; applying a sizing algorithm to the outlined first feature to adjust a size of the outline and generate a ground truth dataset; and training a machine learning model using the ground truth dataset.
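  • The training steps of the preceding method may be sketched as follows. This is a minimal illustration, not the disclosed implementation: the function names and the simple fixed-threshold sizing step are assumptions for demonstration only.

```python
import numpy as np

def sizing_algorithm(image, rough_outline, threshold):
    # Deterministic sizing step (simplified): keep only the pixels of
    # the rough outline whose amplitude meets or exceeds the threshold.
    return rough_outline & (image >= threshold)

def build_ground_truth(image, rough_outline, threshold):
    # One ground-truth mask = rough outline adjusted by the sizing step.
    return sizing_algorithm(image, rough_outline, threshold)

# Toy 4x4 amplitude image containing one bright 2x2 feature.
image = np.array([[0.1, 0.1, 0.1, 0.1],
                  [0.1, 0.9, 0.8, 0.1],
                  [0.1, 0.7, 0.9, 0.1],
                  [0.1, 0.1, 0.1, 0.1]])
# Rough outline (e.g., a generous human-drawn bounding box).
rough = np.ones_like(image, dtype=bool)
ground_truth = build_ground_truth(image, rough, threshold=0.5)
# A collection of such adjusted masks would form the ground truth
# dataset used to train the machine learning model.
```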
  • this disclosure is directed to a method of using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model, the method comprising: acquiring acoustic imaging data of the object having a feature; applying the acoustic imaging data to the previously trained machine learning model to generate an outline of the feature in the imaging data; applying a sizing algorithm to the outlined feature to adjust the outline of the feature outputted by the previously trained machine learning model; generating an NDT image with the adjusted outline of the feature; and displaying, on a user interface, the NDT image with the adjusted outline of the feature.
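  • At inference time, the same two-step flow applies: the trained model produces a rough outline, then the sizing algorithm tightens it. A minimal sketch, in which a thresholding stub stands in for the trained model (an assumption; the real model is a learned network):

```python
import numpy as np

def model_predict(image):
    # Stub for the previously trained ML model: returns a rough
    # binary outline of candidate feature pixels.
    return image > 0.3

def adjust_outline(image, outline, threshold):
    # Deterministic sizing step applied to the model's outline.
    return outline & (image >= threshold)

image = np.array([[0.0, 0.4, 0.0],
                  [0.4, 0.9, 0.4],
                  [0.0, 0.4, 0.0]])
rough = model_predict(image)                  # 5 candidate pixels
adjusted = adjust_outline(image, rough, 0.6)  # only the peak remains
# The NDT image would then be rendered with `adjusted` overlaid and
# displayed on a user interface.
```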
  • FIG. 1 illustrates generally an example comprising an acoustic inspection system, such as may be used to perform one or more techniques described herein.
  • FIG. 2A is an image of an example of a non-destructive testing (NDT) image of an object under inspection that includes a flaw.
  • FIG. 2B is the NDT image of FIG. 2A including an outline generated by a generic machine learning (ML) model to depict a size of the flaw.
  • FIG. 2C is the NDT image of FIG. 2A including an outline generated by an untrained human to depict the size of the flaw.
  • FIG. 3 is a flow diagram of an example of a method 300 of training processing circuitry using machine learning for detecting and sizing a feature in an NDT image of an object under inspection.
  • FIG. 4A depicts acoustic imaging data of another object under inspection.
  • FIG. 4B depicts the features of the acoustic imaging data of FIG. 4A with outlines generated by a previously trained machine learning model.
  • FIG. 7 depicts acoustic imaging data displayed as an NDT image of an object under inspection.
  • FIG. 8 depicts acoustic imaging data displayed as an NDT image.
  • FIG. 9 is a flow diagram of an example of a method of using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model.
  • FIG. 10 is a flow diagram of an example of a method of training processing circuitry using machine learning for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection.
  • FIG. 11 is a flow diagram of an example of a method of using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model.
  • FIG. 12 shows an example of a machine learning module that may implement various techniques of this disclosure.
  • FIG. 13 illustrates a block diagram of an example of a machine upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform.
  • Acoustic testing, such as ultrasound-based inspection, may include focusing or beam-forming techniques to aid in construction of data plots or images representing a region of interest within the test object.
  • Use of an array of ultrasound transducer elements may include use of a phased-array beamforming approach and may be referred to as Phased Array Ultrasound Testing (PAUT).
  • a delay-and-sum beamforming technique may be used such as including coherently summing time-domain representations of received acoustic signals from respective transducer elements or apertures.
  • a “Full Matrix Capture” (FMC) technique may be used where one or more elements in an array (or apertures defined by such elements) are used to transmit an acoustic pulse and other elements are used to receive scattered or reflected acoustic energy, and a matrix is constructed of time-series (e.g., A-Scan) representations corresponding to a sequence of transmit-receive cycles in which the transmissions are occurring from different elements (or corresponding apertures) in the array.
  • Beamforming and imaging may be performed using a technique such as a “Total Focusing Method” (TFM), in which a coherent summation may be performed using A-scan data acquired using an FMC technique.
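  • The FMC/TFM combination may be sketched as follows, under simplifying assumptions (single homogeneous medium with known wave speed, elements on the z = 0 line, nearest-sample time-of-flight lookup); this is illustrative, not the disclosed beamformer.

```python
import numpy as np

def tfm_pixel(fmc, elem_x, px, pz, c, fs):
    # TFM amplitude at pixel (px, pz): for every transmit/receive
    # element pair, pick the A-scan sample whose time of flight matches
    # the tx -> pixel -> rx path, and sum coherently.
    # fmc: (n_tx, n_rx, n_samples) full-matrix-capture array.
    total = 0.0
    for tx in range(len(elem_x)):
        d_tx = np.hypot(px - elem_x[tx], pz)        # tx element -> pixel
        for rx in range(len(elem_x)):
            d_rx = np.hypot(px - elem_x[rx], pz)    # pixel -> rx element
            k = int(round((d_tx + d_rx) / c * fs))  # nearest sample index
            if 0 <= k < fmc.shape[2]:
                total += fmc[tx, rx, k]
    return total

# One element at x = 0 and a scatterer at (0, 3): round trip of 6
# distance units, so a unit echo lands at sample 6 (with c = 1, fs = 1).
fmc = np.zeros((1, 1, 10))
fmc[0, 0, 6] = 1.0
amp = tfm_pixel(fmc, np.array([0.0]), 0.0, 3.0, c=1.0, fs=1.0)
```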
  • phase-based approach may be used for one or more of acquisition, storage, or subsequent analysis.
  • phase-based approach may include coherent summation of normalized or quantized representations of A-Scan data corresponding to phase information.
  • such a phase-based approach may be referred to as phase coherence imaging (PCI).
  • accurately detecting and sizing features such as flaws or other features in imaging data plays an important role in assessing the integrity of materials and structures.
  • analysis of phased-array acoustic inspection (PA) imaging data involves human inspectors identifying features and manually applying known criteria to measure the size of features (e.g., using an approach such as a -6 dB method). Relying on manual inspection and interpretation of images may be time-consuming and prone to human error.
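  • The -6 dB method mentioned above sizes a feature by the extent over which the echo amplitude stays within 6 dB of its peak, i.e., at or above half the peak amplitude on a linear scale (20·log10(0.5) ≈ -6.02 dB). A sketch on a 1-D amplitude profile; the profile values and pitch are illustrative:

```python
import numpy as np

def size_minus_6db(profile, pitch):
    # -6 dB threshold: half of the peak amplitude (linear scale).
    threshold = profile.max() / 2.0
    # Feature extent: first-to-last sample at or above the threshold.
    above = np.nonzero(profile >= threshold)[0]
    return (above[-1] - above[0] + 1) * pitch

# Amplitude profile across a flaw, one sample per mm.
profile = np.array([0.1, 0.3, 0.6, 1.0, 0.7, 0.4, 0.1])
length_mm = size_minus_6db(profile, pitch=1.0)  # samples 2..4 qualify
```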
  • Some solutions incorporate automated algorithms to assist in the detection and sizing of features. The present inventors have recognized that these methods often fall short in terms of precision and reliability. For example, some solutions struggle with accurately delineating the boundaries of features, particularly in complex or noisy environments.
  • the present inventors have recognized a need for a more robust and flexible approach that may improve the accuracy and efficiency of feature detection and sizing in NDT applications.
  • the present inventors have recognized, among other things, that manual detection may be augmented or replaced such as by using automated identification and sizing of features with a combination of artificial intelligence (e.g., machine learning) and non-AI-based algorithmic approaches.
  • FIG. 1 illustrates generally an example comprising an acoustic inspection system 100, such as may be used to perform one or more techniques described herein.
  • the acoustic inspection system 100 of FIG. 1 is an example of an acoustic imaging modality, such as an acoustic phased array system, that may implement various techniques of this disclosure.
  • the inspection system 100 may include a test instrument 140, such as a hand-held or portable assembly.
  • the test instrument 140 may be electrically coupled to a probe assembly, such as using a multi-conductor interconnect 130.
  • the probe assembly 150 may include one or more electroacoustic transducers, such as a transducer array 152 including respective transducers 154A through 154N.
  • the transducer array may follow a linear or curved contour, or may include an array of elements extending in two axes, such as providing a matrix of transducer elements.
  • the elements need not be square in footprint or arranged along a straight-line axis. Element size and pitch may be varied according to the inspection application.
  • a modular probe assembly 150 configuration may be used, such as to allow a test instrument 140 to be used with various different probe assemblies 150.
  • the transducer array 152 includes piezoelectric transducers, such as may be acoustically coupled to a target 158 (e.g., an object under test) through a coupling medium 156.
  • the coupling medium may include a fluid or gel or a solid membrane (e.g., an elastomer or other polymer material), or a combination of fluid, gel, or solid structures.
  • an acoustic transducer assembly may include a transducer array coupled to a wedge structure comprising a rigid thermoset polymer having known acoustic propagation characteristics (for example, Rexolite® available from C-Lec Plastics Inc.), and water may be injected between the wedge and the structure under test as a coupling medium 156 during testing.
  • the test instrument 140 may include digital and analog circuitry, such as a frontend circuit 122 including one or more transmit signal chains, receive signal chains, or switching circuitry (e.g., transmit/receive switching circuitry).
  • the transmit signal chain may include amplifier and filter circuitry, such as to provide transmit pulses for delivery through an interconnect 130 to a probe assembly 150 for insonification of the target 158, such as to image or otherwise detect a flaw 160 on or within the target 158 structure by receiving scattered or reflected acoustic energy elicited in response to the insonification.
  • test protocol may be performed using coordination between multiple test instruments 140, such as in response to an overall test scheme established from a master test instrument 140, or established by another remote system such as a computing facility 108 or general purpose computing device such as a laptop 132, tablet, smart-phone, desktop computer, or the like.
  • the test scheme may be established according to a published standard or regulatory requirement and may be performed upon initial fabrication or on a recurring basis for ongoing surveillance, as illustrative examples.
  • the receive signal chain of the front-end circuit 122 may include one or more filters or amplifier circuits, along with an analog-to-digital conversion facility, such as to digitize echo signals received using the probe assembly 150. Digitization may be performed coherently, such as to provide multiple channels of digitized data aligned or referenced to each other in time or phase.
  • the front-end circuit 122 may be coupled to and controlled by one or more processor circuits, such as a processor circuit 102 included as a portion of the test instrument 140.
  • the processor circuit 102 may be coupled to a memory circuit, such as to execute instructions that cause the test instrument 140 to perform one or more of acoustic transmission, acoustic acquisition, processing, or storage of data relating to an acoustic inspection, or to otherwise perform techniques as shown and described herein.
  • the test instrument 140 may be communicatively coupled to other portions of the system 100, such as using a wired or wireless communication interface 120.
  • performance of one or more techniques as shown and described in this disclosure may be accomplished on-board the test instrument 140 or using other processing or storage facilities such as using a computing facility 108 or a general-purpose computing device such as a laptop 132, tablet, smart-phone, desktop computer, or the like.
  • processing tasks that would be undesirably slow if performed on-board the test instrument 140 or beyond the capabilities of the test instrument 140 may be performed remotely (e.g., on a separate system), such as in response to a request from the test instrument 140.
  • storage of imaging data or intermediate data such as A-line matrices of time-series data may be accomplished using remote facilities communicatively coupled to the test instrument 140.
  • the test instrument may include a display 110, such as for presentation of configuration information or results, and an input device 112 such as including one or more of a keyboard, trackball, function keys or soft keys, mouse-interface, touch-screen, stylus, or the like, for receiving operator commands, configuration information, or responses to queries.
  • the acoustic inspection system 100 may acquire acoustic imaging data, such as FMC data or virtual source aperture (VSA) data, of a material using an acoustic imaging modality, such as an acoustic phased array system.
  • phased array linear scanning is another technique that may be used to acquire acoustic imaging data, where phased array linear scanning includes one A-scan per aperture along the array, and the aperture is shifted sequentially to produce a discrete set of contiguous A-scans.
  • the processor circuit 102 may then generate an acoustic imaging data set, such as a scattering matrix (S-matrix), plane wave matrix, or other matrix or data set, corresponding to an acoustic propagation mode, such as pulse echo direct (TT), self-tandem (TT-T), and/or pulse echo with skip (TT-TT).
  • the processor circuit 102 or another processor circuit may be trained using machine learning so as to detect and size a feature in a nondestructive testing (NDT) image of an object under inspection. Additionally or alternatively, the processor circuit 102 or another processor circuit may be used for detecting and sizing a feature in NDT images of an object under inspection using a previously trained machine learning model.
  • the techniques shown and described in this document may be performed using a portion or an entirety of an inspection system 100 as shown in FIG. 1 or otherwise using a machine 1300 as discussed below in relation to FIG. 13.
  • FIG. 2A is an image of an example of a non-destructive testing (NDT) image 200 of an object under inspection that includes a feature, namely a flaw 202.
  • FIG. 2B is the NDT image 200 of FIG. 2A including an outline 204 generated by a generic machine learning (ML) model to depict a size of the flaw 202.
  • FIG. 2C is the NDT image 200 of FIG. 2A including an outline 206 generated by an untrained human to depict the size of the flaw 202.
  • Training an ML model to detect flaws is done with supervised learning. Typically, a human annotator draws bounding boxes around each flaw, or roughly segments each flaw by hand or with a pre-trained ML model.
  • FIG. 3 is a flow diagram of an example of a method 300 of training processing circuitry using machine learning for detecting and sizing a feature in an NDT image of an object under inspection.
  • accurate sizing is performed before training the AI. A system, such as the inspection system 100 of FIG. 1, acquires acoustic imaging data 302 of an object under inspection, such as the target 158 of FIG. 1, with a feature 304, e.g., flaw, back wall echo, front wall echo, etc.
  • the acoustic imaging data 302 is shown as an NDT image and includes a plurality of pixels 316 (for 2D images) or voxels (for 3D images).
  • a pixel is the smallest unit of a digital 2D image or display, representing a single point of color or intensity in a grid.
  • a voxel is the 3D equivalent of a pixel, representing a discrete unit of volume in a three-dimensional space.
  • Each pixel in the acoustic imaging data 302 is a representation of an amplitude of the signal that was received by the transducer array 152 of FIG. 1.
  • the method 300 includes outlining the feature 304 in the imaging data with an outline 306.
  • a human operator/inspector generates the rough outline 306, e.g., a bounding box, around the feature 304 in the acoustic imaging data 302.
  • an algorithm or automated detection system generates the rough outline 306.
  • This initial outline 306 serves as a starting point for more precise outlines.
  • the method then applies a sizing algorithm 308 to the outline 306 of the flaw to adjust its size and create a ground truth dataset.
  • the sizing algorithm 308 analyzes individual pixels (or voxels).
  • the sizing algorithm 308 compares the representation of an amplitude of the signal of each pixel (or voxel) to a threshold value.
  • a pixel (or voxel) is included in the outline 306 of the feature 304 when the amplitude of the signal associated with pixel 316 equals or exceeds the threshold value, and is excluded when the amplitude falls below the threshold value.
  • the sizing algorithm 308 adjusts the rough outline 306 to generate a precise outline 310, or accurate ground truth data, according to NDT sizing criteria.
  • the precise outline 310, along with other ground truth data such as other precise outlines, forms the ground truth dataset 312 that is used to train an ML model.
  • the trained ML model 314 is trained using the ground truth dataset 312.
  • the trained ML model 314 is configured to detect features in NDT images and, in some examples, size those features.
  • One advantage of this technique is that the sizing algorithm is incorporated during the training phase, resulting in faster inference times because the trained ML model 314 directly outputs sized features without needing to apply the sizing algorithm during deployment.
  • FIG. 4B depicts the features of the acoustic imaging data 400 of FIG. 4A with outlines generated by a previously trained machine learning model.
  • the acoustic imaging data 400 is displayed as an NDT image in FIG. 4B.
  • the acoustic imaging data 400 includes the back wall echo 402 and the back wall echo 404 of FIG. 4A.
  • a previously trained ML model such as the trained ML model 314 of FIG. 3, is applied to the acoustic imaging data 400.
  • the previously trained ML model generates outline(s) of the feature(s) in the acoustic imaging data 400.
  • the previously trained ML model generates an outline 406 for the back wall echo 402 and an outline 408 for the back wall echo 404.
  • the system such as the inspection system 100 of FIG. 1, generates an NDT image that includes the outline 406 for the back wall echo 402 and an outline 408 for the back wall echo 404, and displays the acoustic imaging data 400 as an NDT image in FIG. 4B.
  • the system displays, on a user interface, the NDT image shown in FIG. 4B with the outline(s) of the feature(s).
  • FIG. 5 is a flow diagram of an example of a sizing algorithm 500 that may be used to implement various techniques of this disclosure.
  • the sizing algorithm 500 is an example of the sizing algorithm 308 of FIG. 3 and may be implemented using the processor circuit 102 or another processor circuit.
  • the main idea behind the sizing algorithm is, for each feature i: find an adequate threshold Ki based on an amplitude of the signal of the coordinately equivalent region in previous and subsequent end views; erode the feature i to keep only the amplitude values above Ki; and extend the feature to add every contiguous voxel whose value is above Ki.
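  • The erode-and-extend idea may be sketched as follows, using a pure-Python flood fill with 6-connected neighborhoods; the helper name and connectivity choice are assumptions made for illustration.

```python
import numpy as np
from collections import deque

def erode_and_extend(volume, feature_mask, k):
    # 1) Erode: keep only feature voxels whose amplitude is >= k.
    kept = feature_mask & (volume >= k)
    result = kept.copy()
    # 2) Extend: flood-fill to every contiguous voxel with value >= k.
    queue = deque(zip(*np.nonzero(kept)))
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        u, v, w = queue.popleft()
        for du, dv, dw in steps:
            n = (u + du, v + dv, w + dw)
            if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                    and not result[n] and volume[n] >= k):
                result[n] = True
                queue.append(n)
    return result

volume = np.zeros((1, 3, 3))
volume[0, 1, 1] = 1.0    # detected feature voxel
volume[0, 1, 2] = 0.8    # contiguous, above threshold -> added
volume[0, 0, 0] = 0.9    # above threshold but not contiguous -> ignored
mask = np.zeros((1, 3, 3), dtype=bool)
mask[0, 1, 1] = True
refined = erode_and_extend(volume, mask, k=0.5)
```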
  • the end views are similar, at least locally, with respect to structural noise and flaws.
  • the sizing algorithm 500 begins with the input of ultrasound acquisition data A ∈ ℝ^(u×v×w) at block 502 and a model prediction binary mask M ∈ {0,1}^(u×v×w) at block 504, where A is a 3D array of the ultrasound acquisition data and M is a 3D mask array having the same dimensions as the acquisition data and representing the detected features.
  • there is a model prediction binary mask per feature type. For example, there is a first mask for flaws of type 1, a second mask for flaws of type 2, etc., another mask for the front wall echoes, another mask for the back wall echoes, and so on.
  • Different sizing algorithms may then be deployed, each one being specific to a particular feature type.
  • the acquisition data in block 502 represents the raw acoustic imaging data
  • the binary mask at block 504 indicates the initial detection of features, e.g., flaw, back wall echo, front wall echo, within this data.
  • the background matrix A_bg ∈ (ℝ ∪ {NA})^(u×v×w) at block 508 is derived by excluding the features identified in the binary mask M from the acquisition data.
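  • A background matrix of this kind may be sketched by replacing masked feature voxels with NaN (standing in for the NA entries) so that only background amplitudes contribute to noise statistics; the function name is illustrative.

```python
import numpy as np

def background_matrix(a, m):
    # A_bg: acquisition data with detected-feature voxels removed
    # (set to NaN) so only background/noise values remain.
    a_bg = a.astype(float).copy()
    a_bg[m] = np.nan
    return a_bg

a = np.array([[1.0, 2.0],
              [3.0, 100.0]])          # 100.0 is a detected feature
m = np.array([[False, False],
              [False, True]])
a_bg = background_matrix(a, m)
noise = np.nanmean(a_bg)              # mean background amplitude
```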
  • the background matrix is used for isolating noise and non-feature elements in the acoustic imaging data, which aids in the subsequent threshold determination process.
  • the background matrix is used to determine a representation of noise in the acoustic imaging data adjacent to the feature, e.g., flaw, front wall echo, back wall echo, etc.
  • the representation of noise may be a mean value of the amplitude of the signal of a pixel (or voxel), for example.
  • the sizing algorithm 500 proceeds with a series of steps to refine the feature mask.
  • a threshold value of the noise is determined, which is based on the background matrix and the feature's characteristics.
  • the threshold determination process is described below with respect to FIG. 6.
  • this threshold value is used to filter the feature mask, ensuring that only the most relevant data points are retained. The filtering process enhances the accuracy of the feature representation.
  • morphological dilation is applied to the feature mask, which adjusts, e.g., increases, the size of the outline of the feature, such as based on the threshold.
  • the dilation process involves expanding the boundaries of the detected feature to account for any potential underestimation of the feature's size.
  • the dilation process is iterative, and after each iteration, the sizing algorithm 500 determines a threshold at block 516 and uses this threshold to filter the feature mask at block 518.
  • an erosion process compares the amplitude of a signal of a pixel (or a voxel) to the threshold value and either includes the pixel (or the voxel) in the outline of the feature when the amplitude is equal to or greater than the threshold value or excludes the pixel (or the voxel) from the outline of the feature when the amplitude is less than the threshold value.
  • the sizing algorithm 500 obtains, from A and M, the background matrix A_bg. Then, for each detected feature, the sizing algorithm 500 determines a threshold S (shown and described below with respect to FIG. 6), and filters the feature binary mask to keep the pixels (or voxels) with values in A that are greater than threshold S. Then, the sizing algorithm 500 repeats the steps of:
  • Morphological dilation, which is an image processing operation that expands the boundaries of objects in a binary or grayscale image. It works by applying a structuring element to the image, where a pixel is set to the maximum value within the element's neighborhood, making objects appear larger and filling small gaps;
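  • A binary 2-D version of this operation, with a 3×3 square structuring element, may be sketched as below; the 2-D reduction is for brevity only, as the algorithm above operates on 3-D masks.

```python
import numpy as np

def dilate_binary(mask):
    # Binary dilation with a 3x3 structuring element: a pixel becomes
    # True if any pixel in its 3x3 neighborhood is True, i.e. the
    # maximum value within the element's neighborhood.
    h, w = mask.shape
    padded = np.pad(mask, 1, mode="constant", constant_values=False)
    out = np.zeros_like(mask)
    for du in (-1, 0, 1):
        for dv in (-1, 0, 1):
            out |= padded[1 + du : 1 + du + h, 1 + dv : 1 + dv + w]
    return out

mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True
grown = dilate_binary(mask)   # the single pixel grows to a 3x3 block
```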
  • FIG. 6 is a flow diagram of an example of a method 600 for the threshold determination of the sizing algorithm 500 in FIG. 5.
  • the method 600 is an example of the threshold determination of block 510 in FIG. 5 and may be implemented using the processor circuit 102 or another processor circuit. This process is designed to accurately determine the threshold S used for refining the detection of features within the ultrasound acquisition data A ∈ ℝ^(u×v×w).
  • the threshold corresponds to the maximum of 1) a 6 dB drop from the maximum amplitude in each coordinately equivalent region of the p previous and p subsequent end views, and 2) the median of the maximum amplitudes of the coordinately equivalent regions in the p previous and p subsequent end views (a window of length 2×p along the u-axis).
  • the threshold is applied to keep only the amplitudes greater than this 6 dB-based threshold.
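The two candidate thresholds can be combined as follows (a sketch; the per-region bookkeeping of the actual method is simplified to a list of regional maxima, and the 6 dB drop is applied on a linear amplitude scale):

```python
import numpy as np

def region_threshold(region_maxima, drop_db=6.0):
    """Threshold = max of 1) a drop_db fall from the largest regional
    maximum and 2) the median of the regional maxima."""
    drop_candidate = max(region_maxima) * 10 ** (-drop_db / 20.0)
    median_candidate = float(np.median(region_maxima))
    return max(drop_candidate, median_candidate)

# Max amplitude observed in each of the adjacent end views (illustrative)
maxima = [1.0, 0.3, 0.4, 0.35, 0.45]
s = region_threshold(maxima)
# Here the 6 dB drop (about 0.5 of the peak) exceeds the median (0.4)
```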
  • the system performs dilation with a 3-axis kernel using a subvolume around the feature mask, recalculates the threshold using the expanded outline, and applies the threshold. If the feature mask is the same size as in the previous iteration after applying the threshold, the dilation step stops.
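The dilate/re-threshold loop with its stopping condition can be sketched as follows (the dilation and threshold functions are passed in as callables; the 1-D example values are placeholders):

```python
import numpy as np

def refine_until_stable(mask, amplitudes, dilate, compute_threshold, max_iter=50):
    """Repeat: dilate the mask, recompute the threshold over the expanded
    outline, filter by amplitude. Stop when the mask size no longer changes."""
    for _ in range(max_iter):
        grown = dilate(mask)
        s = compute_threshold(grown)
        new_mask = grown & (amplitudes >= s)
        if new_mask.sum() == mask.sum():   # same size as previous iteration
            return new_mask
        mask = new_mask
    return mask

# 1-D illustration: seed at the peak, fixed threshold of 0.5
amps = np.array([0.1, 0.2, 0.6, 0.9, 0.7, 0.2, 0.1])
seed = np.zeros(7, dtype=bool)
seed[3] = True
dilate_1d = lambda m: m | np.roll(m, 1) | np.roll(m, -1)
refined = refine_until_stable(seed, amps, dilate_1d, lambda m: 0.5)
```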
  • the method 600 begins at block 502 with the ultrasound acquisition data A ∈ ℝ^(u×v×w), which represents the acoustic imaging data captured during the ultrasound scan.
  • a model prediction binary mask M ∈ {0,1}^(u×v×w) is used, which indicates the initial detection of features or defects within the acquired acoustic imaging data.
  • the binary mask assists in distinguishing between areas of interest and the background, represented at block 508 by the ultrasound acquisition data without features matrix A_bg ∈ (ℝ ∪ {NA})^(u×v×w), which was determined in FIG. 5.
  • This data set excludes the detected features, allowing for a clearer analysis of the background noise and non-feature elements.
  • the method 600 obtains the feature binary mask F ∈ {0,1}^(u_f×v_f×w_f), corresponding to a subset of the mask M.
  • the feature binary mask is a refined version of the initial binary mask M ∈ {0,1}^(u×v×w), focusing on the specific features detected.
  • the method 600 determines the maximum projection along the u-axis of the feature binary mask F ∈ {0,1}^(u_f×v_f×w_f) to obtain the 2D cross section of the feature mask.
  • the method 600 computes C ∈ {NA, 1}^(v_f×w_f) as the max projection of F along the u-axis to obtain a 2D feature outline: C[j, k] = 1 if at least one voxel in F[:, j, k] is 1, and NA otherwise.
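In NumPy, the max projection reduces to a single reduction along the u-axis (using booleans here in place of the NA/1 encoding):

```python
import numpy as np

# F: a small 3-D feature mask indexed (u, v, w)
F = np.zeros((4, 3, 3), dtype=bool)
F[1, 0, 1] = True
F[2, 1, 1] = True

# C[j, k] is set iff at least one voxel F[:, j, k] is set
C = F.max(axis=0)
```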
  • the method 600 computes N, the noise region surrounding the feature, from the acquisition data background A_bg.
  • the method 600 restrains the noise to the feature outline by performing an element-wise multiplication of C with each 2D slice of N along the u-axis: N[i, j, k] ← N[i, j, k] ⊙ C[j, k] for i ∈ [max(u_f1 − p, 0), min(u_f2 + p, u)], where u_f1 and u_f2 bound the feature along the u-axis.
  • the method 600 determines the max projection of N along the w-axis. At block 618, which is the output of the process of block 616, the method 600 computes the noise region N_cscan as follows: N_cscan[i, j] = max(N[i, j, :]) if there exists a k such that N[i, j, k] ≠ NA, and NA otherwise.
  • the method 600 computes a threshold S using the mean and standard deviation of the defined values in N_cscan.
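One plausible mean/std combination is sketched below. The disclosure does not reproduce the exact formula here, so both the k factor and the S = mean + k·std form are assumptions; NA entries are encoded as NaN:

```python
import numpy as np

def noise_threshold(noise_cscan, k=3.0):
    """Compute S from the mean and standard deviation of the defined
    (non-NA) values of the projected noise region.
    The mean + k*std combination is an assumed, not disclosed, form."""
    vals = noise_cscan[~np.isnan(noise_cscan)]
    return float(vals.mean() + k * vals.std())

noise = np.array([[0.10, 0.20, np.nan],
                  [0.15, 0.25, 0.20]])
s = noise_threshold(noise, k=2.0)
```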
  • the threshold S is the threshold used in block 510 and block 516 of FIG. 5, for example.
  • the object under inspection 702 includes a feature, namely a flaw 704.
  • the NDT image 700 depicts end views 706 before the flaw 704 and end views 708 after the flaw 704.
  • the “empty” areas between the flaw 704 and the end views 706 and the end views 708 represent other flaws that have been removed by the ML model so that the sizing algorithm may focus on one flaw at a time, such as the flaw 704.
  • the sizing algorithm 500 of FIG. 5 determines (at block 506) a representation of the noise in the acoustic imaging data adjacent to the feature, e.g., the flaw 704 of NDT image 700.
  • the sizing algorithm 500 uses the pixels (or voxels) in the end views 706 and the end views 708 adjacent to the flaw 704 to determine the representation of the noise in the acoustic imaging data.
  • FIG. 8 depicts acoustic imaging data displayed as an NDT image 800.
  • the NDT image 800 depicts the flaw 704 of FIG. 7 and the end views 706 of FIG. 7.
  • the NDT image 800 itself is shown as an end view.
  • the sizing algorithm 500 defines the outline of the end views 706 to be similar to the outline of the flaw 704, which was defined by an ML model, such as the previously trained machine learning model 906 in FIG. 9.
  • the sizing algorithm 500 then adjusts the size of the outline of the end views 706 to more precisely match the size of the outline of the flaw 704.
  • a system such as the inspection system 100 of FIG. 1, acquires acoustic imaging data 902 of an object under inspection, such as the target 158 of FIG. 1, with a feature 904, e.g., flaw, back wall echo, front wall echo, etc.
  • the acoustic imaging data 902 is displayed as an NDT image in FIG. 9.
  • the acoustic imaging data 902 includes a plurality of pixels 908 (for 2D images) or voxels (for 3D images). Each pixel in the acoustic imaging data 902 is a representation of an amplitude of the signal that was received by the transducer array 152 of FIG. 1.
  • the system applies the acoustic imaging data 902 to a previously trained machine learning model 906 to generate a rough outline 910, e.g., bounding box, of the feature 904 in the acoustic imaging data 902.
  • the rough outline 910 likely includes quite a few pixels 908 that are not part of the feature 904, where the darker the pixel 908 the more likely the pixel 908 is part of the feature 904.
  • the system applies a sizing algorithm 912 to the outline 910 outputted by the previously trained machine learning model 906 to adjust the size and shape of the outline 910 and generate an adjusted outline 914.
  • the rough outline 910 generated by the previously trained machine learning model 906, shown as a bounding box, has been adjusted so that the new outline 914 more closely resembles the feature 904.
  • the system then generates an NDT image 916 with the adjusted outline 914 of the feature 904.
  • the system then displays on a user interface, such as the display 110 of FIG. 1, the NDT image 916 with the adjusted outline 914 of the feature 904.
  • the system determines and displays one or more dimensions of the feature, such as its width and/or length.
  • the system may determine the dimensions of an outlined feature in the NDT image using metadata processing, for example.
  • the acoustic imaging data includes metadata containing registration information that maps each pixel's (or voxel's) location to physical dimensions, such as millimeters.
  • the system may use this registration metadata to convert the pixel (or voxel) measurements into physical dimensions for display.
  • each pixel column corresponds to a specific position in millimeters based on the registration information stored in the NDT image file format.
  • the system calculates and displays one or more physical dimensions of the feature on the user interface.
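The pixel-to-millimeter conversion can be sketched as follows (the uniform scale factor is an assumption for illustration; real registration metadata may map each column individually):

```python
def pixel_extent_to_mm(first_px, last_px, mm_per_pixel):
    """Convert an inclusive pixel-column extent into a physical length
    using the scale recovered from the registration metadata."""
    return (last_px - first_px + 1) * mm_per_pixel

# A feature spanning columns 120..139 at 0.5 mm per pixel is 10 mm wide
width_mm = pixel_extent_to_mm(120, 139, mm_per_pixel=0.5)
```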
  • statistical measurements may be calculated for the noise regions adjacent to the detected features. These statistics may be displayed on a user interface, which displays one or more of the average amplitude values, median values, and other statistical metrics that help characterize both the features and surrounding noise regions. These statistical measurements are particularly valuable to users who need to analyze the amplitude characteristics within detected features and compare them to the surrounding noise levels.
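The feature-versus-noise statistics described above can be sketched as follows (the array names and values are illustrative):

```python
import numpy as np

def amplitude_stats(amplitudes, feature_mask):
    """Mean and median amplitude inside the feature and in the
    surrounding non-feature (noise) region."""
    feat = amplitudes[feature_mask]
    noise = amplitudes[~feature_mask]
    return {
        "feature_mean": float(feat.mean()),
        "feature_median": float(np.median(feat)),
        "noise_mean": float(noise.mean()),
        "noise_median": float(np.median(noise)),
    }

amps = np.array([[0.9, 0.8, 0.1],
                 [0.7, 0.2, 0.1]])
mask = np.array([[True, True, False],
                 [True, False, False]])
stats = amplitude_stats(amps, mask)
```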
  • FIG. 10 is a flow diagram of an example of a method 1000 of training processing circuitry using machine learning for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection.
  • the method 1000 is an example of the method 300 shown in FIG. 3.
  • the method 1000 includes acquiring acoustic imaging data of a first object having a first feature.
  • a system such as the inspection system 100 acquires the acoustic imaging data 302 of a first object having a feature 304, as seen in FIG. 3.
  • the method 1000 includes outlining the first feature in the acoustic imaging data.
  • the processor circuit 102 of the inspection system 100 of FIG. 1 generates an outline 306 of the feature 304 in the acoustic imaging data 302 of FIG. 3.
  • the method 1000 includes applying a sizing algorithm to the outlined first feature to adjust a size of the outline and generate a ground truth dataset.
  • the processor circuit 102 of the inspection system 100 of FIG. 1 applies the sizing algorithm 308 of FIG. 3, which is described in detail with respect to FIG. 5, to the outline 306 to adjust a size of the outline and generate a ground truth dataset.
  • the method 1000 includes training a machine learning model using the ground truth dataset.
  • trained ML model 314 of FIG. 3 is trained by the processor circuit 102 of the inspection system 100 of FIG. 1 using the ground truth dataset 312.
  • the method 1000 includes applying the trained machine learning model to acoustic imaging data of a second object having a second feature, generating an NDT image with an outline of the second feature generated by the trained machine learning model, and displaying, on a user interface, the NDT image with the outline of the second feature.
  • the method 1000 includes displaying, on the user interface, one or more dimensions of the second feature. In other examples, the method 1000 includes displaying, on the user interface, statistics about the second feature and/or noise.
  • applying the sizing algorithm to the outlined first feature includes determining a representation of noise in the acoustic imaging data adjacent to the first feature, determining a threshold value of the noise, and adjusting, based on the threshold value, the size of the outline of the first feature.
  • the acoustic imaging data includes a sequence of end views of the first object, and wherein determining the representation of the noise in the acoustic imaging data adjacent to the first feature includes determining the representation of noise in the end views adjacent to the first feature.
  • FIG. 11 is a flow diagram of an example of a method 1100 of using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model.
  • the method 1100 is an example of the method 900 shown in FIG. 9.
  • the method 1100 includes acquiring acoustic imaging data of the object having a feature.
  • a system such as the inspection system 100 acquires the acoustic imaging data 902 of an object having a feature 904, as seen in FIG. 9.
  • the method 1100 includes applying the acoustic imaging data to the previously trained machine learning model to generate an outline of the feature in the imaging data.
  • the system applies the acoustic imaging data 902 to the previously trained machine learning model 906 of FIG. 9 to generate an outline 910 of the feature 904 in the acoustic imaging data 902.
  • the method 1100 includes applying the trained machine learning model to acoustic imaging data of a second object having a second feature, generating an NDT image with an outline of the second feature generated by the trained machine learning model, and displaying, on a user interface, the NDT image with the outline of the second feature.
  • applying the sizing algorithm to the outlined first feature includes determining a representation of noise in the acoustic imaging data adjacent to the first feature, determining a threshold value of the noise, and adjusting, based on the threshold value, the size of the outline of the first feature.
  • the method 1100 includes an erosion step in which adjusting, based on the threshold value, the size of the outline of the first feature includes comparing a representation of an amplitude of a signal of a pixel or a voxel to the threshold value, including the pixel or the voxel in the outline of the first feature when the amplitude is equal to or greater than the threshold value, and excluding the pixel or the voxel from the outline of the first feature when the amplitude is less than the threshold value.
  • the acoustic imaging data includes a sequence of end views of the first object, and wherein determining the representation of the noise in the acoustic imaging data adjacent to the first feature includes determining the representation of noise in the end views adjacent to the first feature.
  • FIG. 12 shows an example of a machine learning module 1200 that may implement various techniques of this disclosure.
  • the machine learning module 1200 is an example of the trained ML model 314 of FIG. 3.
  • the machine learning module 1200 may be implemented in whole or in part by one or more computing devices.
  • a training module 1202 may be implemented by a different device than a prediction module 1204.
  • the model 1214 may be created on a first machine, e.g., a desktop computer, and then sent to a second machine, e.g., a handheld device.
  • the training module 1202 inputs training data 1206 into a selector module 1208 that selects a training vector from the training data.
  • the selector module 1208 may include data normalization/standardization and cleaning, such as to remove any useless information.
  • the model 1214 itself may perform aspects of the selector module, such as in the case of gradient boosted trees.
  • the training module may train the machine learning module 1200 on a plurality of flaw or no flaw conditions.
  • the training data 1206 may include, for example, ground truth data.
  • Ground truth data may include synthetic data from simulated flaws and geometry. Aside from synthetic datasets, ground truth labels may also be obtained using other NDT methods, such as radiography, CT scans, and laser surface profiling.
  • the training data 1206 may include one or more of simulations of a plurality of types of material flaws in the material, simulations of a plurality of positions of material flaws in the material, or simulations of a plurality of ghost echoes in the material to simulate no flaw conditions.
  • the training data 1206 may be labeled. In other examples, the training data may not be labeled, and the model may be trained using feedback data — such as through a reinforcement learning method.
  • the selector module 1208 selects a training vector 1210 from the training data 1206.
  • the selected data may fill the training vector 1210 and include a set of the training data that is determined to be predictive of a classification.
  • Information chosen for inclusion in the training vector 1210 may be all the training data 1206 or in some examples, may be a subset of all the training data 1206.
  • the training vector 1210 may be utilized (along with any applicable labels) by the machine learning algorithm 1212 to produce a model 1214 (a trained machine learning model). In some examples, other data structures other than vectors may be used.
  • the machine learning algorithm 1212 may learn one or more layers of a model.
  • Example layers may include convolutional layers, dropout layers, pooling/upsampling layers, SoftMax layers, and the like.
  • Example models may be a neural network, in which each layer comprises a plurality of neurons that take a plurality of inputs, weight the inputs, and pass the weighted inputs through an activation function to produce an output, which may then be sent to another layer.
  • Example activation functions may include a Rectified Linear Unit (ReLu), and the like. Layers of the model may be fully or partially connected.
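A single neuron of the kind described above reduces to a weighted sum followed by an activation (a minimal sketch with made-up weights):

```python
import numpy as np

def neuron_forward(inputs, weights, bias):
    """Weight the inputs, sum with a bias, then apply a ReLU activation."""
    return max(0.0, float(np.dot(inputs, weights) + bias))

x = np.array([1.0, -2.0, 0.5])
w = np.array([0.4, 0.1, 0.8])
y = neuron_forward(x, w, bias=0.1)   # 0.4 - 0.2 + 0.4 + 0.1
```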
  • data 1216 may be input to the selector module 1218.
  • the data 1216 may include an acoustic imaging data set.
  • the selector module 1218 may operate the same, or differently than the selector module 1208 of the training module 1202. In some examples, the selector modules 1208 and 1218 are the same modules or different instances of the same module.
  • the selector module 1218 produces a vector 1220, which is input into the model 1214 to generate an output NDT image of the specimen, resulting in an image 1222.
  • the weightings and/or network structure learned by the training module 1202 may be executed on the vector 1220 by applying vector 1220 to a first layer of the model 1214 to produce inputs to a second layer of the model 1214, and so on until the image is output.
  • other data structures may be used other than a vector (e.g., a matrix).
  • the training module may train the machine learning module 1200 on a plurality of flaw or no flaw conditions, such as described above.
  • the training module 1202 may operate in an offline manner to train the model 1214.
  • the prediction module 1204, however, may be designed to operate in an online manner. It should be noted that the model 1214 may be periodically updated via additional training and/or user feedback. For example, additional training data 1206 may be provided to refine the model by the training module 1202.
  • the machine learning algorithm 1212 may be selected from among many different potential supervised or unsupervised machine learning algorithms.
  • Example supervised learning algorithms include artificial neural networks, convolutional neural networks, Bayesian networks, instance-based learning, support vector machines, decision trees (e.g., Iterative Dichotomiser 3, C4.5, Classification and Regression Tree (CART), Chi-squared Automatic Interaction Detector (CHAID), and the like), random forests, linear classifiers, quadratic classifiers, k-nearest neighbor, linear regression, logistic regression, a region-based CNN, a full CNN (for semantic segmentation), a mask R-CNN algorithm for instance segmentation, and hidden Markov models.
  • Example unsupervised learning algorithms include expectation-maximization algorithms, vector quantization, and the information bottleneck method.
  • the machine learning module 1200 of FIG. 12 may assist in training processing circuitry for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection and, when trained, using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection, in accordance with this disclosure.
  • FIG. 13 illustrates a block diagram of an example of a machine 1300 upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform.
  • the machine 1300 may operate as a standalone device or may be connected (e.g., networked) to other machines.
  • the machine 1300 may operate in the capacity of a server machine, a client machine, or both in server-client network environments.
  • the machine 1300 may act as a peer machine in peer-to-peer (P2P) (or other distributed) network environment.
  • the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), and other computer cluster configurations.
  • Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms (all referred to hereinafter as "modules").
  • the term "module" is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein.
  • each of the modules need not be instantiated at any one moment in time.
  • where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different modules at different times.
  • Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.
  • Machine 1300 may include a hardware processor 1302 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 1304, and a static memory 1306, some or all of which may communicate with each other via an interlink 1308 (e.g., bus).
  • the machine 1300 may further include a display unit 1310, an alphanumeric input device 1312 (e.g., a keyboard), and a user interface (UI) navigation device 1314 (e.g., a mouse).
  • in an example, the display unit 1310, input device 1312, and UI navigation device 1314 may be a touch screen display.
  • the machine 1300 may additionally include a storage device (e.g., drive unit) 1316, a signal generation device 1318 (e.g., a speaker), a network interface device 1320, and one or more sensors 1321, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor.
  • the machine 1300 may include an output controller 1328, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).
  • the storage device 1316 may include a machine readable medium 1322 on which is stored one or more sets of data structures or instructions 1324 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein.
  • the instructions 1324 may also reside, completely or at least partially, within the main memory 1304, within static memory 1306, or within the hardware processor 1302 during execution thereof by the machine 1300.
  • one or any combination of the hardware processor 1302, the main memory 1304, the static memory 1306, or the storage device 1316 may constitute machine readable media.
  • while the machine readable medium 1322 is illustrated as a single medium, the term "machine readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 1324.
  • the term "machine readable medium" may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 1300 and that cause the machine 1300 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions.
  • Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media.
  • specific examples of machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; Random Access Memory (RAM); Solid State Drives (SSD); and CD-ROM and DVD-ROM disks.
  • the instructions 1324 may further be transmitted or received over a communications network 1326 using a transmission medium via the network interface device 1320.
  • the machine 1300 may communicate with one or more other machines utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.).
  • Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, a Long Term Evolution (LTE) family of standards, a Universal Mobile Telecommunications System (UMTS) family of standards, peer-to-peer (P2P) networks, among others.
  • the network interface device 1320 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 1326.
  • the network interface device 1320 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques.
  • the network interface device 1320 may wirelessly communicate using Multiple User MIMO techniques.
  • Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms.
  • Modules are tangible entities (e.g., hardware) capable of performing specified operations and are configured or arranged in a certain manner.
  • in an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module.
  • in an example, the whole or part of one or more computer systems (e.g., a standalone, client, or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations.
  • the software may reside on a machine-readable medium.
  • the software when executed by the underlying hardware of the module, causes the hardware to perform the specified operations.
  • Various embodiments are implemented fully or partially in software and/or firmware.
  • This software and/or firmware may take the form of instructions contained in or on a non-transitory computer-readable storage medium. Those instructions may then be read and executed by one or more processors to enable performance of the operations described herein.
  • the instructions are in any suitable form, such as but not limited to source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like.
  • Such a computer-readable medium may include any tangible non-transitory medium for storing information in a form readable by one or more computers, such as but not limited to read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory; etc.
  • Method examples described herein may be machine or computer-implemented at least in part. Some examples may include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples.
  • An implementation of such methods may include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code may include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, in an example, the code may be tangibly stored on one or more volatile, non-transitory, or nonvolatile tangible computer-readable media, such as during execution or at other times.
  • tangible computer-readable media may include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact discs and digital video discs), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like.


Abstract

Techniques are described for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection. In some examples, the techniques introduce a two-step approach that combines machine learning with a precise sizing algorithm. For example, a machine learning model is initially trained to perform coarse detection of features in the imaging data. This model is capable of identifying potential areas of interest, yielding a preliminary outline of the features. A deterministic sizing algorithm is then applied to refine these outlines, ensuring accurate measurement of the features' dimensions. This approach not only improves the accuracy of feature sizing but also allows greater flexibility in adapting to various imaging scenarios. By leveraging the advantages of both machine learning algorithms and deterministic algorithms, the techniques offer a comprehensive solution for improving the accuracy and reliability of feature detection and sizing in NDT images.
PCT/CA2025/050364 2024-03-18 2025-03-17 Feature sizing for acoustic inspection Pending WO2025194253A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202463566715P 2024-03-18 2024-03-18
US63/566,715 2024-03-18

Publications (1)

Publication Number Publication Date
WO2025194253A1 (fr) 2025-09-25

Family

ID=97138254

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2025/050364 Pending WO2025194253A1 (fr) 2024-03-18 2025-03-17 Feature sizing for acoustic inspection

Country Status (1)

Country Link
WO (1) WO2025194253A1 (fr)

Similar Documents

Publication Publication Date Title
US11467128B2 (en) Defect detection using ultrasound scan data
CN110074813B Ultrasound image reconstruction method and system
US12352727B2 (en) Acoustic imaging techniques using machine learning
EP3637099A1 Method of image reconstruction based on a trained non-linear mapping
US20240329000A1 (en) Flaw classification during non-destructive testing
Wang et al. The aircraft skin crack inspection based on different-source sensors and support vector machines
CN113570594B Method, device and storage medium for monitoring target tissue in ultrasound images
Fuentes et al. Autonomous ultrasonic inspection using Bayesian optimisation and robust outlier analysis
CN113887454A Non-contact laser ultrasonic testing method based on convolutional neural network point-source identification
JP2018179968A Defect detection using ultrasound scan data
Molinier et al. Ultrasonic imaging using conditional generative adversarial networks
US11906468B2 (en) Acoustic profiling techniques for non-destructive testing
US20230360225A1 (en) Systems and methods for medical imaging
CN115389514A Material defect detection method and device
EP4612522A1 Non-destructive testing (NDT) flaw and anomaly detection
WO2025194253A1 Feature sizing for acoustic inspection
CN120314458A Method, device, equipment and storage medium for identifying internal defects in steel plates
US12153132B2 (en) Techniques to reconstruct data from acoustically constructed images using machine learning
Sutcliffe et al. Automatic defect recognition of single-v welds using full matrix capture data, computer vision and multi-layer perceptron artificial neural networks
RU2411468C1 Method for estimating a quantitative characteristic of a probed land surface
CN118794948A Polyurethane waterproof coating construction quality inspection method, medium and system
WO2025145247A1 Non-destructive test imaging using machine learning
JP2022142569A Image evaluation system and image evaluation method
JP7596545B2 Acoustic-influence-map-based defect size imaging
WO2024221099A1 Image-to-image translation for acoustic inspection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25772642

Country of ref document: EP

Kind code of ref document: A1