
WO2025194253A1 - Feature size estimation for acoustic inspection - Google Patents

Feature size estimation for acoustic inspection

Info

Publication number
WO2025194253A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature
outline
imaging data
sizing
machine learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CA2025/050364
Other languages
French (fr)
Inventor
Angélique BOUCHARD
Ivan C. Kraljic
Guillaume Painchaud-April
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Evident Canada Inc
Original Assignee
Evident Canada Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Evident Canada Inc filed Critical Evident Canada Inc
Publication of WO2025194253A1 publication Critical patent/WO2025194253A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
        • G01 MEASURING; TESTING
            • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
                • G01N 29/00 Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
                    • G01N 29/44 Processing the detected response signal, e.g. electronic circuits specially adapted therefor
                        • G01N 29/4481 Neural networks
                    • G01N 29/04 Analysing solids
                        • G01N 29/043 Analysing solids in the interior, e.g. by shear waves
                        • G01N 29/06 Visualisation of the interior, e.g. acoustic microscopy
                            • G01N 29/0654 Imaging
                                • G01N 29/069 Defect imaging, localisation and sizing using, e.g. time of flight diffraction [TOFD], synthetic aperture focusing technique [SAFT], Amplituden-Laufzeit-Ortskurven [ALOK] technique
                    • G01N 29/22 Details, e.g. general constructional or apparatus details
                        • G01N 29/26 Arrangements for orientation or scanning by relative movement of the head and the sensor
                            • G01N 29/262 Arrangements for orientation or scanning by relative movement of the head and the sensor by electronic orientation or focusing, e.g. with phased arrays
                • G01N 2291/00 Indexing codes associated with group G01N29/00
                    • G01N 2291/04 Wave modes and trajectories
                        • G01N 2291/044 Internal reflections (echoes), e.g. on walls or defects
        • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
            • G01S 15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
                • G01S 15/88 Sonar systems specially adapted for specific applications
                    • G01S 15/89 Sonar systems specially adapted for specific applications for mapping or imaging
                        • G01S 15/8906 Short-range imaging systems; Acoustic microscope systems using pulse-echo techniques
                            • G01S 15/8909 Short-range imaging systems; Acoustic microscope systems using pulse-echo techniques using a static transducer configuration
                                • G01S 15/8915 Short-range imaging systems; Acoustic microscope systems using pulse-echo techniques using a static transducer configuration using a transducer array

Definitions

  • This document pertains generally, but not by way of limitation, to imaging techniques for use in Non-Destructive Testing (NDT), and more particularly, to analysis of phased array (PA) acoustic inspection images, where such analysis may include identification or feature size estimation (or both) performed using a machine learning technique.
  • Another approach for NDT may include use of an acoustic inspection technique, such as where one or more electroacoustic transducers are used to insonify a region on or within the object under test, and acoustic energy that is scattered or reflected may be detected and processed. Such scattered or reflected energy may be referred to as an acoustic echo signal.
  • an acoustic inspection scheme involves use of acoustic frequencies in an ultrasonic range of frequencies, such as including pulses having energy in a specified range of values from, for example, a few hundred kilohertz to tens of megahertz, as an illustrative example.
  • This disclosure describes techniques for detecting and sizing a feature in a nondestructive testing (NDT) image of an object under inspection.
  • this disclosure describes techniques that introduce a two-step approach that combines machine learning with a precise sizing algorithm. For example, a machine learning model is initially trained to perform a rough detection of features within the imaging data. This model is capable of identifying potential areas of interest, thereby providing a preliminary outline of the features. Subsequently, a deterministic sizing algorithm is applied to refine these outlines, ensuring accurate measurement of the feature dimensions. This approach not only enhances the precision of feature sizing but also allows for greater flexibility in adapting to various imaging scenarios. By leveraging the strengths of both machine learning and deterministic algorithms, the described method offers a comprehensive solution for improving the accuracy and reliability of feature detection and sizing in NDT images.
  • this disclosure is directed to a method of training processing circuitry using machine learning for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection, the method comprising: acquiring acoustic imaging data of a first object having a first feature; outlining the first feature in the acoustic imaging data; applying a sizing algorithm to the outlined first feature to adjust a size of the outline and generate a ground truth dataset; and training a machine learning model using the ground truth dataset.
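  • a minimal sketch of this training flow follows (in Python; rough_outline, sizing_algorithm, and train_segmentation_model are hypothetical stand-ins for the operator annotation, the sizing algorithm of FIG. 5, and any supervised learner, and are not names from this disclosure):

        def build_ground_truth(images, rough_outline, sizing_algorithm):
            """Turn rough feature outlines into precise ground-truth masks."""
            dataset = []
            for img in images:
                mask = rough_outline(img)           # e.g., operator bounding box
                mask = sizing_algorithm(img, mask)  # refine per NDT sizing criteria
                dataset.append((img, mask))
            return dataset

        # ground_truth = build_ground_truth(scans, rough_outline, sizing_algorithm)
        # model = train_segmentation_model(ground_truth)  # any supervised learner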
  • this disclosure is directed to a method of using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model, the method comprising: acquiring acoustic imaging data of the object having a feature; applying the acoustic imaging data to the previously trained machine learning model to generate an outline of the feature in the imaging data; applying a sizing algorithm to the outlined feature to adjust the outline of the feature outputted by the previously trained machine learning model; generating an NDT image with the adjusted outline of the feature; and displaying, on a user interface, the NDT image with the adjusted outline of the feature.
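  • at inference time the previously trained model replaces the human annotator: it proposes a rough outline and the deterministic sizing algorithm refines it before display. A sketch under the same assumptions as above (model.predict, sizing_algorithm, and overlay_outline are hypothetical helpers):

        def detect_and_size(image, model, sizing_algorithm):
            """Apply the trained model, then refine its outline deterministically."""
            rough = model.predict(image)              # rough mask of the feature
            refined = sizing_algorithm(image, rough)  # adjusted outline
            return refined

        # refined = detect_and_size(ndt_image, trained_model, sizing_algorithm)
        # show(overlay_outline(ndt_image, refined))   # hypothetical UI call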
  • FIG. 1 illustrates generally an example comprising an acoustic inspection system, such as may be used to perform one or more techniques described herein.
  • FIG. 2A is an image of an example of a non-destructive testing (NDT) image of an object under inspection that includes a flaw.
  • FIG. 2B is the NDT image of FIG. 2A including an outline generated by a generic machine learning (ML) model to depict a size of the flaw.
  • FIG. 2C is the NDT image of FIG. 2A including an outline generated by an untrained human to depict the size of the flaw.
  • FIG. 3 is a flow diagram of an example of a method 300 of training processing circuitry using machine learning for detecting and sizing a feature in an NDT image of an object under inspection.
  • FIG. 4A depicts acoustic imaging data of another object under inspection.
  • FIG. 4B depicts the features of the acoustic imaging data of FIG. 4A with outlines generated by a previously trained machine learning model.
  • FIG. 7 depicts acoustic imaging data displayed as an NDT image of an object under inspection.
  • FIG. 8 depicts acoustic imaging data displayed as an NDT image.
  • FIG. 9 is a flow diagram of an example of a method of using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model.
  • FIG. 10 is a flow diagram of an example of a method of training processing circuitry using machine learning for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection.
  • FIG. 11 is a flow diagram of an example of a method of using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model.
  • FIG. 12 shows an example of a machine learning module that may implement various techniques of this disclosure.
  • FIG. 13 illustrates a block diagram of an example of a machine upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform.
  • Acoustic testing, such as ultrasound-based inspection, may include focusing or beam-forming techniques to aid in construction of data plots or images representing a region of interest within the test object.
  • Use of an array of ultrasound transducer elements may include use of a phased-array beamforming approach and may be referred to as Phased Array Ultrasound Testing (PAUT).
  • a delay-and-sum beamforming technique may be used such as including coherently summing time-domain representations of received acoustic signals from respective transducer elements or apertures.
  • a “Full Matrix Capture” (FMC) technique may be used where one or more elements in an array (or apertures defined by such elements) are used to transmit an acoustic pulse and other elements are used to receive scattered or reflected acoustic energy, and a matrix is constructed of time-series (e.g., A-Scan) representations corresponding to a sequence of transmit-receive cycles in which the transmissions are occurring from different elements (or corresponding apertures) in the array.
  • Beamforming and imaging may be performed using a technique such as a “Total Focusing Method” (TFM), in which a coherent summation may be performed using A-scan data acquired using an FMC technique.
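  • as an illustration of the TFM summation described above, the sketch below (assuming a 2D medium with constant sound speed, a linear array, and FMC data indexed as fmc[tx, rx, sample]; all names and simplifications are illustrative, not this disclosure's implementation) coherently sums every transmit-receive A-scan at the round-trip time of flight for each image pixel:

        import numpy as np

        def tfm_image(fmc, elem_x, xs, zs, c, fs):
            """Delay-and-sum TFM over FMC data.

            fmc    : (n_elem, n_elem, n_samples) array; fmc[tx, rx, :] is one A-scan
            elem_x : (n_elem,) element x-positions in meters
            xs, zs : 1D arrays of image pixel coordinates in meters
            c      : sound speed in m/s; fs : sampling frequency in Hz
            """
            n_samples = fmc.shape[2]
            image = np.zeros((len(zs), len(xs)))
            for iz, z in enumerate(zs):
                for ix, x in enumerate(xs):
                    d = np.sqrt((elem_x - x) ** 2 + z ** 2)  # element -> pixel
                    # Round-trip time of flight tx -> pixel -> rx, in samples.
                    t = ((d[:, None] + d[None, :]) / c * fs).astype(int)
                    valid = t < n_samples
                    tx, rx = np.nonzero(valid)
                    # Coherent summation over all transmit-receive pairs.
                    image[iz, ix] = fmc[tx, rx, t[valid]].sum()
            return image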
  • a phase-based approach may be used for one or more of acquisition, storage, or subsequent analysis.
  • the phase-based approach may include coherent summation of normalized or quantized representations of A-Scan data corresponding to phase information, and may be referred to as phase coherence imaging (PCI).
  • accurately detecting and sizing features such as flaws or other features in imaging data plays an important role in assessing the integrity of materials and structures.
  • analysis of phased-array acoustic inspection (PA) imaging data involves human inspectors identifying features and manually applying known criteria to measure the size of features (e.g., using an approach such as a -6 dB method). Relying on manual inspection and interpretation of images may be time-consuming and prone to human error.
  • Some solutions incorporate automated algorithms to assist in the detection and sizing of features. The present inventors have recognized that these methods often fall short in terms of precision and reliability. For example, some solutions struggle with accurately delineating the boundaries of features, particularly in complex or noisy environments.
  • the present inventors have recognized a need for a more robust and flexible approach that may improve the accuracy and efficiency of feature detection and sizing in NDT applications.
  • the present inventors have recognized, among other things, that manual detection may be augmented or replaced such as by using automated identification and sizing of features with a combination of artificial intelligence (e.g., machine learning) and non-AI-based algorithmic approaches.
  • FIG. 1 illustrates generally an example comprising an acoustic inspection system 100, such as may be used to perform one or more techniques described herein.
  • the acoustic inspection system 100 of FIG. 1 is an example of an acoustic imaging modality, such as an acoustic phased array system, that may implement various techniques of this disclosure.
  • the inspection system 100 may include a test instrument 140, such as a hand-held or portable assembly.
  • the test instrument 140 may be electrically coupled to a probe assembly, such as using a multi-conductor interconnect 130.
  • the probe assembly 150 may include one or more electroacoustic transducers, such as a transducer array 152 including respective transducers 154A through 154N.
  • the transducer array may follow a linear or curved contour, or may include an array of elements extending in two axes, such as providing a matrix of transducer elements.
  • the elements need not be square in footprint or arranged along a straight-line axis. Element size and pitch may be varied according to the inspection application.
  • a modular probe assembly 150 configuration may be used, such as to allow a test instrument 140 to be used with various different probe assemblies 150.
  • the transducer array 152 includes piezoelectric transducers, such as may be acoustically coupled to a target 158 (e.g., an object under test) through a coupling medium 156.
  • the coupling medium may include a fluid or gel or a solid membrane (e.g., an elastomer or other polymer material), or a combination of fluid, gel, or solid structures.
  • an acoustic transducer assembly may include a transducer array coupled to a wedge structure comprising a rigid thermoset polymer having known acoustic propagation characteristics (for example, Rexolite® available from C-Lec Plastics Inc.), and water may be injected between the wedge and the structure under test as a coupling medium 156 during testing.
  • a rigid thermoset polymer having known acoustic propagation characteristics (for example, Rexolite® available from C-Lec Plastics Inc.)
  • water may be injected between the wedge and the structure under test as a coupling medium 156 during testing.
  • the test instrument 140 may include digital and analog circuitry, such as a frontend circuit 122 including one or more transmit signal chains, receive signal chains, or switching circuitry (e.g., transmit/receive switching circuitry).
  • the transmit signal chain may include amplifier and filter circuitry, such as to provide transmit pulses for delivery through an interconnect 130 to a probe assembly 150 for insonification of the target 158, such as to image or otherwise detect a flaw 160 on or within the target 158 structure by receiving scattered or reflected acoustic energy elicited in response to the insonification.
  • a test protocol may be performed using coordination between multiple test instruments 140, such as in response to an overall test scheme established from a master test instrument 140, or established by another remote system such as a computing facility 108 or a general-purpose computing device such as a laptop 132, tablet, smart-phone, desktop computer, or the like.
  • the test scheme may be established according to a published standard or regulatory requirement and may be performed upon initial fabrication or on a recurring basis for ongoing surveillance, as illustrative examples.
  • the receive signal chain of the front-end circuit 122 may include one or more filters or amplifier circuits, along with an analog-to-digital conversion facility, such as to digitize echo signals received using the probe assembly 150. Digitization may be performed coherently, such as to provide multiple channels of digitized data aligned or referenced to each other in time or phase.
  • the front-end circuit 122 may be coupled to and controlled by one or more processor circuits, such as a processor circuit 102 included as a portion of the test instrument 140.
  • the processor circuit 102 may be coupled to a memory circuit, such as to execute instructions that cause the test instrument 140 to perform one or more of acoustic transmission, acoustic acquisition, processing, or storage of data relating to an acoustic inspection, or to otherwise perform techniques as shown and described herein.
  • the test instrument 140 may be communicatively coupled to other portions of the system 100, such as using a wired or wireless communication interface 120.
  • performance of one or more techniques as shown and described in this disclosure may be accomplished on-board the test instrument 140 or using other processing or storage facilities such as using a computing facility 108 or a general-purpose computing device such as a laptop 132, tablet, smart-phone, desktop computer, or the like.
  • processing tasks that would be undesirably slow if performed on-board the test instrument 140 or beyond the capabilities of the test instrument 140 may be performed remotely (e.g., on a separate system), such as in response to a request from the test instrument 140.
  • storage of imaging data or intermediate data such as A-line matrices of time-series data may be accomplished using remote facilities communicatively coupled to the test instrument 140.
  • the test instrument may include a display 110, such as for presentation of configuration information or results, and an input device 112 such as including one or more of a keyboard, trackball, function keys or soft keys, mouse-interface, touch-screen, stylus, or the like, for receiving operator commands, configuration information, or responses to queries.
  • the acoustic inspection system 100 may acquire acoustic imaging data, such as FMC data or virtual source aperture (VSA) data, of a material using an acoustic imaging modality, such as an acoustic phased array system.
  • phased array linear scanning is another technique that may be used to acquire acoustic imaging data, where phased array linear scanning includes one A-scan per aperture along the array, and the aperture is shifted sequentially to produce a discrete set of contiguous A-scans.
  • the processor circuit 102 may then generate an acoustic imaging data set, such as a scattering matrix (S-matrix), plane wave matrix, or other matrix or data set, corresponding to an acoustic propagation mode, such as pulse echo direct (TT), self-tandem (TT-T), and/or pulse echo with skip (TT-TT).
  • the processor circuit 102 or another processor circuit may be trained using machine learning so as to detect and size a feature in a nondestructive testing (NDT) image of an object under inspection. Additionally or alternatively, the processor circuit 102 or another processor circuit may be used for detecting and sizing a feature in NDT images of an object under inspection using a previously trained machine learning model.
  • the techniques shown and described in this document may be performed using a portion or an entirety of an inspection system 100 as shown in FIG. 1 or otherwise using a machine 1300 as discussed below in relation to FIG. 13.
  • FIG. 2A is an image of an example of a non-destructive testing (NDT) image 200 of an object under inspection that includes a feature, namely a flaw 202.
  • FIG. 2B is the NDT image 200 of FIG. 2A including an outline 204 generated by a generic machine learning (ML) model to depict a size of the flaw 202.
  • FIG. 2C is the NDT image 200 of FIG. 2A including an outline 206 generated by an untrained human to depict the size of the flaw 202.
  • Training an ML model to detect flaws is done with supervised learning. Typically, a human annotator draws bounding boxes around each flaw, or roughly segments each flaw by hand or with a pre-trained ML model.
  • FIG. 3 is a flow diagram of an example of a method 300 of training processing circuitry using machine learning for detecting and sizing a feature in an NDT image of an object under inspection.
  • accurate sizing is performed before training the AI. A system, such as the inspection system 100 of FIG. 1, acquires acoustic imaging data 302 of an object under inspection, such as the target 158 of FIG. 1, with a feature 304, e.g., a flaw, back wall echo, front wall echo, etc.
  • the acoustic imaging data 302 is shown as an NDT image and includes a plurality of pixels 316 (for 2D images) or voxels (for 3D images).
  • a pixel is the smallest unit of a digital 2D image or display, representing a single point of color or intensity in a grid.
  • a voxel is the 3D equivalent of a pixel, representing a discrete unit of volume in a three-dimensional space.
  • Each pixel in the acoustic imaging data 302 is a representation of an amplitude of the signal that was received by the transducer array 152 of FIG. 1.
  • the method 300 includes outlining the feature 304 in the imaging data with an outline 306.
  • a human operator/inspector generates the rough outline 306, e.g., a bounding box, around the feature 304 in the acoustic imaging data 302.
  • an algorithm or automated detection system generates the rough outline 306.
  • This initial outline 306 serves as a starting point for more precise outlines.
  • the method then applies a sizing algorithm 308 to the outline 306 of the flaw to adjust its size and create a ground truth dataset.
  • the sizing algorithm 308 analyzes individual pixels (or voxels).
  • the sizing algorithm 308 compares the representation of an amplitude of the signal of each pixel (or voxel) to a threshold value.
  • a pixel (or voxel) is included in the outline 306 of the feature 304 when the amplitude of the signal associated with pixel 316 equals or exceeds the threshold value, and is excluded when the amplitude falls below the threshold value.
  • the sizing algorithm 308 adjusts the rough outline 306 to generate a precise outline 310, or accurate ground truth data, according to NDT sizing criteria.
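  • expressed in code, the per-pixel rule of the preceding paragraphs reduces to a single vectorized comparison; a minimal sketch in Python with NumPy (array names are illustrative, not this disclosure's code):

        def refine_outline(amplitude, rough, threshold):
            """Keep only pixels (or voxels) inside the rough outline whose
            amplitude meets or exceeds the sizing threshold; amplitude and
            rough are NumPy arrays of equal shape."""
            return rough & (amplitude >= threshold)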
  • the precise outline 310, along with other ground truth data such as other precise outlines, forms the ground truth dataset 312 that is used to train an ML model.
  • the trained ML model 314 is trained using the ground truth dataset 312.
  • the trained ML model 314 is configured to detect features in NDT images and, in some examples, size those features.
  • One advantage of this technique is that the sizing algorithm is incorporated during the training phase, resulting in faster inference times because the trained ML model 314 directly outputs sized features without needing to apply the sizing algorithm during deployment.
  • FIG. 4B depicts the features of the acoustic imaging data 400 of FIG. 4A with outlines generated by a previously trained machine learning model.
  • the acoustic imaging data 400 is displayed as an NDT image in FIG. 4B.
  • the acoustic imaging data 400 includes the back wall echo 402 and the back wall echo 404 of FIG. 4A.
  • a previously trained ML model such as the trained ML model 314 of FIG. 3, is applied to the acoustic imaging data 400.
  • the previously trained ML model generates outline(s) of the feature(s) in the acoustic imaging data 400.
  • the previously trained ML model generates an outline 406 for the back wall echo 402 and an outline 408 for the back wall echo 404.
  • the system such as the inspection system 100 of FIG. 1, generates an NDT image that includes the outline 406 for the back wall echo 402 and an outline 408 for the back wall echo 404, and displays the acoustic imaging data 400 as an NDT image in FIG. 4B.
  • the system displays, on a user interface, the NDT image shown in FIG. 4B with the outline(s) of the feature(s).
  • FIG. 5 is a flow diagram of an example of a sizing algorithm 500 that may be used to implement various techniques of this disclosure.
  • the sizing algorithm 500 is an example of the sizing algorithm 308 of FIG. 3 and may be implemented using the processor circuit 102 or another processor circuit.
  • the main idea behind the sizing algorithm is, for each detected feature i, to find an adequate threshold K_i based on the amplitude of the signal in the coordinately equivalent region in previous and subsequent end views, erode feature i to keep only the amplitude values above K_i, and extend the feature to add every contiguous voxel whose value is above K_i.
  • the end views are similar, at least locally, with respect to structural noise and flaws.
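  • a sketch of that erode-then-extend step, using SciPy's connected-component labeling as a stand-in for the contiguity test (the per-feature threshold K is assumed to have been determined already; all names are illustrative):

        import numpy as np
        from scipy import ndimage

        def erode_and_extend(A, feature_mask, K):
            """Erode a feature to voxels above K, then grow it along every
            contiguous above-threshold voxel."""
            seed = feature_mask & (A > K)      # erosion: keep strong voxels only
            above = A > K                      # candidate voxels for extension
            labels, _ = ndimage.label(above)   # connected components
            kept = np.unique(labels[seed])     # components touching the seed
            kept = kept[kept != 0]             # label 0 is background
            return np.isin(labels, kept)       # extended feature mask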
  • the sizing algorithm 500 begins with the input of ultrasound acquisition data A ∈ ℝ^(u×v×w) at block 502 and a model prediction binary mask M ∈ {0,1}^(u×v×w) at block 504, where A is a 3D array of the ultrasound acquisition data and M is a 3D mask array having the same dimensions as the acquisition data and representing the detected features.
  • there is a model prediction binary mask per feature type. For example, there is a first mask for flaws of type 1, a second mask for flaws of type 2, etc., another mask for the front wall echoes, another mask for the back wall echoes, and so on.
  • Different sizing algorithms may then be deployed, each one being specific to a particular feature type.
  • the acquisition data in block 502 represents the raw acoustic imaging data
  • the binary mask at block 504 indicates the initial detection of features, e.g., flaw, back wall echo, front wall echo, within this data.
  • the background matrix A_bg ∈ (ℝ ∪ {NA})^(u×v×w) at block 508 is derived by excluding the features identified in the binary mask M from the acquisition data.
  • the background matrix is used for isolating noise and non-feature elements in the acoustic imaging data, which aids in the subsequent threshold determination process.
  • the background matrix is used to determine a representation of noise in the acoustic imaging data adjacent to the feature, e.g., flaw, front wall echo, back wall echo, etc.
  • the representation of noise may be a mean value of the amplitude of the signal of a pixel (or voxel), for example.
  • the sizing algorithm 500 proceeds with a series of steps to refine the feature mask.
  • a threshold value of the noise is determined, which is based on the background matrix and the feature's characteristics.
  • the threshold determination process is described below with respect to FIG. 6.
  • this threshold value is used to filter the feature mask, ensuring that only the most relevant data points are retained. The filtering process enhances the accuracy of the feature representation.
  • morphological dilation is applied to the feature mask, which adjusts, e.g., increases, the size of the outline of the feature, such as based on the threshold.
  • the dilation process involves expanding the boundaries of the detected feature to account for any potential underestimation of the feature's size.
  • the dilation process is iterative, and after each iteration, the sizing algorithm 500 determines a threshold at block 516 and uses this threshold to filter the feature mask at block 518.
  • an erosion process compares the amplitude of a signal of a pixel (or a voxel) to the threshold value and either includes the pixel (or the voxel) in the outline of the feature when the amplitude is equal to or greater than the threshold value or excludes the pixel (or the voxel) from the outline of the feature when the amplitude is less than the threshold value.
  • the sizing algorithm 500 obtains, from A and M, the background matrix A_bg. Then, for each detected feature, the sizing algorithm 500 determines a threshold S (shown and described below with respect to FIG. 6), and filters the feature binary mask to keep the pixels (or voxels) with values in A that are greater than threshold S. Then, the sizing algorithm 500 repeats the steps of:
  • Morphological dilation, which is an image processing operation that expands the boundaries of objects in a binary or grayscale image. It works by applying a structuring element to the image, where a pixel is set to the maximum value within the element's neighborhood, making objects appear larger and filling small gaps; followed by determining a new threshold and filtering the feature mask with it, until the mask no longer changes size (a short dilation sketch follows this item, and the full iterative loop is sketched after the stopping criterion below).
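  • for binary masks this operation is available off the shelf; a short illustration with SciPy (the 3×3×3 structuring element is an assumption matching the 3-axis kernel mentioned below):

        import numpy as np
        from scipy import ndimage

        mask = np.zeros((5, 5, 5), dtype=bool)
        mask[2, 2, 2] = True
        # Each voxel takes the maximum value over the structuring element's
        # neighborhood, so the single True voxel grows into a 3x3x3 block.
        dilated = ndimage.binary_dilation(mask, structure=np.ones((3, 3, 3), dtype=bool))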
  • FIG. 6 is a flow diagram of an example of a method 600 for the threshold determination of the sizing algorithm 500 in FIG. 5.
  • the method 600 is an example of the threshold determination of block 510 in FIG. 5 and may be implemented using the processor circuit 102 or another processor circuit. This process is designed to accurately determine the threshold S used for refining the detection of features within ultrasound acquisition data A ∈ ℝ^(u×v×w).
  • the threshold corresponds to a maximum between 1) a 6 dB drop from the maximum amplitude in each coordinately equivalent region in the p previous and p subsequent end views, and 2) a median of the maximum amplitude of each coordinately equivalent region in the p previous and p subsequent end views of length 2 × p along the u-axis.
  • the threshold is then applied to keep only the amplitudes above it.
  • the system performs dilation with a 3-axis kernel using a subvolume around the feature mask, recalculates the threshold using the expanded outline, and applies the threshold. If the feature mask is the same size as in the previous iteration after applying the threshold, the dilation step stops.
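  • putting the loop together, the sketch below dilates, re-derives the threshold from the end views, filters, and stops once the mask size no longer changes. threshold_from_end_views implements the max-of-(6 dB drop, median of maxima) rule under assumed conventions (the u scan axis is axis 0; −6 dB is taken as half the peak amplitude); every name here is illustrative rather than this disclosure's code:

        import numpy as np
        from scipy import ndimage

        def threshold_from_end_views(A, mask, p):
            """Threshold = max(6 dB below peak, median of per-end-view maxima)
            over the p previous and p subsequent end views along the u-axis."""
            us = np.nonzero(mask.any(axis=(1, 2)))[0]   # u-extent of the feature
            u0, u1 = us.min(), us.max()
            _, js, ks = np.nonzero(mask)
            box = (slice(None), slice(js.min(), js.max() + 1),
                   slice(ks.min(), ks.max() + 1))       # coordinately equivalent region
            prev = A[max(u0 - p, 0):u0][box]            # p end views before the feature
            nxt = A[u1 + 1:u1 + 1 + p][box]             # p end views after the feature
            views = np.concatenate([prev, nxt], axis=0)
            drop_6db = views.max() / 2.0                # -6 dB ~ half amplitude
            median_max = np.median(views.max(axis=(1, 2)))
            return max(drop_6db, median_max)

        def refine(A, mask, p=3):
            struct = np.ones((3, 3, 3), dtype=bool)     # 3-axis kernel
            while True:
                grown = ndimage.binary_dilation(mask, structure=struct)
                K = threshold_from_end_views(A, grown, p)
                new_mask = grown & (A > K)              # apply the threshold
                if new_mask.sum() == mask.sum():        # size unchanged: stop
                    return new_mask
                mask = new_mask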
  • the method 600 begins at block 502 with the ultrasound acquisition data A ∈ ℝ^(u×v×w), which represents the acoustic imaging data captured during the ultrasound scan.
  • a model prediction binary mask M ∈ {0,1}^(u×v×w) is used, which indicates the initial detection of features or defects within the acquired acoustic imaging data.
  • the binary mask assists in distinguishing between areas of interest and the background, represented at block 508 by the ultrasound acquisition data without features, the matrix A_bg ∈ (ℝ ∪ {NA})^(u×v×w), which was determined in FIG. 5.
  • This data set excludes the detected features, allowing for a clearer analysis of the background noise and non-feature elements.
  • the method 600 obtains the feature binary mask F ∈ {0,1}^(u_f×v_f×w_f), corresponding to a subset of the mask M.
  • the feature binary mask is a refined version of the initial binary mask M ∈ {0,1}^(u×v×w), focusing on the specific features detected.
  • the method 600 determines the maximum projection along the u-axis of the feature binary mask F ∈ {0,1}^(u_f×v_f×w_f) to obtain the 2D cross section of the feature mask.
  • the method 600 computes C ∈ {NA, 1}^(v_f×w_f) as the max projection of F along the u-axis to obtain a 2D feature outline: C[j, k] = 1 if at least one voxel in F[:, j, k] is 1, and NA otherwise.
  • the method 600 computes N, which is the noise region surrounding the feature, from the acquisition data background A_bg.
  • the method 600 restrains the noise to the feature outline by performing an element-wise multiplication of C with each 2D slice of N along the u-axis: N[i, j, k] = N[i, j, k] · C[j, k], for i ∈ [max(u_f − p, 0), min(u_f + p, u)].
  • the method 600 determines the max projection of N along the w-axis. At block 618, which is the output of the process of block 616, the method 600 computes the noise region N_cscan as follows: N_cscan[i, j] = max(N[i, j, :]) if there exists a k such that N[i, j, k] ≠ NA, and NA otherwise.
  • the method 600 computes a threshold S using the mean and standard deviation of the defined values in N_cscan.
  • the threshold S is the threshold used in block 510 and block 516 of FIG. 5, for example.
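  • the exact formula for S is not reproduced in this text, but a common choice of this form is the mean plus a multiple of the standard deviation of the defined values; a sketch with NA represented as NaN and the multiplier k an explicit assumption:

        import numpy as np

        def threshold_from_noise(N, k=3.0):
            """S = mean + k * std over the defined (non-NaN) values of the
            max projection of the noise region N along the w-axis; the
            multiplier k is an assumed parameter, not from this disclosure."""
            N_cscan = np.nanmax(N, axis=2)      # max projection along w
            vals = N_cscan[~np.isnan(N_cscan)]  # defined values only
            return vals.mean() + k * vals.std()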
  • the object under inspection 702 includes a feature, namely a flaw 704.
  • the NDT image 700 depicts end views 706 before the flaw 704 and end views 708 after the flaw 704.
  • the “empty” areas between the flaw 704 and the end views 706 and the end views 708 represent other flaws that have been removed by the ML model so that the sizing algorithm may focus on one flaw at a time, such as the flaw 704.
  • the sizing algorithm 500 of FIG. 5 determines (at block 506) a representation of the noise in the acoustic imaging data adjacent to the feature, e.g., the flaw 704 of NDT image 700.
  • the sizing algorithm 500 uses the pixels (or voxels) in the end views 706 and the end views 708 adjacent to the flaw 704 to determine the representation of the noise in the acoustic imaging data.
  • FIG. 8 depicts acoustic imaging data displayed as an NDT image 800.
  • the NDT image 800 depicts the flaw 704 of FIG. 7 and the end views 706 of FIG. 7.
  • the NDT image 800 itself is shown as an end view.
  • the sizing algorithm 500 defines the outline of the end views 706 to be similar to the outline of the flaw 704, which was defined by an ML model, such as the previously trained machine learning model 906 in FIG. 9.
  • the sizing algorithm 500 then adjusts the size of the outline of the end views 706 to more precisely match the size of the outline of the flaw 704.
  • a system such as the inspection system 100 of FIG. 1, acquires acoustic imaging data 902 of an object under inspection, such as the target 158 of FIG. 1, with a feature 904, e.g., flaw, back wall echo, front wall echo, etc.
  • the acoustic imaging data 902 is displayed as an NDT image in FIG. 9.
  • the acoustic imaging data 902 includes a plurality of pixels 908 (for 2D images) or voxels (for 3D images). Each pixel in the acoustic imaging data 902 is a representation of an amplitude of the signal that was received by the transducer array 152 of FIG. 1.
  • the system applies the acoustic imaging data 902 to a previously trained machine learning model 906 to generate a rough outline 910, e.g., bounding box, of the feature 904 in the acoustic imaging data 902.
  • a rough outline 910 e.g., bounding box
  • the rough outline 910 likely includes quite a few pixels 908 that are not part of the feature 904, where the darker the pixel 908 the more likely the pixel 908 is part of the feature 904.
  • the system applies a sizing algorithm 912 to the outline 910 outputted by the previously trained machine learning model 906 to adjust the size and shape of the outline 910 and generate an adjusted outline 914.
  • a sizing algorithm 912 to the outline 910 outputted by the previously trained machine learning model 906 to adjust the size and shape of the outline 910 and generate an adjusted outline 914.
  • the rough outline 910 generated by the previously trained machine learning model 906, shown as a bounding box, has been adjusted so that the new outline 914 more closely resembles the feature 904.
  • the system then generates an NDT image 916 with the adjusted outline 914 of the feature 904.
  • the system then displays on a user interface, such as the display 110 of FIG. 1, the NDT image 916 with the adjusted outline 914 of the feature 904.
  • the system determines and displays one or more dimensions of the feature, such as its width and/or length.
  • the system may determine the dimensions of an outlined feature in the NDT image using metadata processing, for example.
  • the acoustic imaging data includes metadata containing registration information that maps each pixel's (or voxel's) location to physical dimensions, such as millimeters.
  • the system may use this registration metadata to convert the pixel (or voxel) measurements into physical dimensions for display.
  • each pixel column corresponds to a specific position in millimeters based on the registration information stored in the NDT image file format.
  • the system calculates and displays one or more physical dimensions of the feature on the user interface.
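  • a sketch of that conversion, assuming registration metadata that provides a per-axis pixel pitch in millimeters (field names and layout are hypothetical):

        import numpy as np

        def feature_size_mm(mask, pitch_mm):
            """Width and length of an outlined feature.

            mask     : 2D boolean mask of the adjusted outline
            pitch_mm : (row_pitch_mm, col_pitch_mm) from registration metadata
            """
            rows, cols = np.nonzero(mask)
            length_mm = (rows.max() - rows.min() + 1) * pitch_mm[0]
            width_mm = (cols.max() - cols.min() + 1) * pitch_mm[1]
            return length_mm, width_mm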
  • statistical measurements may be calculated for the noise regions adjacent to the detected features. These statistics may be displayed on a user interface, which displays one or more of the average amplitude values, median values, and other statistical metrics that help characterize both the features and surrounding noise regions. These statistical measurements are particularly valuable to users who need to analyze the amplitude characteristics within detected features and compare them to the surrounding noise levels.
  • FIG. 10 is a flow diagram of an example of a method 1000 of training processing circuitry using machine learning for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection.
  • the method 1000 is an example of the method 300 shown in FIG. 3.
  • the method 1000 includes acquiring acoustic imaging data of a first object having a first feature.
  • a system such as the inspection system 100 acquires the acoustic imaging data 302 of a first object having a feature 304, as seen in FIG. 3.
  • the method 1000 includes outlining the first feature in the acoustic imaging data.
  • the processor circuit 102 of the inspection system 100 of FIG. 1 generates an outline 306 of the feature 304 in the acoustic imaging data 302 of FIG. 3.
  • the method 1000 includes applying a sizing algorithm to the outlined first feature to adjust a size of the outline and generate a ground truth dataset.
  • the processor circuit 102 of the inspection system 100 of FIG. 1 applies the sizing algorithm 308 of FIG. 3, which is described in detail with respect to FIG. 5, to the outline 306 to adjust a size of the outline and generate a ground truth dataset.
  • the method 1000 includes training a machine learning model using the ground truth dataset.
  • trained ML model 314 of FIG. 3 is trained by the processor circuit 102 of the inspection system 100 of FIG. 1 using the ground truth dataset 312.
  • the method 1000 includes applying the trained machine learning model to acoustic imaging data of a second object having a second feature, generating an NDT image with an outline of the second feature generated by the trained machine learning model, and displaying, on a user interface, the NDT image with the outline of the second feature.
  • the method 1000 includes displaying, on the user interface, one or more dimensions of the second feature. In other examples, the method 1000 includes displaying, on the user interface, statistics about the second feature and/or noise.
  • applying the sizing algorithm to the outlined first feature includes determining a representation of noise in the acoustic imaging data adjacent to the first feature, determining a threshold value of the noise, and adjusting, based on the threshold value, the size of the outline of the first feature.
  • the acoustic imaging data includes a sequence of end views of the first object, and wherein determining the representation of the noise in the acoustic imaging data adjacent to the first feature includes determining the representation of noise in the end views adjacent to the first feature.
  • FIG. 11 is a flow diagram of an example of a method 1100 of using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model.
  • the method 1100 is an example of the method 900 shown in FIG. 9.
  • the method 1100 includes acquiring acoustic imaging data of the object having a feature.
  • a system such as the inspection system 100 acquires the acoustic imaging data 902 of an object having a feature 904, as seen in FIG. 9.
  • the method 1100 includes applying the acoustic imaging data to the previously trained machine learning model to generate an outline of the feature in the imaging data.
  • the system applies the acoustic imaging data 902 to the previously trained machine learning model 906 of FIG. 9 to generate an outline 910 of the feature 904 in the acoustic imaging data 902.
  • the method 1100 includes applying the trained machine learning model to acoustic imaging data of a second object having a second feature, generating an NDT image with an outline of the second feature generated by the trained machine learning model, and displaying, on a user interface, the NDT image with the outline of the second feature.
  • applying the sizing algorithm to the outlined first feature includes determining a representation of noise in the acoustic imaging data adjacent to the first feature, determining a threshold value of the noise, and adjusting, based on the threshold value, the size of the outline of the first feature.
  • the method 1100 includes an erosion step in which adjusting, based on the threshold value, the size of the outline of the first feature includes comparing a representation of an amplitude of a signal of a pixel or a voxel to the threshold value, including the pixel or the voxel in the outline of the first feature when the amplitude is equal to or greater than the threshold value, and excluding the pixel or the voxel from the outline of the first feature when the amplitude is less than the threshold value.
  • the acoustic imaging data includes a sequence of end views of the first object, and wherein determining the representation of the noise in the acoustic imaging data adjacent to the first feature includes determining the representation of noise in the end views adjacent to the first feature.
  • FIG. 12 shows an example of a machine learning module 1200 that may implement various techniques of this disclosure.
  • the machine learning module 1200 is an example of the trained ML model 314 of FIG. 3.
  • the machine learning module 1200 may be implemented in whole or in part by one or more computing devices.
  • a training module 1202 may be implemented by a different device than a prediction module 1204.
  • the model 1214 may be created on a first machine, e.g., a desktop computer, and then sent to a second machine, e.g., a handheld device.
  • the training module 1202 inputs training data 1206 into a selector module 1208 that selects a training vector from the training data.
  • the selector module 1208 may include data normalization/standardization and cleaning, such as to remove any useless information.
  • the model 1214 itself may perform aspects of the selector module, as in gradient boosted trees.
  • the training module may train the machine learning module 1200 on a plurality of flaw or no flaw conditions.
  • the training data 1206 may include, for example, ground truth data.
  • Ground truth data may include synthetic data from simulated flaws and geometry. Aside from synthetic datasets, ground truth labels may also be obtained using other NDT methods, such as radiography, CT scans, and laser surface profiling.
  • the training data 1206 may include one or more of simulations of a plurality of types of material flaws in the material, simulations of a plurality of positions of material flaws in the material, or simulations of a plurality of ghost echoes in the material to simulate no flaw conditions.
  • the training data 1206 may be labeled. In other examples, the training data may not be labeled, and the model may be trained using feedback data — such as through a reinforcement learning method.
  • the selector module 1208 selects a training vector 1210 from the training data 1206.
  • the selected data may fill the training vector 1210 and includes a set of the training data that is determined to be predictive of a classification.
  • Information chosen for inclusion in the training vector 1210 may be all the training data 1206 or in some examples, may be a subset of all the training data 1206.
  • the training vector 1210 may be utilized (along with any applicable labels) by the machine learning algorithm 1212 to produce a model 1214 (a trained machine learning model). In some examples, other data structures other than vectors may be used.
  • the machine learning algorithm 1212 may learn one or more layers of a model.
  • Example layers may include convolutional layers, dropout layers, pooling/up sampling layers, SoftMax layers, and the like.
  • Example models may be a neural network, where each layer comprises a plurality of neurons that take a plurality of inputs, weight the inputs, and input the weighted inputs into an activation function to produce an output, which may then be sent to another layer.
  • Example activation functions may include a Rectified Linear Unit (ReLU), and the like. Layers of the model may be fully or partially connected.
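  • as one purely illustrative instance of such a layer stack (the architecture below is an assumption for demonstration, not the model of this disclosure), a small PyTorch network combining the layer types named above:

        import torch
        import torch.nn as nn

        # Conv -> ReLU -> pool -> dropout -> upsample -> per-pixel class scores.
        model = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(8, 2, kernel_size=1),        # 2 classes: feature / background
        )

        x = torch.randn(1, 1, 64, 64)              # one single-channel NDT image
        probs = torch.softmax(model(x), dim=1)     # SoftMax over the class channel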
  • data 1216 may be input to the selector module 1218.
  • the data 1216 may include an acoustic imaging data set.
  • the selector module 1218 may operate the same, or differently than the selector module 1208 of the training module 1202. In some examples, the selector modules 1208 and 1218 are the same modules or different instances of the same module.
  • the selector module 1218 produces a vector 1220, which is input into the model 1214 to generate an output NDT image of the specimen, resulting in an image 1222.
  • the weightings and/or network structure learned by the training module 1202 may be executed on the vector 1220 by applying vector 1220 to a first layer of the model 1214 to produce inputs to a second layer of the model 1214, and so on until the image is output.
  • other data structures may be used other than a vector (e.g., a matrix).
  • the training module may train the machine learning module 1200 on a plurality of flaw or no flaw conditions, such as described above.
  • the training module 1202 may operate in an offline manner to train the model 1214.
  • the prediction module 1204, however, may be designed to operate in an online manner. It should be noted that the model 1214 may be periodically updated via additional training and/or user feedback. For example, additional training data 1206 may be provided to refine the model by the training module 1202.
  • the machine learning algorithm 1212 may be selected from among many different potential supervised or unsupervised machine learning algorithms.
  • Example supervised learning algorithms include artificial neural networks, convolutional neural networks, Bayesian networks, instance-based learning, support vector machines, decision trees (e.g., Iterative Dichotomiser 3, C4.5, Classification and Regression Tree (CART), Chi-squared Automatic Interaction Detector (CHAID), and the like), random forests, linear classifiers, quadratic classifiers, k-nearest neighbor, linear regression, logistic regression, a region-based CNN, a fully convolutional CNN (for semantic segmentation), a mask R-CNN algorithm for instance segmentation, and hidden Markov models.
  • Example unsupervised learning algorithms include expectation-maximization algorithms, vector quantization, and the information bottleneck method.
  • the machine learning module 1200 of FIG. 12 may assist in training processing circuitry for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection and, when trained, using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection, in accordance with this disclosure.
  • FIG. 13 illustrates a block diagram of an example of a machine 1300 upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform.
  • the machine 1300 may operate as a standalone device or may be connected (e.g., networked) to other machines.
  • the machine 1300 may operate in the capacity of a server machine, a client machine, or both in server-client network environments.
  • the machine 1300 may act as a peer machine in peer-to-peer (P2P) (or other distributed) network environment.
  • the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.
  • Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms (all referred to hereinafter as “modules”).
  • module is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein.
  • each of the modules need not be instantiated at any one moment in time.
  • where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor is configured as respective different modules at different times.
  • Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.
  • Machine 1300 may include a hardware processor 1302 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 1304, and a static memory 1306, some or all of which may communicate with each other via an interlink 1308 (e.g., bus).
  • the machine 1300 may further include a display unit 1310, an alphanumeric input device 1312 (e.g., a keyboard), and a user interface (UI) navigation device 1314 (e.g., a mouse).
  • the display unit 1310, input device 1312, and UI navigation device 1314 may be a touch screen display.
  • the machine 1300 may additionally include a storage device (e.g., drive unit) 1316, a signal generation device 1318 (e.g., a speaker), a network interface device 1320, and one or more sensors 1321, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor.
  • the machine 1300 may include an output controller 1328, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).
  • the storage device 1316 may include a machine readable medium 1322 on which is stored one or more sets of data structures or instructions 1324 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein.
  • the instructions 1324 may also reside, completely or at least partially, within the main memory 1304, within static memory 1306, or within the hardware processor 1302 during execution thereof by the machine 1300.
  • one or any combination of the hardware processor 1302, the main memory 1304, the static memory 1306, or the storage device 1316 may constitute machine readable media.
  • while the machine readable medium 1322 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 1324.
  • machine readable medium may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 1300 and that cause the machine 1300 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions.
  • Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media.
  • machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; Random Access Memory (RAM); Solid State Drives (SSD); and CD-ROM and DVD-ROM disks.
  • The instructions 1324 may further be transmitted or received over a communications network 1326 using a transmission medium via the network interface device 1320.
  • The machine 1300 may communicate with one or more other machines utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.).
  • Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, a Long Term Evolution (LTE) family of standards, a Universal Mobile Telecommunications System (UMTS) family of standards, peer-to-peer (P2P) networks, among others.
  • The network interface device 1320 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 1326.
  • The network interface device 1320 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques.
  • The network interface device 1320 may wirelessly communicate using Multiple User MIMO techniques.
  • Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms.
  • Modules are tangible entities (e.g., hardware) capable of performing specified operations and are configured or arranged in a certain manner.
  • In an example, circuits are arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module.
  • In an example, the whole or part of one or more computer systems (e.g., a standalone, client, or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations.
  • The software may reside on a machine-readable medium.
  • The software, when executed by the underlying hardware of the module, causes the hardware to perform the specified operations.
  • Various embodiments are implemented fully or partially in software and/or firmware.
  • This software and/or firmware may take the form of instructions contained in or on a non-transitory computer-readable storage medium. Those instructions may then be read and executed by one or more processors to enable performance of the operations described herein.
  • The instructions may be in any suitable form, such as but not limited to source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like.
  • Such a computer-readable medium may include any tangible non-transitory medium for storing information in a form readable by one or more computers, such as but not limited to read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory; etc.
  • Method examples described herein may be machine or computer-implemented at least in part. Some examples may include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples.
  • An implementation of such methods may include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code may include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, in an example, the code may be tangibly stored on one or more volatile, non-transitory, or nonvolatile tangible computer-readable media, such as during execution or at other times.
  • tangible computer-readable media may include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact discs and digital video discs), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like.

Abstract

Techniques for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection. In some examples, the techniques introduce a two-step approach that combines machine learning with a precise sizing algorithm. For example, a machine learning model is initially trained to perform a rough detection of features within the imaging data. This model is capable of identifying potential areas of interest, thereby providing a preliminary outline of the features. Subsequently, a deterministic sizing algorithm is applied to refine these outlines, ensuring accurate measurement of the feature dimensions. This approach not only enhances the precision of feature sizing but also allows for greater flexibility in adapting to various imaging scenarios. By leveraging the strengths of both machine learning and deterministic algorithms, the techniques offer a comprehensive solution for improving the accuracy and reliability of feature detection and sizing in NDT images.

Description

FEATURE SIZE ESTIMATION FOR ACOUSTIC INSPECTION
CLAIM OF PRIORITY
[0001] This application claims the benefit of priority of U.S. Provisional Patent Application Serial Number 63/566,715, titled “FEATURE SIZE ESTIMATION FOR ACOUSTIC INSPECTION” to Angelique Bouchard et al., filed on March 18, 2024, the entire contents of which are incorporated herein by reference.
FIELD OF THE DISCLOSURE
[0002] This document pertains generally, but not by way of limitation, to imaging techniques for use in Non-Destructive Testing (NDT), and more particularly, to analysis of phased array (PA) acoustic inspection images, where such analysis may include identification or feature size estimation (or both) performed using a machine learning technique.
BACKGROUND
[0003] Non-destructive testing (NDT) may refer to use of one or more different techniques to inspect regions on or within an object, such as to ascertain whether features or defects exist, or to otherwise characterize the object being inspected. Examples of non-destructive test approaches may include use of an eddy-current testing approach where electromagnetic energy is applied to the object and resulting induced currents on or within the object are detected, with the values of a detected current (or a related impedance) providing an indication of the structure of the object under test, such as to indicate a presence of a crack, void, porosity, or other inhomogeneity.
[0004] Another approach for NDT may include use of an acoustic inspection technique, such as where one or more electroacoustic transducers are used to insonify a region on or within the object under test, and acoustic energy that is scattered or reflected may be detected and processed. Such scattered or reflected energy may be referred to as an acoustic echo signal. Generally, such an acoustic inspection scheme involves use of acoustic frequencies in an ultrasonic range of frequencies, such as including pulses having energy in a specified range that may include values from, for example, a few hundred kilohertz to tens of megahertz, as an illustrative example.
SUMMARY OF THE DISCLOSURE
[0005] This disclosure describes techniques for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection. In some examples, this disclosure describes techniques that introduce a two-step approach that combines machine learning with a precise sizing algorithm. For example, a machine learning model is initially trained to perform a rough detection of features within the imaging data. This model is capable of identifying potential areas of interest, thereby providing a preliminary outline of the features. Subsequently, a deterministic sizing algorithm is applied to refine these outlines, ensuring accurate measurement of the feature dimensions. This approach not only enhances the precision of feature sizing but also allows for greater flexibility in adapting to various imaging scenarios. By leveraging the strengths of both machine learning and deterministic algorithms, the described method offers a comprehensive solution for improving the accuracy and reliability of feature detection and sizing in NDT images.
[0006] In some aspects, this disclosure is directed to a method of training processing circuitry using machine learning for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection, the method comprising: acquiring acoustic imaging data of a first object having a first feature; outlining the first feature in the acoustic imaging data; applying a sizing algorithm to the outlined first feature to adjust a size of the outline and generate a ground truth dataset; and training a machine learning model using the ground truth dataset.
[0007] In some aspects, this disclosure is directed to a method of using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model, the method comprising: acquiring acoustic imaging data of the object having a feature; applying the acoustic imaging data to the previously trained machine learning model to generate an outline of the feature in the imaging data; applying a sizing algorithm to the outlined feature to adjust the outline of the feature outputted by the previously trained machine learning model; generating an NDT image with the adjusted outline of the feature; and displaying, on a user interface, the NDT image with the adjusted outline of the feature.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.
[0009] FIG. 1 illustrates generally an example comprising an acoustic inspection system, such as may be used to perform one or more techniques described herein.
[0010] FIG. 2A is an image of an example of a non-destructive testing (NDT) image of an object under inspection that includes a flaw.
[0011] FIG. 2B is the NDT image of FIG. 2A including an outline generated by a generic machine learning (ML) model to depict a size of the flaw.
[0012] FIG. 2C is the NDT image of FIG. 2A including an outline generated by an untrained human to depict the size of the flaw.
[0013] FIG. 3 is a flow diagram of an example of a method 300 of training processing circuitry using machine learning for detecting and sizing a feature in an NDT image of an object under inspection.
[0014] FIG. 4A depicts acoustic imaging data of another object under inspection.
[0015] FIG. 4B depicts the features of the acoustic imaging data of FIG. 4A with outlines generated by a previously trained machine learning model.
[0016] FIG. 5 is a flow diagram of an example of a sizing algorithm that may be used to implement various techniques of this disclosure.
[0017] FIG. 6 is a flow diagram of an example of a method for the threshold determination of the sizing algorithm in FIG. 5.
[0018] FIG. 7 depicts acoustic imaging data displayed as an NDT image of an object under inspection.
[0019] FIG. 8 depicts acoustic imaging data displayed as an NDT image.
[0020] FIG. 9 is a flow diagram of an example of a method of using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model.
[0021] FIG. 10 is a flow diagram of an example of a method of training processing circuitry using machine learning for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection.
[0022] FIG. 11 is a flow diagram of an example of a method of using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model.
[0023] FIG. 12 shows an example of a machine learning module that may implement various techniques of this disclosure.
[0024] FIG. 13 illustrates a block diagram of an example of a machine upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform.
DETAILED DESCRIPTION
[0025] Acoustic testing, such as ultrasound-based inspection, may include focusing or beam-forming techniques to aid in construction of data plots or images representing a region of interest within the test object. Use of an array of ultrasound transducer elements may include use of a phased-array beamforming approach and may be referred to as Phased Array Ultrasound Testing (PAUT). For example, a delay-and-sum beamforming technique may be used such as including coherently summing time-domain representations of received acoustic signals from respective transducer elements or apertures. In another approach, a “Full Matrix Capture” (FMC) technique may be used where one or more elements in an array (or apertures defined by such elements) are used to transmit an acoustic pulse and other elements are used to receive scattered or reflected acoustic energy, and a matrix is constructed of time-series (e.g., A-Scan) representations corresponding to a sequence of transmit-receive cycles in which the transmissions are occurring from different elements (or corresponding apertures) in the array. Beamforming and imaging may be performed using a technique such as a “Total Focusing Method” (TFM), in which a coherent summation may be performed using A-scan data acquired using an FMC technique. In a manner similar to TFM imaging, a phase-based approach may be used for one or more of acquisition, storage, or subsequent analysis. Such a phase-based approach may include coherent summation of normalized or quantized representations of A-Scan data corresponding to phase information. Such an approach may be referred to as a “phase coherence imaging” (PCI) beamforming technique.
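As a rough orientation on the TFM coherent summation just described, the following Python sketch computes a single image pixel from an FMC data set; the array layout (transmit × receive × time samples), names, and parameters are illustrative assumptions rather than the implementation of this disclosure:

```python
import numpy as np

def tfm_pixel(fmc, tx_pos, rx_pos, pixel, c, fs):
    """Total Focusing Method value at one image pixel: coherently sum
    the FMC A-scans at the transmit-to-pixel-to-receive time of flight."""
    value = 0.0
    for t, xt in enumerate(tx_pos):
        for r, xr in enumerate(rx_pos):
            # time of flight: transmitter -> pixel -> receiver, at speed of sound c
            tof = (np.linalg.norm(pixel - xt) + np.linalg.norm(pixel - xr)) / c
            idx = int(round(tof * fs))       # nearest A-scan sample at rate fs
            if idx < fmc.shape[2]:
                value += fmc[t, r, idx]      # coherent (delay-and-sum) term
    return value
```

Repeating this summation over a grid of pixels yields the TFM image; a phase-based (PCI) variant would instead sum normalized or quantized phase representations of the same A-scan samples.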
[0026] In the field of non-destructive testing (NDT), accurately detecting and sizing features such as flaws or other features in imaging data plays an important role in assessing the integrity of materials and structures. Generally, analysis of phased-array acoustic inspection (PA) imaging data involves human inspectors identifying features and manually applying known criteria to measure the size of features (e.g., using an approach such as a −6 dB method). Relying on manual inspection and interpretation of images may be time-consuming and prone to human error. Some solutions incorporate automated algorithms to assist in the detection and sizing of features. The present inventors have recognized that these methods often fall short in terms of precision and reliability. For example, some solutions struggle with accurately delineating the boundaries of features, particularly in complex or noisy environments.
[0027] As a result, the present inventors have recognized a need for a more robust and flexible approach that may improve the accuracy and efficiency of feature detection and sizing in NDT applications. The present inventors have recognized, among other things, that manual detection may be augmented or replaced such as by using automated identification and sizing of features with a combination of artificial intelligence (e.g., machine learning) and non-AI-based algorithmic approaches.
[0028] This disclosure describes techniques for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection. In some examples, this disclosure describes techniques that introduce a two-step approach that combines machine learning with a precise sizing algorithm. For example, a machine learning model is initially trained to perform a rough detection of features within the imaging data. This model is capable of identifying potential areas of interest, thereby providing a preliminary outline of the features. Subsequently, a deterministic sizing algorithm is applied to refine these outlines, ensuring accurate measurement of the feature dimensions. This approach not only enhances the precision of feature sizing but also allows for greater flexibility in adapting to various imaging scenarios. By leveraging the strengths of both machine learning and deterministic algorithms, the described method offers a comprehensive solution for improving the accuracy and reliability of feature detection and sizing in NDT images.
[0029] FIG. 1 illustrates generally an example comprising an acoustic inspection system 100, such as may be used to perform one or more techniques described herein. The acoustic inspection system 100 of FIG. 1 is an example of an acoustic imaging modality, such as an acoustic phased array system, that may implement various techniques of this disclosure.
[0030] The inspection system 100 may include a test instrument 140, such as a hand-held or portable assembly. The test instrument 140 may be electrically coupled to a probe assembly, such as using a multi-conductor interconnect 130. The probe assembly 150 may include one or more electroacoustic transducers, such as a transducer array 152 including respective transducers 154A through 154N. The transducer array may follow a linear or curved contour, or may include an array of elements extending in two axes, such as providing a matrix of transducer elements. The elements need not be square in footprint or arranged along a straight-line axis. Element size and pitch may be varied according to the inspection application.
[0031] A modular probe assembly 150 configuration may be used, such as to allow a test instrument 140 to be used with various different probe assemblies 150. Generally, the transducer array 152 includes piezoelectric transducers, such as may be acoustically coupled to a target 158 (e.g., an object under test) through a coupling medium 156. The coupling medium may include a fluid or gel or a solid membrane (e.g., an elastomer or other polymer material), or a combination of fluid, gel, or solid structures. For example, an acoustic transducer assembly may include a transducer array coupled to a wedge structure comprising a rigid thermoset polymer having known acoustic propagation characteristics (for example, Rexolite® available from C-Lec Plastics Inc.), and water may be injected between the wedge and the structure under test as a coupling medium 156 during testing.
[0032] The test instrument 140 may include digital and analog circuitry, such as a front-end circuit 122 including one or more transmit signal chains, receive signal chains, or switching circuitry (e.g., transmit/receive switching circuitry). The transmit signal chain may include amplifier and filter circuitry, such as to provide transmit pulses for delivery through an interconnect 130 to a probe assembly 150 for insonification of the target 158, such as to image or otherwise detect a flaw 160 on or within the target 158 structure by receiving scattered or reflected acoustic energy elicited in response to the insonification.
[0033] Although FIG. 1 shows a single probe assembly 150 and a single transducer array 152, other configurations may be used, such as multiple probe assemblies connected to a single test instrument 140, or multiple transducer arrays 152 used with a single or multiple probe assemblies 150 for tandem inspection. Similarly, a test protocol may be performed using coordination between multiple test instruments 140, such as in response to an overall test scheme established from a master test instrument 140, or established by another remote system such as a computing facility 108 or general purpose computing device such as a laptop 132, tablet, smart-phone, desktop computer, or the like. The test scheme may be established according to a published standard or regulatory requirement and may be performed upon initial fabrication or on a recurring basis for ongoing surveillance, as illustrative examples.
[0034] The receive signal chain of the front-end circuit 122 may include one or more filters or amplifier circuits, along with an analog-to-digital conversion facility, such as to digitize echo signals received using the probe assembly 150. Digitization may be performed coherently, such as to provide multiple channels of digitized data aligned or referenced to each other in time or phase. The front-end circuit 122 may be coupled to and controlled by one or more processor circuits, such as a processor circuit 102 included as a portion of the test instrument 140. The processor circuit 102 may be coupled to a memory circuit, such as to execute instructions that cause the test instrument 140 to perform one or more of acoustic transmission, acoustic acquisition, processing, or storage of data relating to an acoustic inspection, or to otherwise perform techniques as shown and described herein. The test instrument 140 may be communicatively coupled to other portions of the system 100, such as using a wired or wireless communication interface 120.
[0035] For example, performance of one or more techniques as shown and described in this disclosure may be accomplished on-board the test instrument 140 or using other processing or storage facilities such as using a computing facility 108 or a general-purpose computing device such as a laptop 132, tablet, smart-phone, desktop computer, or the like. For example, processing tasks that would be undesirably slow if performed on-board the test instrument 140 or beyond the capabilities of the test instrument 140 may be performed remotely (e.g., on a separate system), such as in response to a request from the test instrument 140. Similarly, storage of imaging data or intermediate data such as A-line matrices of time-series data may be accomplished using remote facilities communicatively coupled to the test instrument 140. The test instrument may include a display 110, such as for presentation of configuration information or results, and an input device 112 such as including one or more of a keyboard, trackball, function keys or soft keys, mouse-interface, touch-screen, stylus, or the like, for receiving operator commands, configuration information, or responses to queries.
[0036] The acoustic inspection system 100 may acquire acoustic imaging data, such as FMC data or virtual source aperture (VSA) data, of a material using an acoustic imaging modality, such as an acoustic phased array system. In addition, phased array linear scanning is another technique that may be used to acquire acoustic imaging data, where phased array linear scanning includes one A-scan per aperture along the array, and the aperture is shifted sequentially to produce a discrete set of contiguous A-scans. The processor circuit 102 may then generate an acoustic imaging data set, such as a scattering matrix (S-matrix), plane wave matrix, or other matrix or data set, corresponding to an acoustic propagation mode, such as pulse echo direct (TT), self-tandem (TT-T), and/or pulse echo with skip (TT-TT).
[0037] As described in more detail below, the processor circuit 102 or another processor circuit may be trained using machine learning so as to detect and size a feature in a non-destructive testing (NDT) image of an object under inspection. Additionally or alternatively, the processor circuit 102 or another processor circuit may be used for detecting and sizing a feature in NDT images of an object under inspection using a previously trained machine learning model. The techniques shown and described in this document may be performed using a portion or an entirety of an inspection system 100 as shown in FIG. 1 or otherwise using a machine 1300 as discussed below in relation to FIG. 13.
[0038] FIG. 2A is an image of an example of a non-destructive testing (NDT) image 200 of an object under inspection that includes a feature, namely a flaw 202. FIG. 2B is the NDT image 200 of FIG. 2A including an outline 204 generated by a generic machine learning (ML) model to depict a size of the flaw 202. FIG. 2C is the NDT image 200 of FIG. 2A including an outline 206 generated by an untrained human to depict the size of the flaw 202. Training an ML model to detect flaws is done with supervised learning. Typically, a human annotator draws bounding boxes around each flaw, or roughly segments each flaw by hand or with a pre-trained ML model.
[0039] One problem is that flaw sizes drawn by the human annotator or pre-trained ML model are not accurate and do not reflect the size of the flaws as measured by an approved algorithm. Neither the outline 204 in FIG. 2B nor the outline 206 in FIG. 2C accurately reflects the size of the flaw 202. This disclosure describes techniques to replace the manual detection and sizing of flaws with a combination of artificial intelligence and algorithms.
[0040] FIG. 3 is a flow diagram of an example of a method 300 of training processing circuitry using machine learning for detecting and sizing a feature in an NDT image of an object under inspection. In the method 300, accurate sizing is performed before training the AI. A system, such as the inspection system 100 of FIG. 1, acquires acoustic imaging data 302 of an object under inspection, such as the target 158 of FIG. 1, with a feature 304, e.g., flaw, back wall echo, front wall echo, etc. The acoustic imaging data 302 is shown as an NDT image and includes a plurality of pixels 316 (for 2D images) or voxels (for 3D images). A pixel (picture element) is the smallest unit of a digital 2D image or display, representing a single point of color or intensity in a grid. A voxel (volume element) is the 3D equivalent of a pixel, representing a discrete unit of volume in a three-dimensional space. Each pixel in the acoustic imaging data 302 is a representation of an amplitude of the signal that was received by the transducer array 152 of FIG. 1.
[0041] Once the acoustic imaging data is acquired, the method 300 includes outlining the feature 304 in the imaging data with an outline 306. In some examples, a human operator/inspector generates the rough outline 306, e.g., a bounding box, around the feature 304 in the acoustic imaging data 302. In other examples, an algorithm or automated detection system generates the rough outline 306. This initial outline 306 serves as a starting point for more precise outlines.
[0042] The method then applies a sizing algorithm 308 to the outline 306 of the flaw to adjust its size and create a ground truth dataset. The sizing algorithm 308 analyzes individual pixels (or voxels). The sizing algorithm 308 compares the representation of an amplitude of the signal of each pixel (or voxel) to a threshold value. A pixel (or voxel) is included in the outline 306 of the feature 304 when the amplitude of the signal associated with the pixel 316 equals or exceeds the threshold value, and is excluded when the amplitude falls below the threshold value. As seen in FIG. 3, the sizing algorithm 308 adjusts the rough outline 306 to generate a precise outline 310, or accurate ground truth data, according to NDT sizing criteria.
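For illustration only, this pixel inclusion/exclusion rule may be sketched in Python with NumPy; the function name and arguments are hypothetical, not part of this disclosure:

```python
import numpy as np

def refine_outline(amplitude, rough_mask, threshold):
    """Keep only the pixels (or voxels) of a rough outline whose
    amplitude meets or exceeds the sizing threshold (minimal sketch)."""
    # A pixel stays in the outline when its amplitude is equal to or
    # greater than the threshold value, and is excluded otherwise.
    return rough_mask.astype(bool) & (amplitude >= threshold)
```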
[0043] The precise outline 310, along with other ground truth data such as other precise outlines, forms the ground truth dataset 312 that is used to train a ML model. The trained ML model 314 is trained using the ground truth dataset 312. The trained ML model 314 is configured to detect features in NDT images and, in some examples, size those features. One advantage of this technique is that the sizing algorithm is incorporated during the training phase, resulting in faster inference times because the trained ML model 314 directly outputs sized features without needing to apply the sizing algorithm during deployment.
[0044] Once trained, the trained ML model 314 may be applied to acoustic imaging data of another object under inspection having another feature, e.g., flaw, back wall echo, front wall echo, etc., as described below with respect to FIG. 4A and FIG. 4B.
[0045] FIG. 4A depicts acoustic imaging data 400 of another object under inspection. The acoustic imaging data 400 is displayed as an NDT image in FIG. 4A. The object may be another target 158 of FIG. 1. The acoustic imaging data 400 includes two features, back wall echo 402 and back wall echo 404.
[0046] FIG. 4B depicts the features of the acoustic imaging data 400 of FIG. 4A with outlines generated by a previously trained machine learning model. The acoustic imaging data 400 is displayed as an NDT image in FIG. 4B. The acoustic imaging data 400 includes the back wall echo 402 and the back wall echo 404 of FIG. 4A. A previously trained ML model, such as the trained ML model 314 of FIG. 3, is applied to the acoustic imaging data 400. The previously trained ML model generates outline(s) of the feature(s) in the acoustic imaging data 400. Here, the previously trained ML model generates an outline 406 for the back wall echo 402 and an outline 408 for the back wall echo 404.
[0047] The system, such as the inspection system 100 of FIG. 1, generates an NDT image that includes the outline 406 for the back wall echo 402 and the outline 408 for the back wall echo 404, and displays the acoustic imaging data 400 as an NDT image in FIG. 4B. The system displays, on a user interface, the NDT image shown in FIG. 4B with the outline(s) of the feature(s).
[0048] FIG. 5 is a flow diagram of an example of a sizing algorithm 500 that may be used to implement various techniques of this disclosure. The sizing algorithm 500 is an example of the sizing algorithm 308 of FIG. 3 and may be implemented using the processor circuit 102 or another processor circuit. The main idea behind the sizing algorithm is, for each defect $i$, to find an adequate threshold $K_i$ based on an amplitude of the signal of the coordinately equivalent region in previous and subsequent end views, erode the feature $i$ to keep only the amplitude values above $K_i$, and extend the feature to add every contiguous voxel whose value is above $K_i$. In some examples, the end views are similar, at least locally, with respect to structural noise and flaws.
[0049] The sizing algorithm 500 begins with the input of ultrasound acquisition data $A \in \mathbb{R}^{u \times v \times w}$ at block 502 and a model prediction binary mask $M \in \{0,1\}^{u \times v \times w}$ at block 504, where A is a 3D array of the ultrasound acquisition data and M is a 3D mask array having the same dimensions as the acquisition data and representing the detected features. In some examples, there is a model prediction binary mask per feature type. For example, there is a first mask for flaws of type 1, a second mask for flaws of type 2, etc., another mask for the front wall echoes, another mask for the back wall echoes, and so on. Different sizing algorithms may then be deployed, each one being specific to a particular feature type.
[0050] Additional inputs include the following: $p \in [0, \infty)$, a threshold determination parameter corresponding to the length of the considered window along the u-axis, and $k_{max} \in \mathbb{N}_+$, the maximum number of iterations for the dilation steps. Also, $M_{ijk} = 1$ if the voxel at $(i, j, k)$ belongs to a feature; otherwise $M_{ijk} = 0$. The acquisition data in block 502 represents the raw acoustic imaging data, and the binary mask at block 504 indicates the initial detection of features, e.g., flaw, back wall echo, front wall echo, within this data.
[0051] Using A and M as inputs, at block 506, the sizing algorithm 500 computes the background matrix $A_{bg} = A \odot (\neg M)$, where $(\neg M)$ is defined elementwise as
$(\neg M)_{ijk} = \mathrm{NA}$ if $M_{ijk} = 1$, and $(\neg M)_{ijk} = 1$ if $M_{ijk} = 0$.
This means that $(A_{bg})_{ijk} = A_{ijk}$ where no feature was detected, and $(A_{bg})_{ijk} = \mathrm{NA}$ inside detected features.
[0052] The background matrix $A_{bg} \in (\mathbb{R} \cup \{\mathrm{NA}\})^{u \times v \times w}$ at block 508 is derived by excluding the features identified in the binary mask M from the acquisition data. The background matrix is used for isolating noise and non-feature elements in the acoustic imaging data, which aids in the subsequent threshold determination process. The background matrix is used to determine a representation of noise in the acoustic imaging data adjacent to the feature, e.g., flaw, front wall echo, back wall echo, etc. The representation of noise may be a mean value of the amplitude of the signal of a pixel (or voxel), for example.
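A minimal NumPy sketch of this masking step, with NaN standing in for the NA entries (the helper name is hypothetical), might look like the following:

```python
import numpy as np

def background_matrix(A, M):
    """A_bg = A with detected-feature voxels replaced by NaN (the NA
    entries in the text), leaving only background/noise values."""
    A_bg = A.astype(float).copy()
    A_bg[M.astype(bool)] = np.nan  # exclude voxels flagged as features
    return A_bg
```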
[0053] For each detected feature, the sizing algorithm 500 proceeds with a series of steps to refine the feature mask. At block 510, initially, a threshold value of the noise is determined, which is based on the background matrix and the feature's characteristics. The threshold determination process is described below with respect to FIG. 6. At block 512, this threshold value is used to filter the feature mask, ensuring that only the most relevant data points are retained. The filtering process enhances the accuracy of the feature representation.
[0054] At block 514 and following the filtering, morphological dilation is applied to the feature mask, which adjusts, e.g., increases, the size of the outline of the feature, such as based on the threshold. The dilation process involves expanding the boundaries of the detected feature to account for any potential underestimation of the feature's size. The dilation process is iterative, and after each iteration, the sizing algorithm 500 determines a threshold at block 516 and uses this threshold to filter the feature mask at block 518. In adjusting the size of the outline of the feature, an erosion process compares the amplitude of a signal of a pixel (or a voxel) to the threshold value and either includes the pixel (or the voxel) in the outline of the feature when the amplitude is equal to or greater than the threshold value or excludes the pixel (or the voxel) from the outline of the feature when the amplitude is less than the threshold value.
[0055] At block 520, the sizing algorithm 500 determines whether the maximum number of iterations has been reached or whether the feature mask has stabilized, indicated by $C_k = C_{k-1}$. If the conditions for termination are not met ("NO" branch of block 520), then the sizing algorithm 500 returns to block 514. If the conditions for termination are met ("YES" branch of block 520), the sizing algorithm 500 returns the revised mask M' at block 522, where $M' \in \{0,1\}^{u \times v \times w}$ is the revised version of the binary mask representing the resized features. This iterative approach allows for precise adjustment of the feature size, accommodating variations in the ultrasound acquisition data and improving the reliability of the defect sizing.
[0056] To summarize, the sizing algorithm 500 obtains, from A and M, the background matrix $A_{bg}$. Then, for each detected feature, the sizing algorithm 500 determines a threshold S (shown and described below with respect to FIG. 6) and filters the feature binary mask to keep the pixels (or voxels) with values in A that are greater than the threshold S. Then, the sizing algorithm 500 repeats the following steps (a sketch of this loop appears after the list):
1. Morphological dilation, which is an image processing operation that expands the boundaries of objects in a binary or grayscale image. It works by applying a structuring element to the image, where a pixel is set to the maximum value within the element's neighborhood, making objects appear larger and filling small gaps;
2. Determining the threshold S; and
3. Filtering the feature binary mask to keep the pixels (or voxels) with values in A that are greater than the threshold S, to obtain $C_k$, while $k < k_{max}$ and $C_k \neq C_{k-1}$.
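Assuming a `determine_threshold` helper that implements the FIG. 6 procedure, the erode/dilate loop summarized above might be sketched as follows (the names, default iteration count, and SciPy structuring element are assumptions, not the exact implementation):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def size_feature(A, feature_mask, determine_threshold, k_max=10):
    """Sketch of the erode-then-dilate sizing loop summarized above.
    `determine_threshold` stands in for the FIG. 6 procedure."""
    S = determine_threshold(A, feature_mask)
    mask = feature_mask.astype(bool) & (A > S)  # erosion: keep strong voxels only
    for _ in range(k_max):
        grown = binary_dilation(mask)           # morphological dilation (3-axis kernel)
        S = determine_threshold(A, grown)       # recompute threshold on expanded outline
        new_mask = grown & (A > S)              # filter the dilated mask
        if np.array_equal(new_mask, mask):      # mask stabilized: C_k == C_{k-1}
            break
        mask = new_mask
    return mask
```

In practice the dilation would operate on a subvolume around the feature, as the text notes, to bound the cost of each iteration.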
[0057] FIG. 6 is a flow diagram of an example of a method 600 for the threshold determination of the sizing algorithm 500 in FIG. 5. The method 600 is an example of the threshold determination of block 510 in FIG. 5 and may be implemented using the processor circuit 102 or another processor circuit. This process is designed to accurately determine the threshold S used for refining the detection of features within ultrasound acquisition data $A \in \mathbb{R}^{u \times v \times w}$.
[0058] In some examples, the threshold corresponds to the maximum of 1) a 6 dB drop from the maximum amplitude in each coordinately equivalent region in the p previous and p subsequent end views, and 2) the median of the maximum amplitudes of the coordinately equivalent regions in the p previous and p subsequent end views, i.e., over a window of length 2 × p along the u-axis. In an erosion step, the threshold is applied to keep only the amplitudes above the threshold. In a dilation step and for a fixed number of iterations, the system performs dilation with a 3-axis kernel using a subvolume around the feature mask, recalculates the threshold using the expanded outline, and applies the threshold. If the feature mask is the same size as in the previous iteration after applying the threshold, the dilation step stops.
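Since a 6 dB amplitude drop corresponds to a factor of one half (20·log10(0.5) ≈ −6 dB), this threshold choice might be sketched as follows; `region_peaks`, holding the maximum amplitude of each coordinately equivalent region across the 2 × p end views, is a hypothetical input:

```python
import numpy as np

def end_view_threshold(region_peaks):
    """Max of (a) a 6 dB drop from the largest regional peak and
    (b) the median of the regional peaks (a sketch, not the exact spec)."""
    drop_6db = region_peaks.max() * 0.5   # -6 dB is roughly half amplitude
    return max(drop_6db, float(np.median(region_peaks)))
```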
[0059] The method 600 begins at block 502 with the ultrasound acquisition data $A \in \mathbb{R}^{u \times v \times w}$, which represents the acoustic imaging data captured during the ultrasound scan. Along with this data, at block 504 a model prediction binary mask $M \in \{0,1\}^{u \times v \times w}$ is used, which indicates the initial detection of features or defects within the acquired acoustic imaging data. The binary mask assists in distinguishing between areas of interest and the background, represented at block 508 by the ultrasound acquisition data without features matrix $A_{bg} \in (\mathbb{R} \cup \{\mathrm{NA}\})^{u \times v \times w}$, which was determined in FIG. 5. This data set excludes the detected features, allowing for a clearer analysis of the background noise and non-feature elements. These three blocks represent inputs 602 to the method 600.
[0060] At block 604, the method 600 obtains the feature binary mask $F \in \{0,1\}^{u_f \times v_f \times w_f}$ corresponding to a subset of the mask M:
$F = M[u_{f1}{:}u_{f2},\ v_{f1}{:}v_{f2},\ w_{f1}{:}w_{f2}]$, where $(u_{f1}, u_{f2}, v_{f1}, v_{f2}, w_{f1}, w_{f2})$ are indices determining the bounding box of the feature inside A. The feature binary mask is a refined version of the initial binary mask $M \in \{0,1\}^{u \times v \times w}$, focusing on the specific features detected.
[0061] At block 606, the feature binary mask $F \in \{0,1\}^{u_f \times v_f \times w_f}$ is the result obtained from the process of block 604.
[0062] At block 608, the method 600 determines the maximum projection along the u-axis of the feature binary mask $F \in \{0,1\}^{u_f \times v_f \times w_f}$ to obtain the 2D cross section of the feature mask.
[0063] At block 610, which is the output of the process of block 608, the method 600 computes $C \in \{\mathrm{NA}, 1\}^{v_f \times w_f}$ as the max projection of F along the u-axis to obtain a 2D feature outline:
$C_{jk} = 1$ if at least one voxel in $F[:, j, k]$ is 1, and $C_{jk} = \mathrm{NA}$ otherwise.
This produces a feature outline slice that provides a two-dimensional representation of the feature's outline, which is used in subsequent noise analysis.
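A one-function NumPy sketch of this max projection, again with NaN standing in for NA (the helper name is hypothetical):

```python
import numpy as np

def feature_outline_slice(F):
    """2D outline C: 1 where any voxel along the u-axis is set,
    NaN (standing in for NA) elsewhere."""
    any_u = F.astype(bool).any(axis=0)   # max projection along the u-axis
    return np.where(any_u, 1.0, np.nan)
```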
[0064] At block 612, the method 600 computes N, which is the noise region surrounding the feature, taken from the acquisition data background $A_{bg}$ over a window of p end views before and p end views after the feature along the u-axis.
[0065] At block 614, the method 600 restrains the noise to the feature outline by performing an element-wise multiplication of C with each 2D slice of N along the u-axis:
$N_{ijk} = N_{ijk} \cdot C_{jk}$ for $i \in [\max(u_{f1} - p,\, 0) .. \min(u_{f2} + p,\, u)]$
[0066] At block 616, the method 600 determines the max projection of N along the w-axis.
[0067] At block 618, which is the output of the process of block 616, the method 600 computes the noise region $N_{cscan}$ as the following:
[0068] $(N_{cscan})_{ij} = \max(N[i, j, :])$ if there exists $k$ such that $N_{ijk} \neq \mathrm{NA}$, and $(N_{cscan})_{ij} = \mathrm{NA}$ if $N_{ijk} = \mathrm{NA}$ for all $k \in [w_{f1} .. w_{f2}]$.
[0069] At block 620, the method 600 computes a threshold S using the mean and standard deviation of the defined values in $N_{cscan}$:
$S = \mathrm{mean}(N_{cscan}) + K \times \mathrm{std}(N_{cscan})$
The threshold S is the threshold used in block 510 and block 516 of FIG. 5, for example.
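A minimal sketch of this statistic, ignoring NA entries via NaN-aware reductions; the default value of K is an assumption, as the text does not fix it:

```python
import numpy as np

def noise_threshold(N_cscan, K=3.0):
    """S = mean(N_cscan) + K * std(N_cscan) over the defined (non-NA)
    values; K = 3.0 is an assumed value, not specified in the text."""
    return np.nanmean(N_cscan) + K * np.nanstd(N_cscan)
```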
[0070] FIG. 7 depicts acoustic imaging data displayed as an NDT image 700 of an object under inspection 702. The NDT image 700 is a 3D view of a portion of the object under inspection 702, where the object under inspection 702, e.g., a tubular structure, extends longitudinally from left to right in the NDT image 700. The system, such as the inspection system 100 of FIG. 1, acquires acoustic imaging data that includes end views of the object under inspection 702. The system displays the 3D NDT image 700 by “stacking” sequences of end views together to generate the 3D volume shown, which is shown in perspective view.
[0071] The object under inspection 702 includes a feature, namely a flaw 704. The NDT image 700 depicts end views 706 before the flaw 704 and end views 708 after the flaw 704. The “empty” areas between the flaw 704 and the end views 706 and the end views 708 represent other flaws that have been removed by the ML model so that the sizing algorithm may focus on one flaw at a time, such as the flaw 704.
[0072] The sizing algorithm 500 of FIG. 5 determines (at block 506) a representation of the noise in the acoustic imaging data adjacent to the feature, e.g., the flaw 704 of NDT image 700. In the example shown, the sizing algorithm 500 uses the pixels (or voxels) in the end views 706 and the end views 708 adjacent to the flaw 704 to determine the representation of the noise in the acoustic imaging data.
[0073] FIG. 8 depicts acoustic imaging data displayed as an NDT image 800. The NDT image 800 depicts the flaw 704 of FIG. 7 and the end views 706 of FIG. 7. The NDT image 800 itself is shown as an end view. As seen in FIG. 8, the sizing algorithm 500 defines the outline of the end views 706 to be similar to the outline of the flaw 704, which was defined by an ML model, such as the previously trained machine learning model 906 in FIG. 9. The sizing algorithm 500 then adjusts the size of the outline of the end views 706 to more precisely match the size of the outline of the flaw 704.
[0074] FIG. 9 is a flow diagram of an example of a method 900 of using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model. In the method 900, the sizing algorithm, such as the sizing algorithm 500 of FIG. 5, is performed after the ML inference. In contrast, the method 300 in FIG. 3 applied the sizing algorithm before training the ML model.
[0075] A system, such as the inspection system 100 of FIG. 1, acquires acoustic imaging data 902 of an object under inspection, such as the target 158 of FIG. 1, with a feature 904, e.g., flaw, back wall echo, front wall echo, etc. The acoustic imaging data 902 is displayed as an NDT image in FIG. 9. The acoustic imaging data 902 includes a plurality of pixels 908 (for 2D images) or voxels (for 3D images). Each pixel in the acoustic imaging data 902 is a representation of an amplitude of the signal that was received by the transducer array 152 of FIG. 1.
[0076] The system applies the acoustic imaging data 902 to a previously trained machine learning model 906 to generate a rough outline 910, e.g., bounding box, of the feature 904 in the acoustic imaging data 902. As seen in FIG. 9, the rough outline 910 likely includes quite a few pixels 908 that are not part of the feature 904, where the darker the pixel 908 the more likely the pixel 908 is part of the feature 904.
[0077] The system applies a sizing algorithm 912 to the outline 910 outputted by the previously trained machine learning model 906 to adjust the size and shape of the outline 910 and generate an adjusted outline 914. In the non-limiting example shown in FIG. 9, the rough outline 910 generated by the previously trained machine learning model 906, shown as a bounding box, has been adjusted so that the new outline 914 more closely resembles the feature 904.
[0078] The system then generates an NDT image 916 with the adjusted outline 914 of the feature 904. The system then displays on a user interface, such as the display 110 of FIG. 1, the NDT image 916 with the adjusted outline 914 of the feature 904.
[0079] In some examples, using either the techniques of FIG. 3 or FIG. 9, the system determines and displays one or more dimensions of the feature, such as its width and/or length. The system may determine the dimensions of an outlined feature in the NDT image using metadata processing, for example. The acoustic imaging data includes metadata containing registration information that maps each pixel's (or voxel's) location to physical dimensions, such as millimeters. When the final outline of a feature is determined through the erosion and dilation steps of the sizing algorithm 500 described above, the system may use this registration metadata to convert the pixel (or voxel) measurements into physical dimensions for display. For example, if an end view is 100 pixels wide, each pixel column corresponds to a specific position in millimeters based on the registration information stored in the NDT image file format. By analyzing the pixel positions that make up the outline of the feature and applying the registration metadata, the system calculates and displays one or more physical dimensions of the feature on the user interface.
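A sketch of the pixel-to-millimeter conversion described above, assuming the registration metadata reduces to a per-axis scale factor (names hypothetical):

```python
import numpy as np

def outline_extent_mm(mask, mm_per_pixel):
    """Physical extent of an outline mask: bounding-box size in pixels
    scaled by a per-axis registration factor (e.g., mm per pixel)."""
    idx = np.argwhere(mask)                        # coordinates of outline pixels
    extent_px = idx.max(axis=0) - idx.min(axis=0) + 1
    return extent_px * np.asarray(mm_per_pixel)    # e.g., (height_mm, width_mm)
```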
[0080] In addition, in some examples, the system displays statistics about the feature and/or the noise. For features, the system determines statistical measurements of the amplitude values from pixels (or voxels) within the outlined region, such as the mean amplitude, median amplitude, and standard deviation. This statistical analysis provides a characterization of the feature, as pixels (or voxels) within the outlined region have a specific amplitude value received by the transducer, such as the transducer array 152 of FIG. 1.
[0081] Similarly, statistical measurements may be calculated for the noise regions adjacent to the detected features. These statistics may be displayed on a user interface, which displays one or more of the average amplitude values, median values, and other statistical metrics that help characterize both the features and surrounding noise regions. These statistical measurements are particularly valuable to users who need to analyze the amplitude characteristics within detected features and compare them to the surrounding noise levels.
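Both the feature statistics and the noise statistics described above reduce to simple reductions over a masked region; a minimal sketch (hypothetical helper name):

```python
import numpy as np

def region_statistics(amplitude, mask):
    """Mean, median, and standard deviation of the amplitude values
    inside an outlined feature region or an adjacent noise region."""
    values = amplitude[mask.astype(bool)]
    return {
        "mean": float(np.mean(values)),
        "median": float(np.median(values)),
        "std": float(np.std(values)),
    }
```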
[0082] FIG. 10 is a flow diagram of an example of a method 1000 of training processing circuitry using machine learning for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection. The method 1000 is an example of the method 300 shown in FIG. 3.
[0083] At block 1002, the method 1000 includes acquiring acoustic imaging data of a first object having a first feature. For example, a system such as the inspection system 100 acquires the acoustic imaging data 302 of a first object having a feature 304, as seen in FIG. 3.
[0084] At block 1004, the method 1000 includes outlining the first feature in the acoustic imaging data. For example, the processor circuit 102 of the inspection system 100 of FIG. 1 generates an outline 306 of the feature 304 in the acoustic imaging data 302 of FIG. 3.
[0085] At block 1006, the method 1000 includes applying a sizing algorithm to the outlined first feature to adjust a size of the outline and generate a ground truth dataset. For example, the processor circuit 102 of the inspection system 100 of FIG. 1 applies the sizing algorithm 308 of FIG. 3, which is described in detail with respect to FIG. 5, to the outline 306 to adjust a size of the outline and generate a ground truth dataset. In FIG. 3, the precise outline 310, along with other ground truth data such as other precise outlines, forms the ground truth dataset 312.
[0086] At block 1008, the method 1000 includes training a machine learning model using the ground truth dataset. For example, the trained ML model 314 of FIG. 3 is trained by the processor circuit 102 of the inspection system 100 of FIG. 1 using the ground truth dataset 312.
[0087] In some examples, the method 1000 includes applying the trained machine learning model to acoustic imaging data of a second object having a second feature, generating an NDT image with an outline of the second feature generated by the trained machine learning model, and displaying, on a user interface, the NDT image with the outline of the second feature.
[0088] In some examples, the method 1000 includes displaying, on the user interface, one or more dimensions of the second feature. In other examples, the method 1000 includes displaying, on the user interface, statistics about the second feature and/or noise.
[0089] In some examples, applying the sizing algorithm to the outlined first feature includes determining a representation of noise in the acoustic imaging data adjacent to the first feature, determining a threshold value of the noise, and adjusting, based on the threshold value, the size of the outline of the first feature.
[0090] In some examples, the method 1000 includes an erosion step in which adjusting, based on the threshold value, the size of the outline of the first feature includes comparing a representation of an amplitude of a signal of a pixel or a voxel to the threshold value, including the pixel or the voxel in the outline of the first feature when the amplitude is equal to or greater than the threshold value, and excluding the pixel or the voxel from the outline of the first feature when the amplitude is less than the threshold value.
[0091] In some examples, the acoustic imaging data includes a sequence of end views of the first object, and wherein determining the representation of the noise in the acoustic imaging data adjacent to the first feature includes determining the representation of noise in the end views adjacent to the first feature. An example is shown and described with respect to FIG. 7 and FIG. 8.
[0092] FIG. 11 is a flow diagram of an example of a method 1100 of using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model. The method 1100 is an example of the method 900 shown in FIG. 9.
[0093] At block 1102, the method 1100 includes acquiring acoustic imaging data of the object having a feature. For example, a system such as the inspection system 100 acquires the acoustic imaging data 902 of an object having a feature 904, as seen in FIG. 9.
[0094] At block 1104, the method 1100 includes applying the acoustic imaging data to the previously trained machine learning model to generate an outline of the feature in the imaging data. For example, the system applies the acoustic imaging data 902 to the previously trained machine learning model 906 of FIG. 9 to generate an outline 910 of the feature 904 in the acoustic imaging data 902.
[0095] At block 1106, the method 1100 includes applying a sizing algorithm to the outlined feature to adjust the outline of the feature outputted by the previously trained machine learning model. For example, the system applies the sizing algorithm 912 of FIG. 9, which is described in detail with respect to FIG. 5, to the outline 910 of the feature 904.
[0096] At block 1108, the method 1100 includes generating an NDT image with the adjusted outline of the feature. For example, the system generates an NDT image 916 with the adjusted outline 914, as seen in FIG. 9.
[0097] At block 1110, the method 1100 includes displaying, on a user interface, the NDT image with the adjusted outline of the feature. The system displays, such as on the display 110 of FIG. 1, the NDT image 916 with the outline 914 of the feature 904.
[0098] In some examples, the method 1100 includes applying the trained machine learning model to acoustic imaging data of a second object having a second feature, generating an NDT image with an outline of the second feature generated by the trained machine learning model, and displaying, on a user interface, the NDT image with the outline of the second feature.
[0099] In some examples, the method 1100 includes displaying, on the user interface, one or more dimensions of the second feature. In other examples, the method 1100 includes displaying, on the user interface, statistics about the second feature and/or noise.
[0100] In some examples, applying the sizing algorithm to the outlined first feature includes determining a representation of noise in the acoustic imaging data adjacent to the first feature, determining a threshold value of the noise, and adjusting, based on the threshold value, the size of the outline of the first feature.
[0101] In some examples, the method 1100 includes an erosion step in which adjusting, based on the threshold value, the size of the outline of the first feature includes comparing a representation of an amplitude of a signal of a pixel or a voxel to the threshold value, including the pixel or the voxel in the outline of the first feature when the amplitude is equal to or greater than the threshold value, and excluding the pixel or the voxel from the outline of the first feature when the amplitude is less than the threshold value.
[0102] In some examples, the acoustic imaging data includes a sequence of end views of the first object, and wherein determining the representation of the noise in the acoustic imaging data adjacent to the first feature includes determining the representation of noise in the end views adjacent to the first feature. An example is shown and described with respect to FIG. 7 and FIG. 8.
[0103] FIG. 12 shows an example of a machine learning module 1200 that may implement various techniques of this disclosure. The machine learning module 1200 is an example of the trained ML model 314 of FIG. 3. The machine learning module 1200 may be implemented in whole or in part by one or more computing devices. In some examples, a training module 1202 may be implemented by a different device than a prediction module 1204. In these examples, the model 1214 may be created on a first machine, e.g., a desktop computer, and then sent to a second machine, e.g., a handheld device.
[0104] The machine learning module 1200 utilizes a training module 1202 and a prediction module 1204. The machine learning module 1200 may implement a computerized method of training processing circuitry, such as the processor 1302 of FIG. 13, using machine learning for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection. Additionally or alternatively, the machine learning module 1200 may implement a computerized method of using processing circuitry, such as the processor 1302 of FIG. 13, for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model.
[0105] The training module 1202 inputs training data 1206 into a selector module 1208 that selects a training vector from the training data. The selector module 1208 may include data normalization/standardization and cleaning, such as to remove any useless information. In some examples, the model 1214 itself may perform aspects of the selector module, such as gradient boosted trees.
[0106] The training module may train the machine learning module 1200 on a plurality of flaw or no flaw conditions. The training data 1206 may include, for example, ground truth data. Ground truth data may include synthetic data from the simulated flaws and geometry. Aside from synthetic datasets, ground truth labels may also be obtained using other NDT methods, such as radiography, CT scans, and laser surface profiling. In addition, the training data 1206 may include one or more of simulations of a plurality of types of material flaws in the material, simulations of a plurality of positions of material flaws in the material, or simulations of a plurality of ghost echoes in the material to simulate no flaw conditions.
[0107] The training data 1206 may be labeled. In other examples, the training data may not be labeled, and the model may be trained using feedback data, such as through a reinforcement learning method.
[0108] The selector module 1208 selects a training vector 1210 from the training data 1206. The selected data may fill the training vector 1210 and includes a set of the training data that is determined to be predictive of a classification. Information chosen for inclusion in the training vector 1210 may be all the training data 1206 or, in some examples, may be a subset of all the training data 1206. The training vector 1210 may be utilized (along with any applicable labels) by the machine learning algorithm 1212 to produce a model 1214 (a trained machine learning model). In some examples, data structures other than vectors may be used. The machine learning algorithm 1212 may learn one or more layers of a model.
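A minimal sketch of the kind of normalization and vector selection a selector module might perform; the per-frame min-max scaling rule is an assumption for illustration.

```python
import numpy as np

def select_training_vector(frame):
    """Normalize an amplitude frame to [0, 1] and flatten it into a vector."""
    lo, hi = frame.min(), frame.max()
    scaled = (frame - lo) / (hi - lo) if hi > lo else np.zeros_like(frame)
    return scaled.ravel()  # flat vector; a CNN would instead keep the 2-D form
```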
[0109] Example layers may include convolutional layers, dropout layers, pooling/upsampling layers, SoftMax layers, and the like. An example model is a neural network, in which each layer comprises a plurality of neurons that take a plurality of inputs, weight the inputs, and input the weighted inputs into an activation function to produce an output, which may then be sent to another layer. Example activation functions may include a Rectified Linear Unit (ReLU), and the like. Layers of the model may be fully or partially connected.
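One way such layers could be composed, sketched here with PyTorch (an assumed framework; the layer sizes are illustrative, and a sigmoid output stands in for SoftMax in this binary, per-pixel case):

```python
import torch
import torch.nn as nn

class OutlineNet(nn.Module):
    """Toy segmentation network built from convolutional, pooling,
    dropout, and upsampling layers with ReLU activations."""

    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Dropout2d(0.25),              # dropout layer
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2),     # upsampling layer
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Per-pixel probability that a pixel belongs to the feature outline.
        return torch.sigmoid(self.decode(self.encode(x)))
```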
[0110] In the prediction module 1204, data 1216 may be input to the selector module 1218. The data 1216 may include an acoustic imaging data set. The selector module 1218 may operate in the same manner as, or differently from, the selector module 1208 of the training module 1202. In some examples, the selector modules 1208 and 1218 are the same module or different instances of the same module. The selector module 1218 produces a vector 1220, which is input into the model 1214 to generate an output NDT image of the specimen, resulting in an image 1222.
[0111] For example, the weightings and/or network structure learned by the training module 1202 may be executed on the vector 1220 by applying vector 1220 to a first layer of the model 1214 to produce inputs to a second layer of the model 1214, and so on until the image is output. As previously noted, other data structures may be used other than a vector (e.g., a matrix).
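A hedged sketch of that prediction-time forward pass, reusing the hypothetical OutlineNet above; the 0.5 cutoff and the even frame dimensions (so pooling and upsampling restore the original size) are assumptions.

```python
import numpy as np
import torch

def predict_outline(model, frame):
    """Run one acoustic frame through the trained model; return a boolean outline mask."""
    model.eval()
    with torch.no_grad():
        x = torch.from_numpy(frame.astype(np.float32))[None, None]  # (1, 1, H, W)
        prob = model(x)[0, 0].numpy()
    return prob > 0.5
```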
[0112] In some examples, there may be hidden layers between the input and output layers. In some examples, a convolutional neural network (CNN) may be connected to the S-matrix, and the S-matrix may be kept in matrix form (not flattened into a vector).
[0113] The training module 1202 may train the machine learning module 1200 on a plurality of flaw or no flaw conditions, such as described above. The training module 1202 may operate in an offline manner to train the model 1214. The prediction module 1204, however, may be designed to operate in an online manner. It should be noted that the model 1214 may be periodically updated via additional training and/or user feedback. For example, additional training data 1206 may be provided to refine the model by the training module 1202.
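One plausible shape for such a periodic update is a short fine-tuning pass on newly labeled (e.g., user-corrected) frames; the optimizer, loss, and loader shapes below are assumptions consistent with the OutlineNet sketch above, not a prescribed procedure.

```python
import torch
import torch.nn as nn

def refine_model(model, loader, epochs=2, lr=1e-4):
    """Fine-tune on additional labeled data to refine the model."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCELoss()  # matches the sigmoid output of OutlineNet
    model.train()
    for _ in range(epochs):
        for frames, masks in loader:          # tensors shaped (N, 1, H, W)
            opt.zero_grad()
            loss = loss_fn(model(frames), masks.float())
            loss.backward()
            opt.step()
```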
[0114] The machine learning algorithm 1212 may be selected from among many different potential supervised or unsupervised machine learning algorithms. Examples of supervised learning algorithms include artificial neural networks, convolutional neural networks, Bayesian networks, instance-based learning, support vector machines, decision trees (e.g., Iterative Dichotomiser 3, C4.5, Classification and Regression Tree (CART), Chi-squared Automatic Interaction Detector (CHAID), and the like), random forests, linear classifiers, quadratic classifiers, k-nearest neighbor, linear regression, logistic regression, a region-based CNN, a fully convolutional network (for semantic segmentation), a Mask R-CNN algorithm for instance segmentation, and hidden Markov models. Examples of unsupervised learning algorithms include expectation-maximization algorithms, vector quantization, and the information bottleneck method.
[0115] In this manner, the machine learning module 1200 of FIG. 12 may assist in training processing circuitry for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection and, when trained, using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection, in accordance with this disclosure.
[0116] The techniques shown and described in this document may be performed using a portion or an entirety of an inspection system 100 as shown in FIG. 1 or otherwise using a machine 1300 as discussed below in relation to FIG. 13.
[0117] FIG. 13 illustrates a block diagram of an example of a machine 1300 upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform. In alternative embodiments, the machine 1300 may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine 1300 may operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine 1300 may act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine 1300 may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a smart phone, a web appliance, a network router, switch or bridge, a server computer, a database, conference room equipment, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. In various embodiments, machine 1300 may perform one or more of the processes described above. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.
[0118] Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms (all referred to hereinafter as “modules”). Modules are tangible entities (e.g., hardware) capable of performing specified operations and are configured or arranged in a certain manner. In an example, circuits are arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example, the whole or part of one or more computer systems (e.g., a standalone, client, or server computer system) or one or more hardware processors are configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations. In an example, the software may reside on a non-transitory computer readable storage medium or other machine readable medium. In an example, the software, when executed by the underlying hardware of the module, causes the hardware to perform the specified operations.
[0119] Accordingly, the term “module” is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor is configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.
[0120] Machine (e.g., computer system) 1300 may include a hardware processor 1302 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 1304, and a static memory 1306, some or all of which may communicate with each other via an interlink 1308 (e.g., bus). The machine 1300 may further include a display unit 1310, an alphanumeric input device 1312 (e.g., a keyboard), and a user interface (UI) navigation device 1314 (e.g., a mouse). In an example, the display unit 1310, input device 1312, and UI navigation device 1314 may be a touch screen display. The machine 1300 may additionally include a storage device (e.g., drive unit) 1316, a signal generation device 1318 (e.g., a speaker), a network interface device 1320, and one or more sensors 1321, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine 1300 may include an output controller 1328, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).
[0121] The storage device 1316 may include a machine readable medium 1322 on which is stored one or more sets of data structures or instructions 1324 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 1324 may also reside, completely or at least partially, within the main memory 1304, within static memory 1306, or within the hardware processor 1302 during execution thereof by the machine 1300. In an example, one or any combination of the hardware processor 1302, the main memory 1304, the static memory 1306, or the storage device 1316 may constitute machine readable media.
[0122] While the machine readable medium 1322 is illustrated as a single medium, the term "machine readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 1324.
[0123] The term “machine readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 1300 and that cause the machine 1300 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions. Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media. Specific examples of machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; Random Access Memory (RAM); Solid State Drives (SSD); and CD-ROM and DVD-ROM disks. In some examples, machine readable media may include non-transitory machine readable media. In some examples, machine readable media may include machine readable media that is not a transitory propagating signal.
[0124] The instructions 1324 may further be transmitted or received over a communications network 1326 using a transmission medium via the network interface device 1320. The machine 1300 may communicate with one or more other machines utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone Service (POTS) networks, and wireless data networks (e.g., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, the IEEE 802.16 family of standards known as WiMax®), the IEEE 802.15.4 family of standards, a Long Term Evolution (LTE) family of standards, a Universal Mobile Telecommunications System (UMTS) family of standards, peer-to-peer (P2P) networks, among others. In an example, the network interface device 1320 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 1326. In an example, the network interface device 1320 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. In some examples, the network interface device 1320 may wirelessly communicate using Multiple User MIMO techniques.
[0125] Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms. Modules are tangible entities (e.g., hardware) capable of performing specified operations and are configured or arranged in a certain manner. In an example, circuits are arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example, the whole or part of one or more computer systems (e.g., a standalone, client, or server computer system) or one or more hardware processors are configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations. In an example, the software may reside on a machine-readable medium. In an example, the software, when executed by the underlying hardware of the module, causes the hardware to perform the specified operations.
[0126] Accordingly, the term “module” is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor is configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.
[0127] Various embodiments may be implemented fully or partially in software and/or firmware. This software and/or firmware may take the form of instructions contained in or on a non-transitory computer-readable storage medium. Those instructions may then be read and executed by one or more processors to enable performance of the operations described herein. The instructions may be in any suitable form, such as but not limited to source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. Such a computer-readable medium may include any tangible non-transitory medium for storing information in a form readable by one or more computers, such as but not limited to read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory; etc.
Various Notes
[0128] Each of the non-limiting claims or examples described herein may stand on its own, or may be combined in various permutations or combinations with one or more of the other examples.
[0129] The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are also referred to herein as “examples.” Such examples may include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more claims thereof), either with respect to a particular example (or one or more claims thereof), or with respect to other examples (or one or more claims thereof) shown or described herein.
[0130] In the event of inconsistent usages between this document and any documents so incorporated by reference, the usage in this document controls.
[0131] In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In this document, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, composition, formulation, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.
[0132] Method examples described herein may be machine or computer-implemented at least in part. Some examples may include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods may include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code may include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, in an example, the code may be tangibly stored on one or more volatile, non-transitory, or non-volatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media may include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact discs and digital video discs), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like.
[0133] The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more claims thereof) may be used in combination with each other. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments may be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims

What is claimed is:
1. A method of training processing circuitry using machine learning for detecting and sizing a feature in a non-destructive testing (NDT) image of an object under inspection, the method comprising:
    acquiring acoustic imaging data of a first object having a first feature;
    outlining the first feature in the acoustic imaging data;
    applying a sizing algorithm to the outlined first feature to adjust a size of the outline and generate a ground truth dataset; and
    training a machine learning model using the ground truth dataset.
2. The method of claim 1, comprising:
    applying the trained machine learning model to acoustic imaging data of a second object having a second feature;
    generating an NDT image with an outline of the second feature generated by the trained machine learning model; and
    displaying, on a user interface, the NDT image with the outline of the second feature.
3. The method of claim 2, comprising:
    displaying, on the user interface, one or more dimensions of the second feature.
4. The method of claim 2, comprising:
    displaying, on the user interface, statistics about the second feature and/or noise.
5. The method of claim 1, wherein applying the sizing algorithm to the outlined first feature comprises:
    determining a representation of noise in the acoustic imaging data adjacent to the first feature;
    determining a threshold value of the noise; and
    adjusting, based on the threshold value, the size of the outline of the first feature.
6. The method of claim 5, wherein adjusting, based on the threshold value, the size of the outline of the first feature includes:
    comparing a representation of an amplitude of a signal of a pixel or a voxel to the threshold value;
    including the pixel or the voxel in the outline of the first feature when the amplitude is equal to or greater than the threshold value; and
    excluding the pixel or the voxel from the outline of the first feature when the amplitude is less than the threshold value.
7. The method of claim 5, wherein the acoustic imaging data includes a sequence of end views of the first object, and wherein determining the representation of the noise in the acoustic imaging data adjacent to the first feature comprises:
    determining the representation of noise in the end views adjacent to the first feature.
8. The method of claim 1, wherein the first feature is a flaw.
9. The method of claim 1, wherein the first feature is a front wall echo or a back wall echo.
10. A method of using processing circuitry for detecting and sizing a feature in non-destructive testing (NDT) images of an object under inspection using a previously trained machine learning model, the method comprising:
    acquiring acoustic imaging data of the object having a feature;
    applying the acoustic imaging data to the previously trained machine learning model to generate an outline of the feature in the imaging data;
    applying a sizing algorithm to the outlined feature to adjust the outline of the feature outputted by the previously trained machine learning model;
    generating an NDT image with the adjusted outline of the feature; and
    displaying, on a user interface, the NDT image with the adjusted outline of the feature.
11. The method using processing circuitry of claim 10, comprising:
    displaying, on the user interface, one or more dimensions of the feature.
12. The method using processing circuitry of claim 10, comprising:
    displaying, on the user interface, statistics about the feature and/or noise.
13. The method using processing circuitry of claim 10, wherein applying the sizing algorithm to the outlined feature comprises:
    determining a representation of noise in the acoustic imaging data adjacent to the feature;
    determining a threshold value of the noise; and
    adjusting, based on the threshold value, the size of the outline of the feature.
14. The method using processing circuitry of claim 13, wherein adjusting, based on the threshold value, the size of the outline of the feature includes:
    comparing a representation of an amplitude of a pixel or a voxel to the threshold value;
    including the pixel or the voxel in the outline of the feature when the amplitude is equal to or greater than the threshold value; and
    excluding the pixel or the voxel from the outline of the feature when the amplitude is less than the threshold value.
15. The method using processing circuitry of claim 14, wherein the acoustic imaging data includes a sequence of end views of the object, and wherein determining the representation of the noise in the acoustic imaging data adjacent to the feature comprises:
    determining the representation of noise in the end views adjacent to the feature.
16. The method using processing circuitry of claim 10, wherein the feature is a flaw.
17. The method using processing circuitry of claim 10, wherein the feature is a front wall echo or a back wall echo.