
WO2025145247A1 - Non-destructive testing imaging using machine learning - Google Patents


Info

Publication number
WO2025145247A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
machine learning
specimen
acquisition data
learning model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CA2024/051723
Other languages
French (fr)
Inventor
Chi-Hang Kwan
Guillaume Painchaud-April
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Evident Canada Inc
Original Assignee
Evident Canada Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Evident Canada Inc filed Critical Evident Canada Inc
Publication of WO2025145247A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00 Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/44 Processing the detected response signal, e.g. electronic circuits specially adapted therefor
    • G01N29/4481 Neural networks
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00 Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/04 Analysing solids
    • G01N29/06 Visualisation of the interior, e.g. acoustic microscopy
    • G01N29/0654 Imaging
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Definitions

  • the present inventors have recognized that use of beamforming techniques such as TFM may be computationally intensive, and interpretation of images generated using TFM beamforming may present various challenges, such as involving analysis by skilled technicians to determine whether indicia in such images are representative of flaws or are merely artifacts or benign features.
  • the present inventors have also recognized that beamforming techniques such as TFM are generally restricted to a specified acoustic propagation mode.
  • a flaw map generated using such an approach may be easier to interpret than imaging provided using TFM or PCI beamforming.
  • such a technique may be used in parallel with TFM or PCI beamforming to provide an alternate view or an overlay to aid in interpretation of images generated using TFM or PCI.
  • FIG. 1 illustrates generally an example comprising an acoustic inspection system 100, such as may be used to perform one or more techniques described herein.
  • the acoustic inspection system 100 of FIG. 1 is an example of an acoustic imaging modality, such as an acoustic phased array system, that may implement various techniques of this disclosure.
  • the inspection system 100 may include a test instrument 140, such as a hand-held or portable assembly.
  • the test instrument 140 may be electrically coupled to a probe assembly, such as using a multi-conductor interconnect 130.
  • the probe assembly 150 may include one or more electroacoustic transducers, such as a transducer array 152 including respective transducers 154A through 154N.
  • the transducer array may follow a linear or curved contour, or may include an array of elements extending in two axes, such as providing a matrix of transducer elements.
  • the elements need not be square in footprint or arranged along a straight-line axis. Element size and pitch may be varied according to the inspection application.
  • a modular probe assembly 150 configuration may be used, such as to allow a test instrument 140 to be used with various different probe assemblies 150.
  • the transducer array 152 includes piezoelectric transducers, such as may be acoustically coupled to a target 158 (e.g., an object under test) through a coupling medium 156.
  • the coupling medium may include a fluid or gel or a solid membrane (e.g., an elastomer or other polymer material), or a combination of fluid, gel, or solid structures.
  • an acoustic transducer assembly may include a transducer array coupled to a wedge structure comprising a rigid thermoset polymer having known acoustic propagation characteristics (for example, Rexolite® available from C-Lec Plastics Inc.), and water may be injected between the wedge and the structure under test as a coupling medium 156 during testing.
  • the test instrument 140 may include digital and analog circuitry, such as a frontend circuit 122 including one or more transmit signal chains, receive signal chains, or switching circuitry (e.g., transmit/receive switching circuitry).
  • the transmit signal chain may include amplifier and filter circuitry, such as to provide transmit pulses for delivery through an interconnect 130 to a probe assembly 150 for insonification of the target 158, such as to image or otherwise detect a flaw 160 on or within the target 158 structure by receiving scattered or reflected acoustic energy elicited in response to the insonification.
  • FIG. 1 shows a single probe assembly 150 and a single transducer array 152
  • other configurations may be used, such as multiple probe assemblies connected to a single test instrument 140, or multiple transducer arrays 152 used with a single or multiple probe assemblies 150 for tandem inspection.
  • a test protocol may be performed using coordination between multiple test instruments 140, such as in response to an overall test scheme established from a master test instrument 140, or established by another remote system such as a computing facility 108 or general purpose computing device such as a laptop 132, tablet, smart-phone, desktop computer, or the like.
  • the test scheme may be established according to a published standard or regulatory requirement, and may be performed upon initial fabrication or on a recurring basis for ongoing surveillance, as illustrative examples.
  • the processor circuit 102 may be coupled to a memory circuit, such as to execute instructions that cause the test instrument 140 to perform one or more of acoustic transmission, acoustic acquisition, processing, or storage of data relating to an acoustic inspection, or to otherwise perform techniques as shown and described herein.
  • the test instrument 140 may be communicatively coupled to other portions of the system 100, such as using a wired or wireless communication interface 120.
  • FIG. 2 depicts various NDT images.
  • an acoustic inspection system such as the acoustic inspection system 100 of FIG. 1, is used to acquire FMC acoustic data and generate a TFM image 200.
  • the TFM image 200 indicates a flaw 202.
  • the acoustic inspection system may generate and display an output NDT image 204, such as on the display 110 of FIG. 1.
  • the flaw 202 is shown in front view in the output NDT image 204 and, in this particular image, represents a hole drilled through a specimen.
  • the ground truth image 208 confirms the accuracy of the output NDT image 204 by also displaying the flaw 202.
  • the acoustic inspection system uses one or more acoustic wave propagation modes 210 (also referred to in this disclosure as “propagation modes”) to generate the TFM images.
  • the acoustic inspection system generates a TFM image 212 using an LLT propagation mode (longitudinal-longitudinal-transverse).
  • the acoustic inspection system generates a TFM image 214 using a TLT propagation mode (transverse-longitudinal-transverse) and a TFM image 216 using a TTT propagation mode (transverse-transverse-transverse).
  • the various propagation modes are different representations of the same physical object, for instance the specimen shown in the schematics 208 and 220.
  • the schematic 220 represents a drill hole, shown as flaw 202, extending to the side wall of a specimen, and the schematic 218 is a representation of the same drill hole from a side view.
  • the operator is tasked with mentally associating the evidence found across multiple propagation mode representations of the part to provide an assessment of the quality of the specimen being inspected.
  • the acoustic inspection system such as the acoustic inspection system 100 of FIG. 1, bypasses any intermediate imaging steps (e.g., generation of TFM images or PCI images) to obtain flaw/geometry labels directly from sparse acquisition data, such as FMC acoustic data or partial FMC acoustic data.
  • a number of FMCs and corresponding known schematic images, such as 208 and 220, are combined to train the machine learning model to output a schematic representation such as 204 or 218.
  • the operator of the acoustic inspection system does not need to decide a priori the number of propagation modes needed, and the technique may be applied for arbitrarily complex specimen geometries.
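The training idea sketched in the bullets above (pairing raw FMC acquisitions with known schematic ground-truth images) can be illustrated with a toy example. The linear model, array sizes, learning rate, and single gradient step below are hypothetical stand-ins for an actual neural network and training loop, not the patent's method.

```python
import numpy as np

# Toy training sketch: fit a model to map flattened FMC acquisitions to
# known schematic (ground-truth) images. A single linear layer and one
# mean-squared-error gradient step stand in for real network training.

rng = np.random.default_rng(2)
n_pairs, fmc_size, img_size = 32, 8 * 8 * 64, 16 * 16

X = rng.standard_normal((n_pairs, fmc_size))   # flattened FMC acquisitions
Y = rng.standard_normal((n_pairs, img_size))   # schematic ground-truth images

W = np.zeros((fmc_size, img_size))             # stand-in model parameters

def mse(W):
    """Mean squared error between predicted and ground-truth images."""
    return np.mean((X @ W - Y) ** 2)

loss_before = mse(W)
grad = 2.0 * X.T @ (X @ W - Y) / (n_pairs * img_size)  # dLoss/dW
W -= 1e-3 * grad                                       # one training step
loss_after = mse(W)

print(loss_after < loss_before)   # the step reduces the training loss
```

In practice the linear map would be replaced by a deep network (e.g., the autoencoder-style module described below in FIG. 3) trained over many such pairs.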
  • FIG. 3 depicts an example of a machine learning module 300 that may implement various techniques of this disclosure.
  • the machine learning module 300 is an example of a previously trained machine learning model that may be implemented in whole or in part by one or more computing devices, such as the laptop 132 of FIG. 1.
  • the machine learning module 300 may be an autoencoder of some type, such as a modified U-Net generator.
  • the fully dense layers 304 serve as a bridge connecting the FMC data with the desired NDT image output.
  • the fully dense layers 304 act as a bridge that connects the sparse acquisition data 308 with the desired output NDT image 312, ensuring that the most relevant information is passed on while potentially transforming or conditioning it to enhance the performance of the decoder 306.
  • the fully dense layers 304 are important for fine-tuning the representation learned by the encoder 302 for use in the generative process.
  • the output 314 from the fully dense layers 304 is then reshaped, transforming the flattened one-dimensional data from the fully dense layers 304 back into a multi-dimensional format that may be processed by the decoder 306.
  • FIG. 4 is a flow diagram of an example of a computerized method using processing circuitry for generating a non-destructive testing (NDT) image of a specimen under inspection directly from sparse acquisition data using a previously trained machine learning model.
  • the machine 800 is a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a smart phone, a web appliance, a network router, switch or bridge, a server computer, a database, conference room equipment, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), and other computer cluster configurations.
  • Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms (all referred to hereinafter as “modules”).
  • Modules are tangible entities (e.g., hardware) capable of performing specified operations and are configured or arranged in a certain manner.
  • circuits are arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module.
  • the whole or part of one or more computer systems e.g., a standalone, client or server computer system
  • one or more hardware processors are configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations.
  • the software may reside on a non-transitory computer readable storage medium or other machine readable medium.
  • the software when executed by the underlying hardware of the module, causes the hardware to perform the specified operations.
  • the term “module” is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein.
  • each of the modules need not be instantiated at any one moment in time.
  • the modules comprise a general-purpose hardware processor configured using software
  • the general-purpose hardware processor is configured as respective different modules at different times.
  • Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.
  • Machine 800 may include a hardware processor 802 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 804, and a static memory 806, some or all of which may communicate with each other via an interlink 808 (e.g., bus).
  • the machine 800 may further include a display unit 810, an alphanumeric input device 812 (e.g., a keyboard), and a user interface (UI) navigation device 814 (e.g., a mouse).
  • the display unit 810, input device 812, and UI navigation device 814 may be a touch screen display.
  • the machine 800 may additionally include a storage device (e.g., drive unit) 816, a signal generation device 818 (e.g., a speaker), a network interface device 820, and one or more sensors 821, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor.
  • the machine 800 may include an output controller 828, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC)) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).
  • the storage device 816 may include a machine readable medium 822 on which is stored one or more sets of data structures or instructions 824 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein.
  • the instructions 824 may also reside, completely or at least partially, within the main memory 804, within static memory 806, or within the hardware processor 802 during execution thereof by the machine 800.
  • one or any combination of the hardware processor 802, the main memory 804, the static memory 806, or the storage device 816 may constitute machine readable media.
  • while the machine readable medium 822 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 824.
  • machine readable medium may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 800 and that cause the machine 800 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions.
  • Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media.
  • machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; Random Access Memory (RAM); Solid State Drives (SSD); and CD-ROM and DVD-ROM disks.
  • the instructions 824 may further be transmitted or received over a communications network 826 using a transmission medium via the network interface device 820.
  • the machine 800 may communicate with one or more other machines utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.).
  • Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, a Long Term Evolution (LTE) family of standards, a Universal Mobile Telecommunications System (UMTS) family of standards, peer-to-peer (P2P) networks, among others.
  • the network interface device 820 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 826.
  • the network interface device 820 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques.
  • the network interface device 820 may wirelessly communicate using Multiple User MIMO techniques.
  • Various embodiments are implemented fully or partially in software and/or firmware.
  • This software and/or firmware may take the form of instructions contained in or on a non-transitory computer-readable storage medium. Those instructions may then be read and executed by one or more processors to enable performance of the operations described herein.
  • the instructions are in any suitable form, such as but not limited to source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like.
  • Such a computer-readable medium may include any tangible non-transitory medium for storing information in a form readable by one or more computers, such as but not limited to read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory; etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Pathology (AREA)
  • Immunology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Biochemistry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Acoustics & Sound (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Signal Processing (AREA)
  • Investigating Or Analyzing Materials By The Use Of Ultrasonic Waves (AREA)

Abstract

A machine learning approach (e.g., using a neural network) may be provided with sparse acquisition data, such as FMC data or partial FMC acoustic data, from an acquisition as an input and may be used to generate output imagery, such as a flaw map (e.g., an image including features or regions labeled as flaws), without requiring intermediate TFM beamforming. For example, such an approach may include application of a model to acquired FMC data to generate an image directly, once the model has been established (e.g., trained). Such a model may generate a flaw map incorporating acoustic information from multiple propagation modes contemporaneously. A flaw map generated using such an approach may be easier to interpret than imaging provided using TFM or PCI beamforming.

Description

NON-DESTRUCTIVE TESTING IMAGING USING MACHINE LEARNING
CLAIM OF PRIORITY
[0001] This application claims the benefit of priority to U.S. Provisional Application Serial No. 63/618,121, titled “FMC-TO-NDT IMAGING USING MACHINE LEARNING” to Chi-Hang Kwan et al., filed January 5, 2024, which is incorporated by reference herein in its entirety.
FIELD OF THE DISCLOSURE
[0002] This document pertains generally, but not by way of limitation, to imaging techniques for use in Non-Destructive Testing (NDT), and more particularly, to generation of images from acoustic data, where such generation may be performed using a machine learning technique.
BACKGROUND
[0003] Non-destructive testing (NDT) may refer to use of one or more different techniques to inspect regions on or within an object, such as to ascertain whether flaws or defects exist, or to otherwise characterize the object being inspected. Examples of nondestructive test approaches may include use of an eddy-current testing approach where electromagnetic energy is applied to the object and resulting induced currents on or within the object are detected, with the values of a detected current (or a related impedance) providing an indication of the structure of the object under test, such as to indicate a presence of a crack, void, porosity, or other inhomogeneity.
[0004] Another approach for NDT may include use of an acoustic inspection technique, such as where one or more electroacoustic transducers are used to insonify a region on or within the object under test, and acoustic energy that is scattered or reflected may be detected and processed. Such scattered or reflected energy may be referred to as an acoustic echo signal. Generally, such an acoustic inspection scheme involves use of acoustic frequencies in an ultrasonic range of frequencies, such as including pulses having energy in a specified range that may include values from, for example, a few hundred kilohertz to tens of megahertz, as an illustrative example.
SUMMARY OF THE DISCLOSURE
[0005] Using various techniques of this disclosure, a machine learning approach (e.g., using a neural network) may be provided with sparse acquisition data, such as FMC data or partial FMC acoustic data, from an acquisition as an input and may be used to generate output imagery, such as a flaw map (e.g., an image including features or regions labeled as flaws), without requiring intermediate TFM beamforming. For example, such an approach may include application of a model to acquired FMC data to generate an image directly, once the model has been established (e.g., trained). Such a model may generate a flaw map incorporating acoustic information from multiple propagation modes contemporaneously. A flaw map generated using such an approach may be easier to interpret than imaging provided using TFM or PCI beamforming. In an example, such a technique may be used in parallel with TFM or PCI beamforming to provide an alternate view or an overlay to aid in interpretation of images generated using TFM or PCI.
[0006] In some aspects, this disclosure is directed to a computerized method using processing circuitry for generating a non-destructive testing (NDT) image of a specimen under inspection directly from sparse acquisition data using a previously trained machine learning model, the computerized method comprising: acquiring the sparse acquisition data of the specimen; applying the previously trained machine learning model to the sparse acquisition data; generating, using the previously trained machine learning model while bypassing intermediate imaging, the NDT image that includes a geometry of the specimen; and displaying the NDT image.
[0007] In some aspects, this disclosure is directed to an ultrasound inspection system for generating a non-destructive testing (NDT) image of a specimen under inspection directly from sparse acquisition data using a previously trained machine learning model, the ultrasound inspection system comprising: an ultrasonic probe assembly; and a processor in communication with the ultrasonic probe assembly, the processor configured for: acquiring the sparse acquisition data of the specimen; applying the previously trained machine learning model to the sparse acquisition data; generating, using the previously trained machine learning model while bypassing intermediate imaging, the NDT image that includes geometry of the specimen; and displaying the NDT image.
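As a rough illustration of the claimed sequence (acquiring sparse acquisition data, applying the previously trained model, generating the NDT image while bypassing intermediate beamforming, and displaying it), the following skeleton uses hypothetical stand-in functions; the random "model" is only a placeholder for a previously trained network.

```python
import numpy as np

# Hedged skeleton of the claimed method steps. All function bodies are
# hypothetical stand-ins; no TFM/PCI beamforming step appears anywhere.

def acquire_sparse_data(n_elem=8, n_samp=64):
    """Stand-in for probe acquisition of FMC (or partial FMC) data."""
    rng = np.random.default_rng(3)
    return rng.standard_normal((n_elem, n_elem, n_samp))

def apply_model(data, img_shape=(16, 16)):
    """Stand-in for the previously trained machine learning model."""
    rng = np.random.default_rng(4)
    w = rng.standard_normal((img_shape[0] * img_shape[1], data.size))
    return (w @ data.reshape(-1)).reshape(img_shape)

def display(image):
    """Stand-in for rendering the NDT image on a display."""
    print("NDT image", image.shape)

data = acquire_sparse_data()     # acquire the sparse acquisition data
ndt_image = apply_model(data)    # apply the model; generate the image directly
display(ndt_image)               # display the NDT image
```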
[0008] In some aspects, this disclosure is directed to a computerized method of training processing circuitry using machine learning in a system for non-destructive testing (NDT) of a specimen, the computerized method comprising: training a machine learning model to be used to generate NDT images directly from sparse acquisition data.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.
[0010] FIG. 1 illustrates generally an example comprising an acoustic inspection system, such as may be used to perform one or more techniques described herein.
[0011] FIG. 2 depicts various NDT images.
[0012] FIG. 3 depicts an example of a machine learning module that may implement various techniques of this disclosure.
[0013] FIG. 4 is a flow diagram of an example of a computerized method using processing circuitry for generating a non-destructive testing (NDT) image of a specimen under inspection directly from sparse acquisition data using a previously trained machine learning model.
[0014] FIG. 5 shows another example of a machine learning module that may implement various techniques of this disclosure.
[0015] FIG. 6 is a flow diagram of an example of a computerized method of training processing circuitry using machine learning in a system for non-destructive testing (NDT) of a specimen.
[0016] FIG. 7 illustrates a computerized method in accordance with one embodiment.
[0017] FIG. 8 illustrates a block diagram of an example of a machine upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform.
DETAILED DESCRIPTION
[0018] Acoustic testing, such as ultrasound-based inspection, may include focusing or beam-forming techniques to aid in construction of data plots or images representing a region of interest within the specimen under inspection. Use of an array of ultrasound transducer elements may include use of a phased-array beamforming approach and may be referred to as Phased Array Ultrasound Testing (PAUT). For example, a delay-and-sum beamforming technique may be used, such as including coherently summing time-domain representations of received acoustic signals from respective transducer elements or apertures.

[0019] In another approach, a “Full Matrix Capture” (FMC) technique may be used where one or more elements in an array (or apertures defined by such elements) are used to transmit an acoustic pulse and other elements are used to receive scattered or reflected acoustic energy, and a matrix is constructed of time-series (e.g., A-scan) representations corresponding to a sequence of transmit-receive cycles in which the transmissions are occurring from different elements (or corresponding apertures) in the array. Beamforming and imaging may be performed using a technique such as a “Total Focusing Method” (TFM), in which a coherent summation may be performed using A-scan data acquired using an FMC technique. In a manner similar to TFM imaging, a phase-based approach may be used for one or more of acquisition, storage, or subsequent analysis. Such a phase-based approach may include coherent summation of normalized or quantized representations of A-scan data corresponding to phase information. Such an approach may be referred to as a “phase coherence imaging” (PCI) beamforming technique.
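The delay-and-sum summation at the heart of TFM can be illustrated with a brief sketch (illustrative only: the function name is invented, and a direct-contact, single-medium geometry with a constant sound speed is assumed; this is not the implementation of any system described herein):

```python
import math

def tfm_pixel(fmc, elem_x, x, z, c, fs):
    """Coherently sum, over every transmit-receive pair, the FMC A-scan
    sample whose round-trip time of flight maps to the image point
    (x, z).  fmc[t][r] is the A-scan for transmitter t and receiver r
    sampled at fs Hz, elem_x holds element x-positions on the surface,
    and c is the sound speed in the medium."""
    total = 0.0
    for t, xt in enumerate(elem_x):
        for r, xr in enumerate(elem_x):
            # travel time: transmitter -> pixel -> receiver
            tof = (math.hypot(x - xt, z) + math.hypot(x - xr, z)) / c
            idx = int(round(tof * fs))
            if idx < len(fmc[t][r]):
                total += fmc[t][r][idx]
    return total
```

Evaluating this sum over a grid of (x, z) points yields the TFM image; the cost scales with the number of pixels times the number of transmit-receive pairs, which illustrates why TFM beamforming is computationally intensive.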
[0020] The present inventors have recognized that use of beamforming techniques such as TFM may be computationally intensive, and interpretation of images generated using TFM beamforming may present various challenges, such as involving analysis by skilled technicians to determine whether indicia in such images are representative of flaws or are merely artifacts or benign features. The present inventors have also recognized that beamforming techniques such as TFM are generally restricted to a specified acoustic propagation mode.
[0021] Using various techniques of this disclosure, a machine learning approach (e.g., using a neural network) may be provided with sparse acquisition data, such as FMC data or partial FMC acoustic data, from an acquisition as an input and may be used to generate output imagery, such as a flaw map (e.g., an image including features or regions labeled as flaws), without requiring intermediate TFM beamforming. For example, such an approach may include application of a model to acquired FMC data to generate an image directly, once the model has been established (e.g., trained). Such a model may generate a flaw map incorporating acoustic information from multiple propagation modes contemporaneously. A flaw map generated using such an approach may be easier to interpret than imaging provided using TFM or PCI beamforming. In an example, such a technique may be used in parallel with TFM or PCI beamforming to provide an alternate view or an overlay to aid in interpretation of images generated using TFM or PCI.
[0022] FIG. 1 illustrates generally an example comprising an acoustic inspection system 100, such as may be used to perform one or more techniques described herein. The acoustic inspection system 100 of FIG. 1 is an example of an acoustic imaging modality, such as an acoustic phased array system, that may implement various techniques of this disclosure.
[0023] The inspection system 100 may include a test instrument 140, such as a hand-held or portable assembly. The test instrument 140 may be electrically coupled to a probe assembly, such as using a multi-conductor interconnect 130. The probe assembly 150 may include one or more electroacoustic transducers, such as a transducer array 152 including respective transducers 154A through 154N. The transducer array may follow a linear or curved contour, or may include an array of elements extending in two axes, such as providing a matrix of transducer elements. The elements need not be square in footprint or arranged along a straight-line axis. Element size and pitch may be varied according to the inspection application.
[0024] A modular probe assembly 150 configuration may be used, such as to allow a test instrument 140 to be used with various different probe assemblies 150. Generally, the transducer array 152 includes piezoelectric transducers, such as may be acoustically coupled to a target 158 (e.g., an object under test) through a coupling medium 156. The coupling medium may include a fluid or gel or a solid membrane (e.g., an elastomer or other polymer material), or a combination of fluid, gel, or solid structures. For example, an acoustic transducer assembly may include a transducer array coupled to a wedge structure comprising a rigid thermoset polymer having known acoustic propagation characteristics (for example, Rexolite® available from C-Lec Plastics Inc.), and water may be injected between the wedge and the structure under test as a coupling medium 156 during testing.
[0025] The test instrument 140 may include digital and analog circuitry, such as a frontend circuit 122 including one or more transmit signal chains, receive signal chains, or switching circuitry (e.g., transmit/receive switching circuitry). The transmit signal chain may include amplifier and filter circuitry, such as to provide transmit pulses for delivery through an interconnect 130 to a probe assembly 150 for insonification of the target 158, such as to image or otherwise detect a flaw 160 on or within the target 158 structure by receiving scattered or reflected acoustic energy elicited in response to the insonification.
[0026] Although FIG. 1 shows a single probe assembly 150 and a single transducer array 152, other configurations may be used, such as multiple probe assemblies connected to a single test instrument 140, or multiple transducer arrays 152 used with a single or multiple probe assemblies 150 for tandem inspection. Similarly, a test protocol may be performed using coordination between multiple test instruments 140, such as in response to an overall test scheme established from a master test instrument 140, or established by another remote system such as a computing facility 108 or general purpose computing device such as a laptop 132, tablet, smart-phone, desktop computer, or the like. The test scheme may be established according to a published standard or regulatory requirement, and may be performed upon initial fabrication or on a recurring basis for ongoing surveillance, as illustrative examples.
[0027] The receive signal chain of the front-end circuit 122 may include one or more filters or amplifier circuits, along with an analog-to-digital conversion facility, such as to digitize echo signals received using the probe assembly 150. Digitization may be performed coherently, such as to provide multiple channels of digitized data aligned or referenced to each other in time or phase. The front-end circuit 122 may be coupled to and controlled by one or more processor circuits, such as a processor circuit 102 included as a portion of the test instrument 140. The processor circuit 102 may be coupled to a memory circuit, such as to execute instructions that cause the test instrument 140 to perform one or more of acoustic transmission, acoustic acquisition, processing, or storage of data relating to an acoustic inspection, or to otherwise perform techniques as shown and described herein. The test instrument 140 may be communicatively coupled to other portions of the system 100, such as using a wired or wireless communication interface 120.
[0028] For example, performance of one or more techniques as shown and described herein may be accomplished on-board the test instrument 140 or using other processing or storage facilities such as using a computing facility 108 or a general-purpose computing device such as a laptop 132, tablet, smart-phone, desktop computer, or the like. For example, processing tasks that would be undesirably slow if performed on-board the test instrument 140 or beyond the capabilities of the test instrument 140 may be performed remotely (e.g., on a separate system), such as in response to a request from the test instrument 140. Similarly, storage of imaging data or intermediate data such as A-line matrices of time-series data may be accomplished using remote facilities communicatively coupled to the test instrument 140. The test instrument may include a display 110, such as for presentation of configuration information or results, and an input device 112 such as including one or more of a keyboard, trackball, function keys or soft keys, mouse-interface, touch-screen, stylus, or the like, for receiving operator commands, configuration information, or responses to queries.
[0029] The acoustic inspection system 100 may acquire acoustic imaging data, such as FMC data or virtual source aperture (VSA) data, of a material using an acoustic imaging modality, such as an acoustic phased array system. The processor circuit 102 may then generate an acoustic imaging data set, such as a scattering matrix (S-matrix), plane wave matrix, or other matrix or data set, corresponding to an acoustic propagation mode, such as pulse echo direct (TT), self-tandem (TT-T), and/or pulse echo with skip (TT-TT).
[0030] As described in more detail below, the processor circuit 102 or another processor circuit may apply a previously trained machine learning model to the sparse acquisition data and generate an NDT image of the specimen while bypassing intermediate imaging, such as TFM imaging. In some examples, the processor circuit 102 automatically bypasses the intermediate imaging (such as TFM imaging) without first requesting user input to do so, enabling direct generation of the NDT image from the sparse acquisition data.
[0031] In other examples, the processor circuit 102 may cause the display 110 to ask the user whether to bypass the intermediate imaging. When user input is requested, the processor circuit 102 can either proceed with direct NDT image generation by bypassing the intermediate imaging, or generate intermediate images like TFM images before producing the final NDT image showing flaws and geometry of the specimen. This flexibility allows operators to compare the direct approach with traditional intermediate imaging methods when desired.
[0032] FIG. 2 depicts various NDT images. In previous approaches, an acoustic inspection system, such as the acoustic inspection system 100 of FIG. 1, is used to acquire FMC acoustic data and generate a TFM image 200. The TFM image 200 indicates a flaw 202. Using the TFM image 200, the acoustic inspection system may generate and display an output NDT image 204, such as on the display 110 of FIG. 1. The flaw 202 is shown in front view in the output NDT image 204 and, in this particular image, represents a hole drilled through a specimen. The ground truth image 208 confirms the accuracy of the output NDT image 204 by also displaying the flaw 202.
[0033] In some examples, the acoustic inspection system may generate a PCI image 206 from the FMC acoustic data instead of the TFM image 200. Using the PCI image 206, the acoustic inspection system may generate and display an output NDT image 204. TFM uses full amplitude A-scans with 12-16 bit digitized amplitude signals, for example, while PCI uses only the phase information (positive/negative) of the signal, reducing it to just 1 bit of data per sample. As such, PCI offers significant computational benefits compared to TFM by reducing bandwidth requirements, reducing power requirements, and reducing memory usage for data sets.
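The 1-bit reduction underlying PCI can be sketched as follows (a minimal illustration of the sign-only representation; the function name and the 1-for-non-negative convention are assumptions made for the example):

```python
def binarize_ascan(ascan):
    """Keep only the sign of each digitized sample: 1 when the signal
    is non-negative, 0 when it is negative -- one bit per time
    position instead of 12-16 bits of amplitude."""
    return [1 if sample >= 0 else 0 for sample in ascan]
```

Under this representation, a 16-bit A-scan occupies one-sixteenth of the storage, which is the source of the bandwidth, power, and memory savings noted above.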
[0034] The acoustic inspection system uses one or more acoustic wave propagation modes 210 (also referred to in this disclosure as “propagation modes”) to generate the TFM images. The acoustic inspection system generates a TFM image 212 using an LLT propagation mode (longitudinal-longitudinal-transverse). Similarly, the acoustic inspection system generates a TFM image 214 using a TLT propagation mode (transverse-longitudinal-transverse) and a TFM image 216 using a TTT propagation mode (transverse-transverse-transverse). The various propagation modes are different representations of the same physical object, for instance the specimen schematics 208 and 220. The schematic 220 represents a drill hole, shown as flaw 202, extending to the side wall of a specimen, and the schematic 218 is a representation of the same drill hole from a side view. In this conventional method, the operator is tasked with mentally associating the evidence found across multiple propagation mode representations of the part to provide an assessment of the quality of the specimen being inspected.
[0035] Using various techniques of this disclosure, the acoustic inspection system, such as the acoustic inspection system 100 of FIG. 1, bypasses any intermediate imaging steps (e.g., generation of TFM images or PCI images) to obtain flaw/geometry labels directly from sparse acquisition data, such as FMC acoustic data or partial FMC acoustic data. A number of FMCs and corresponding known schematic images, such as 208 and 220, are combined to train the model to output a schematic representation such as 204 or 218. With this approach, the operator of the acoustic inspection system does not need to decide a priori the number of propagation modes needed, and the technique may be applied for arbitrarily complex specimen geometries.
[0036] FIG. 3 depicts an example of a machine learning module 300 that may implement various techniques of this disclosure. The machine learning module 300 is an example of a previously trained machine learning model that may be implemented in whole or in part by one or more computing devices, such as the laptop 132 of FIG. 1. In some examples, the machine learning module 300 may be an autoencoder of some type, such as a modified U-net generator.
[0037] The machine learning module 300 processes the sparse acquisition data of the specimen through multiple stages to generate NDT images. The sparse acquisition data may include FMC acoustic data or partial FMC acoustic data, for example. FMC involves transmitting from one element in an ultrasonic array while receiving signals on all the elements of the array. This process is repeated for each element in the array, resulting in a complete matrix of transmit-receive combinations. Partial FMC reduces the number of transmit-receive combinations compared to full FMC such that not every possible transmit-receive pair is recorded, which may save time and reduce the data size.

[0038] In some examples, the sparse acquisition data of the specimen includes amplitude-based acoustic data. In other examples, the sparse acquisition data of the specimen includes phase-based acoustic data without amplitude-based acoustic data. Phase-based acoustic data refers to data where the full amplitude values have been discarded from the signal and the remaining signal consists only of the sine/cosine of the instantaneous phase. The expression “phase-based acoustic data” also includes the use of the sign of the acoustic signal, which is either positive or negative. Specifically, instead of storing A-scans with 12 or 16 bits of digitized amplitude data or the digitized sine/cosine of the phase data, for example, the acoustic inspection system only keeps 1 bit (0 or 1) per time position representing whether the signal phase is positive or negative at each time point along the binarized A-scan. Using just the phase information (0s and 1s) rather than full amplitude values still produces clean, usable images while providing significant benefits including reduced bandwidth requirements, reduced power consumption, and reduced memory usage for data storage.
[0039] The encoder 302 receives as input sparse acquisition data 308 of the specimen, which was acquired by the acoustic inspection system. In some examples, the sparse acquisition data includes three-dimensional (3D) FMC acoustic data (or 3D partial FMC acoustic data) organized according to transmit, receive, and time dimensions. For example, a 32-element phased array probe generates a large data structure having 32 transmission steps, with each transmission having 32 reception steps (creating a 32x32 combination of transmit-receive pairs), and each pair including thousands of time sample points. This 3D organization is significant because the probe elements are physically arranged sequentially, creating meaningful spatial correlations in the data that may be utilized by the neural network. The data effectively forms a cube containing all possible combinations of transmitter elements, receiver elements, and their corresponding time series measurements. This comprehensive dataset captures the full range of acoustic interactions within the specimen, though it may also be implemented in a partial or sparse form while maintaining effectiveness.
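The [transmit][receive][time] organization described above can be sketched as follows (the helper names and the every-other-transmitter decimation are illustrative assumptions, not the disclosed acquisition scheme):

```python
def fmc_cube(n_elems, n_samples):
    """Allocate an FMC acquisition as a nested [transmit][receive][time]
    structure: one A-scan of n_samples time points for every
    transmit-receive pair of an n_elems-element array."""
    return [[[0.0] * n_samples for _ in range(n_elems)]
            for _ in range(n_elems)]

def partial_fmc(cube, tx_step=2):
    """Partial/sparse FMC: keep only every tx_step-th transmission, so
    not every transmit-receive pair is recorded."""
    return cube[::tx_step]
```

For a 32-element array, the full cube holds 32 x 32 A-scans, while the decimated version retains 16 x 32, halving the acquisition and storage cost.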
[0040] As the data moves through the encoder 302, the number of features may be gradually increased while the size of the dimensions is decreased, such as using 3D convolutions, to process the input data. The encoder output 310 is then flattened, which transforms the multi-dimensional data structure from the encoder 302 into a one-dimensional format before it enters the fully dense layers 304.
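The encoder's shape evolution can be traced numerically under the assumption of stride-2 convolutions that halve each dimension while doubling the feature count (the stage count and feature sizes below are illustrative, not taken from this disclosure):

```python
def encoder_shape_walk(shape, feats=8, stages=3):
    """Trace how the encoder reshapes the data: each stage is assumed
    to be a stride-2 3D convolution that halves every dimension of the
    (transmit, receive, time) volume while doubling the feature count.
    Returns the (features, dims) after each stage and the flattened
    length handed to the fully dense layers."""
    walk = []
    for _ in range(stages):
        shape = tuple(max(1, d // 2) for d in shape)
        walk.append((feats, shape))
        feats *= 2
    # flattened size = features x product of remaining dimensions
    flat = walk[-1][0]
    for d in walk[-1][1]:
        flat *= d
    return walk, flat
```

For a 32 x 32 x 1024 input, three such stages yield a 4 x 4 x 128 volume with 32 features, i.e., a flattened vector of 65,536 values entering the fully dense layers.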
[0041] The fully dense layers 304 serve as a bridge connecting the sparse acquisition data 308 with the desired output NDT image 312, ensuring that the most relevant information is passed on while potentially transforming or conditioning it to enhance the performance of the decoder 306. The fully dense layers 304 are important for fine-tuning the representation learned by the encoder 302 for use in the generative process. The output 314 from the fully dense layers 304 is then reshaped, which transforms the flattened one-dimensional data back into a multi-dimensional format that may be processed by the decoder 306.
[0042] The decoder 306 processes the reshaped output from the fully dense layers 304, such as using 2D transpose convolutions. The decoder 306 gradually decreases the number of features while increasing the size of the dimensions to match the desired output resolution. This process enables the reconstruction of an interpretable image. The decoder 306 ultimately generates the final output NDT image 312, which represents the flaws and geometry of the specimen under inspection.
[0043] The output NDT image 312 includes flaws and/or geometry of the specimen under inspection. The geometry refers to multiple aspects of the specimen, such as the overall shape and features of the specimen (e.g., the specimen is an inch thick), as well as the geometry of any flaws present (e.g., circular flaws or cracks). The geometry may include features like weld bevels, the contours of holes, or the overall thickness of the specimen.
[0044] If present, flaws appear as distinct features in the output NDT image 312 and may be labeled or classified. For example, the acoustic inspection system uses color coding, such as where red indicates a flaw (like a flat-bottom hole), green indicates the sidewall, and blue indicates base material. A flaw may include various things like a crack, a void, a hole, or even a foreign material inclusion (like copper in steel). The acoustic inspection system may detect both the geometric shape of these flaws (their contours and position) as well as potentially their material composition.
[0045] Geometry represents the overall structural features of the specimen (like its thickness, edges, or designed features), while flaws represent deviations or anomalies within that geometry (like cracks, voids, or material inconsistencies). The ability of the acoustic inspection system to show both aspects simultaneously helps operators understand not just where flaws are located but also their context within the specimen's overall structure.
[0046] The machine learning module 300 enables direct transformation from sparse acquisition data to interpretable output NDT images while bypassing intermediate imaging steps such as TFM imaging. For example, the machine learning module 300 compresses the dense FMC data into meaningful features during the encoding stage through encoder 302, processes this information through the fully dense layers 304, and then reconstructs an interpretable image through the decoder 306.
[0047] FIG. 4 is a flow diagram of an example of a computerized method using processing circuitry for generating a non-destructive testing (NDT) image of a specimen under inspection directly from sparse acquisition data using a previously trained machine learning model.
[0048] At block 402, the computerized method 400 includes acquiring the sparse acquisition data of the specimen. An acoustic inspection system, such as the acoustic inspection system 100 of FIG. 1, may use a probe assembly to acquire FMC acoustic data, partial FMC acoustic data, or some other sparse acquisition data of the specimen.
[0049] At block 404, the computerized method 400 includes applying a previously trained machine learning model to the sparse acquisition data. One or more computing devices, such as the laptop 132 of the acoustic inspection system 100 of FIG. 1, may implement a previously trained machine learning model, such as the machine learning module 300 of FIG. 3, to which sparse acquisition data may be applied.
[0050] At block 406, the computerized method 400 includes generating, using the previously trained machine learning model while bypassing intermediate imaging, the NDT image that includes a geometry of the specimen. The computing device(s) may generate, such as using the machine learning module 300 of FIG. 3, the output NDT image, such as the output NDT image 312 of FIG. 3 or the output NDT image 204 of FIG. 2.
[0051] At block 408, the computerized method 400 includes displaying the NDT image, such as on the display 110 of FIG. 1.
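The four blocks of the computerized method 400 can be sketched as a single pipeline (the callable interfaces here are hypothetical stand-ins for the probe, model, and display components):

```python
def generate_ndt_image(acquire, model, display):
    """Run the method of FIG. 4: acquire sparse data (block 402),
    apply the previously trained model to generate the NDT image
    directly, with no intermediate TFM or PCI image (blocks 404-406),
    and display the result (block 408)."""
    sparse_data = acquire()
    ndt_image = model(sparse_data)
    display(ndt_image)
    return ndt_image
```

Because the model maps acquisition data straight to the image, no beamforming step appears between acquisition and display.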
[0052] FIG. 5 shows another example of a machine learning module 500 that may implement various techniques of this disclosure. The machine learning module 500 is another example of the machine learning module 300 of FIG. 3. The machine learning module 500 may be implemented in whole or in part by one or more computing devices. In some examples, a training module 502 may be implemented by a different device than a prediction module 504. In these examples, the model 514 may be created on a first machine, e.g., a desktop computer, and then sent to a second machine, e.g., a handheld device.
[0053] The machine learning module 500 utilizes a training module 502 and a prediction module 504. The training module 502 may implement a computerized method of training processing circuitry, such as the processor 802 of FIG. 8, using machine learning in a system for non-destructive testing (NDT) to generate an NDT image of a specimen under inspection directly from sparse acquisition data.
[0054] The training module 502 inputs training data 506 into a selector module 508 that selects a training vector from the training data. The selector module 508 may include data normalization/standardization and cleaning, such as to remove any useless information. In some examples, the model 514 itself may perform aspects of the selector module, such as gradient boosted trees.
[0055] The training module may train the machine learning module 500 on a plurality of flaw or no flaw conditions. The training data 506 may include, for example, simulated sparse acquisition data generated from simulated flaws and geometry. Corresponding ground truth data may include synthetic data from the simulated flaws and geometry used to generate the simulated sparse acquisition data. Aside from synthetic datasets, ground truth labels may also be obtained using other NDT methods, such as radiography, CT scans, and laser surface profiling. In addition, the training data 506 may include one or more of simulations of a plurality of types of material flaws in the material, simulations of a plurality of positions of material flaws in the material, or simulations of a plurality of ghost echoes in the material to simulate no flaw conditions.
[0056] The training data may be supplemented by experimental sparse acquisition data, such as a reading on a calibration block having the material and geometry characteristics corresponding to the planned inspection. In some examples, a model may be pre-trained with simulated flaws and have a few epochs of training on the calibration block.
[0057] The training data 506 may be labeled. In other examples, the training data may not be labeled, and the model may be trained using feedback data, such as through a reinforcement learning method.
[0058] The selector module 508 selects a training vector 510 from the training data 506. The selected data may fill the training vector 510 and includes a set of the training data that is determined to be predictive of a classification. Information chosen for inclusion in the training vector 510 may be all of the training data 506 or, in some examples, a subset of the training data 506. The training vector 510 may be utilized (along with any applicable labels) by the machine learning algorithm 512 to produce a model 514 (a trained machine learning model). In some examples, data structures other than vectors may be used. The machine learning algorithm 512 may learn one or more layers of a model.
[0059] Example layers may include convolutional layers, dropout layers, pooling/upsampling layers, softmax layers, and the like. An example model may be a neural network, where each layer comprises a plurality of neurons that take a plurality of inputs, weight the inputs, and input the weighted inputs into an activation function to produce an output, which may then be sent to another layer. Example activation functions may include a Rectified Linear Unit (ReLU), and the like. Layers of the model may be fully or partially connected.
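A single fully connected layer of the kind just described can be sketched as follows (illustrative only; in practice the weights and biases would come from training):

```python
def relu(x):
    """Rectified Linear Unit: pass positive values, zero out negatives."""
    return x if x > 0 else 0.0

def dense_layer(inputs, weights, biases):
    """One fully connected layer: each neuron forms the weighted sum of
    its inputs plus a bias, then applies the ReLU activation.
    weights[j][i] connects input i to neuron j."""
    return [relu(sum(w, ) if False else relu(sum(wi * xi for wi, xi in zip(ws, inputs)) + b))
            for ws, b in zip(weights, biases)] if False else [
        relu(sum(wi * xi for wi, xi in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)]
```

Stacking such layers, with the output of one feeding the next, yields the fully or partially connected networks referred to above.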
[0060] In the prediction module 504, data 516 may be input to the selector module 518. The data 516 may include an acoustic imaging data set, such as an S-matrix. The selector module 518 may operate the same as, or differently from, the selector module 508 of the training module 502. In some examples, the selector modules 508 and 518 are the same module or different instances of the same module. The selector module 518 produces a vector 520, which is input into the model 514 to generate an output NDT image of the specimen, resulting in an image 522.
[0061] For example, the weightings and/or network structure learned by the training module 502 may be executed on the vector 520 by applying the vector 520 to a first layer of the model 514 to produce inputs to a second layer of the model 514, and so on until the image is output. As previously noted, data structures other than a vector (e.g., a matrix) may be used.
[0062] In some examples, there may be hidden layers between the input and output. In some examples, a convolutional neural network (CNN) may be connected to the S-matrix, and the S-matrix may be kept in matrix form rather than flattened into a vector.
[0063] The training module may train the machine learning module 500 on a plurality of flaw or no flaw conditions, such as described above. The training module 502 may operate in an offline manner to train the model 514. The prediction module 504, however, may be designed to operate in an online manner. It should be noted that the model 514 may be periodically updated via additional training and/or user feedback. For example, additional training data 506 may be provided to refine the model by the training module 502.
[0064] The machine learning algorithm 512 may be selected from among many different potential supervised or unsupervised machine learning algorithms. Examples of supervised learning algorithms include artificial neural networks, convolutional neural networks, Bayesian networks, instance-based learning, support vector machines, decision trees (e.g., Iterative Dichotomiser 3, C4.5, Classification and Regression Tree (CART), Chi-squared Automatic Interaction Detector (CHAID), and the like), random forests, linear classifiers, quadratic classifiers, k-nearest neighbor, linear regression, logistic regression, a region-based CNN, a full CNN (for semantic segmentation), a mask R-CNN algorithm for instance segmentation, and hidden Markov models. Examples of unsupervised learning algorithms include expectation-maximization algorithms, vector quantization, and the information bottleneck method.
[0065] In this manner, the machine learning module 500 of FIG. 5 may assist in implementing a computerized method using processing circuitry for generating a non-destructive testing (NDT) image of a specimen under inspection directly from sparse acquisition data using a previously trained machine learning model, in accordance with this disclosure.
[0066] The techniques shown and described in this document may be performed using a portion or an entirety of an inspection system 100 as shown in FIG. 1 or otherwise using a machine 800 as discussed below in relation to FIG. 8.
[0067] FIG. 6 is a flow diagram of an example of a computerized method 600 of training processing circuitry using machine learning in a system for non-destructive testing (NDT) of a specimen. The computerized method 600 is for training a machine learning model, such as the machine learning module 300 of FIG. 3, to be used to generate NDT images directly from sparse acquisition data.
[0068] Sparse acquisition data 602 is applied to the machine learning model 604, e.g., a modified U-net generator, such as the machine learning module 300 of FIG. 3. The machine learning model 604 bypasses intermediate imaging and generates an output NDT image 606 that includes a geometry of the specimen and, in some examples, a flaw of the specimen.
[0069] The computerized method 600 compares the output of the machine learning model, e.g., the output NDT image 606, to the corresponding ground truth data 608, e.g., including the geometry and flaw(s) (if present) of the specimen, to generate a loss function 610. The loss function 610 information is fed back to the machine learning model 604, and the machine learning model 604 adjusts its parameters based on the loss function information. The computerized method 600 iteratively repeats applying the sparse acquisition data, comparing the output to the ground truth data, and adjusting the machine learning model to optimize the machine learning model.
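The compare-adjust-repeat loop of FIG. 6 can be sketched with a deliberately tiny stand-in model (a single parameter w with output w * x and a squared-error loss; the real model 604 has many parameters, but the structure of the loop is the same):

```python
def train_step(w, data, truth, lr=0.05):
    """One iteration: generate outputs, compare them to ground truth
    via a squared-error loss, and adjust the parameter along the loss
    gradient (the feedback path of FIG. 6)."""
    loss = 0.0
    grad = 0.0
    for x, y in zip(data, truth):
        err = w * x - y          # model output vs. ground truth
        loss += err * err
        grad += 2.0 * err * x    # d(loss)/dw for this sample
    n = len(data)
    return w - lr * grad / n, loss / n

def train(w, data, truth, iters=200):
    """Iteratively repeat the apply-compare-adjust cycle."""
    loss = float("inf")
    for _ in range(iters):
        w, loss = train_step(w, data, truth)
    return w, loss
```

As iterations proceed, the loss shrinks and the parameter converges toward the value that best reproduces the ground truth.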
[0070] FIG. 7 is a flow diagram of another example of a computerized method 700 of training processing circuitry using machine learning in a system for non-destructive testing (NDT) of a specimen. At block 702, the computerized method 700 includes training a machine learning model to be used to generate NDT images directly from sparse acquisition data. For example, the computerized method 700 includes training the machine learning module 300 of FIG. 3. In some examples, block 702 includes block 704 through block 712.

[0071] At block 704, the computerized method 700 includes receiving training data including sparse acquisition data and corresponding ground truth data representing flaws and geometry of specimens under inspection.
[0072] At block 706, the computerized method 700 includes applying the sparse acquisition data to the machine learning model. For example, the sparse acquisition data 602 is applied to the machine learning model 604 in FIG. 6.
[0073] At block 708, the computerized method 700 includes comparing output of the machine learning model to the corresponding ground truth data to generate a loss function. For example, the output NDT image 606 is compared to the ground truth data 608 in FIG. 6 to generate the loss function 610.
[0074] At block 710, the computerized method 700 includes adjusting parameters of the machine learning model based on the loss function. For example, the loss function 610 is fed back to the machine learning model 604 in FIG. 6 and the machine learning model 604 adjusts parameters as needed.
[0075] In block 712, the computerized method 700 includes iteratively repeating the applying, comparing and adjusting to optimize the machine learning model.
[0076] FIG. 8 illustrates a block diagram of an example of a machine upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform. In alternative embodiments, the machine 800 may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine 800 may operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine 800 may act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine 800 may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a smart phone, a web appliance, a network router, switch or bridge, a server computer, a database, conference room equipment, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. In various embodiments, machine 800 may perform one or more of the processes described above. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.

[0077] Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms (all referred to hereinafter as
“modules”). Modules are tangible entities (e.g., hardware) capable of performing specified operations and are configured or arranged in a certain manner. In an example, circuits are arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors are configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations. In an example, the software may reside on a non-transitory computer readable storage medium or other machine readable medium. In an example, the software, when executed by the underlying hardware of the module, causes the hardware to perform the specified operations.
[0078] Accordingly, the term “module” is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor is configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.
[0079] Machine (e.g., computer system) 800 may include a hardware processor 802 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 804, and a static memory 806, some or all of which may communicate with each other via an interlink 808 (e.g., bus). The machine 800 may further include a display unit 810, an alphanumeric input device 812 (e.g., a keyboard), and a user interface (UI) navigation device 814 (e.g., a mouse). In an example, the display unit 810, input device 812 and UI navigation device 814 are a touch screen display. The machine 800 may additionally include a storage device (e.g., drive unit) 816, a signal generation device 818 (e.g., a speaker), a network interface device 820, and one or more sensors 821, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine 800 may include an output controller 828, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.).
[0080] The storage device 816 may include a machine readable medium 822 on which is stored one or more sets of data structures or instructions 824 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 824 may also reside, completely or at least partially, within the main memory 804, within static memory 806, or within the hardware processor 802 during execution thereof by the machine 800. In an example, one or any combination of the hardware processor 802, the main memory 804, the static memory 806, or the storage device 816 may constitute machine readable media.
[0081] While the machine readable medium 822 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 824.
[0082] The term “machine readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 800 and that causes the machine 800 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media. Specific examples of machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; Random Access Memory (RAM); Solid State Drives (SSD); and CD-ROM and DVD-ROM disks. In some examples, machine readable media may include non-transitory machine readable media. In some examples, machine readable media may include machine readable media that is not a transitory propagating signal.
[0083] The instructions 824 may further be transmitted or received over a communications network 826 using a transmission medium via the network interface device 820. The machine 800 may communicate with one or more other machines utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, a Long Term Evolution (LTE) family of standards, a Universal Mobile Telecommunications System (UMTS) family of standards, peer-to-peer (P2P) networks, among others. In an example, the network interface device 820 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 826. In an example, the network interface device 820 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. In some examples, the network interface device 820 may wirelessly communicate using Multiple User MIMO techniques.
[0084] Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms. Modules are tangible entities (e.g., hardware) capable of performing specified operations and are configured or arranged in a certain manner. In an example, circuits are arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example, the whole or part of one or more computer systems (e.g., a standalone, client, or server computer system) or one or more hardware processors are configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations. In an example, the software may reside on a machine-readable medium. In an example, the software, when executed by the underlying hardware of the module, causes the hardware to perform the specified operations.
[0085] Accordingly, the term “module” is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor is configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.

[0086] Various embodiments are implemented fully or partially in software and/or firmware. This software and/or firmware may take the form of instructions contained in or on a non-transitory computer-readable storage medium. Those instructions may then be read and executed by one or more processors to enable performance of the operations described herein. The instructions may be in any suitable form, such as but not limited to source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. Such a computer-readable medium may include any tangible non-transitory medium for storing information in a form readable by one or more computers, such as but not limited to read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory; etc.
Various Notes
[0087] Each of the non-limiting claims or examples described herein may stand on its own, or may be combined in various permutations or combinations with one or more of the other examples.
[0088] The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are also referred to herein as “examples.” Such examples may include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more claims thereof), either with respect to a particular example (or one or more claims thereof), or with respect to other examples (or one or more claims thereof) shown or described herein.
[0089] In the event of inconsistent usages between this document and any documents so incorporated by reference, the usage in this document controls.
[0090] In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In this document, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, composition, formulation, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.
[0091] Method examples described herein may be machine or computer-implemented at least in part. Some examples may include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods may include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code may include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, in an example, the code may be tangibly stored on one or more volatile, non-transitory, or nonvolatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media may include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact discs and digital video discs), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like.
[0092] The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more claims thereof) may be used in combination with each other. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to comply with 37 C.F.R. §1.72(b), to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments may be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims

What is claimed is:
1. A computerized method using processing circuitry for generating a non-destructive testing (NDT) image of a specimen under inspection directly from sparse acquisition data using a previously trained machine learning model, the computerized method comprising: acquiring the sparse acquisition data of the specimen; applying the previously trained machine learning model to the sparse acquisition data; generating, using the previously trained machine learning model while bypassing intermediate imaging, the NDT image that includes a geometry of the specimen; and displaying the NDT image.
2. The computerized method of claim 1, wherein acquiring the sparse acquisition data of the specimen includes acquiring Full Matrix Capture (FMC) acoustic data.
3. The computerized method of claim 2, wherein the FMC acoustic data includes three- dimensional data including transmit, receive, and time dimensions.
4. The computerized method of claim 1, wherein acquiring the sparse acquisition data of the specimen includes acquiring partial Full Matrix Capture (FMC) acoustic data.
5. The computerized method of claim 1, wherein acquiring the sparse acquisition data of the specimen includes acquiring amplitude-based acoustic data.
6. The computerized method of claim 1, wherein acquiring the sparse acquisition data of the specimen includes acquiring phase-based acoustic data without amplitude-based acoustic data.
7. The computerized method of claim 1, wherein bypassing intermediate imaging includes bypassing Total Focusing Method (TFM) imaging.
8. An ultrasound inspection system for generating a non-destructive testing (NDT) image of a specimen under inspection directly from sparse acquisition data using a previously trained machine learning model, the ultrasound inspection system comprising: an ultrasonic probe assembly; and a processor in communication with the ultrasonic probe assembly, the processor configured for: acquiring the sparse acquisition data of the specimen; applying the previously trained machine learning model to the sparse acquisition data; generating, using the previously trained machine learning model while bypassing intermediate imaging, the NDT image that includes geometry of the specimen; and displaying the NDT image.
9. The ultrasound inspection system of claim 8, wherein the sparse acquisition data includes Full Matrix Capture (FMC) acoustic data.
10. The ultrasound inspection system of claim 9, wherein the FMC acoustic data includes three-dimensional data including transmit, receive, and time dimensions.
11. The ultrasound inspection system of claim 8, wherein the sparse acquisition data includes partial Full Matrix Capture (FMC) acoustic data.
12. The ultrasound inspection system of claim 8, wherein the sparse acquisition data includes amplitude-based acoustic data.
13. The ultrasound inspection system of claim 8, wherein the sparse acquisition data comprises phase-based acoustic data without amplitude-based acoustic data.
14. The ultrasound inspection system of claim 8, wherein bypassing intermediate imaging includes bypassing Total Focusing Method (TFM) imaging.
15. A computerized method of training processing circuitry using machine learning in a system for non-destructive testing (NDT) of a specimen, the computerized method comprising: training a machine learning model to be used to generate NDT images directly from sparse acquisition data.
16. The computerized method of claim 15, wherein training the machine learning model comprises: receiving training data including sparse acquisition data and corresponding ground truth data representing flaws and geometry of specimens under inspection; applying the sparse acquisition data to the machine learning model; comparing output of the machine learning model to the corresponding ground truth data to generate a loss function; adjusting parameters of the machine learning model based on the loss function; and iteratively repeating the applying, comparing and adjusting to optimize the machine learning model.
17. The computerized method of claim 16, wherein receiving the training data includes receiving simulated sparse acquisition data generated from simulated flaws and geometry, and wherein the corresponding ground truth data includes the simulated flaws and geometry used to generate the simulated sparse acquisition data.
18. The computerized method of claim 16, wherein receiving the training data includes receiving experimental sparse acquisition data.
PCT/CA2024/051723 2024-01-05 2024-12-23 Non-destructive testing imaging using machine learning Pending WO2025145247A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202463618121P 2024-01-05 2024-01-05
US63/618,121 2024-01-05

Publications (1)

Publication Number Publication Date
WO2025145247A1 true WO2025145247A1 (en) 2025-07-10

Family

ID=96299844

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2024/051723 Pending WO2025145247A1 (en) 2024-01-05 2024-12-23 Non-destructive testing imaging using machine learning

Country Status (1)

Country Link
WO (1) WO2025145247A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200405269A1 (en) * 2018-02-27 2020-12-31 Koninklijke Philips N.V. Ultrasound system with a neural network for producing images from undersampled ultrasound data
US20220163665A1 (en) * 2020-11-24 2022-05-26 Olympus NDT Canada Inc. Techniques to reconstruct data from acoustically constructed images using machine learning
US20230098406A1 (en) * 2020-03-24 2023-03-30 Evident Canada, Inc. Compressive sensing for full matrix capture


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DAVOUDI NEDA, DEÁN-BEN XOSÉ LUÍS, RAZANSKY DANIEL: "Deep learning optoacoustic tomography with sparse data", NATURE MACHINE INTELLIGENCE, NATURE, vol. 1, no. 10, pages 453-460, XP093336118, ISSN: 2522-5839, DOI: 10.1038/s42256-019-0095-3 *

Similar Documents

Publication Publication Date Title
US12352727B2 (en) Acoustic imaging techniques using machine learning
US20250146979A1 (en) Phase-based approach for ultrasonic inspection
US20240329000A1 (en) Flaw classification during non-destructive testing
CN115362367B (en) Compressed sensing for full matrix capture
US11906468B2 (en) Acoustic profiling techniques for non-destructive testing
US12153132B2 (en) Techniques to reconstruct data from acoustically constructed images using machine learning
EP4298461A1 (en) Small-footprint acquisition scheme for acoustic inspection
WO2025145247A1 (en) Non-destructive testing imaging using machine learning
US20240319145A1 (en) Estimation of acoustic inspection measurement accuracy
EP4612522A1 (en) Non-destructive test (ndt) flaw and anomaly detection
US20240402132A1 (en) Color representation of complex-valued ndt data
JP7662774B2 (en) Automated TFM grid resolution setting tool
JP7596545B2 (en) Acoustic impact map based defect size imaging
WO2025194253A1 (en) Feature size estimation for acoustic inspection
WO2024221099A1 (en) Image-to-image translation for acoustic inspection
US20240192179A1 (en) Probe position encoding by ultrasound image correlation
US12510514B2 (en) 3D image enhancement for flaw detection
WO2024138262A1 (en) Amplitude filtering for phase-coherence imaging
WO2024221098A1 (en) Compressive sensing for phased-array acoustic inspection
US20240036009A1 (en) 3d image enhancement for flaw detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24914498

Country of ref document: EP

Kind code of ref document: A1