
WO2025038564A2 - System and method for displacement and strain detection using color tracking - Google Patents


Info

Publication number
WO2025038564A2
WO2025038564A2 (PCT/US2024/041952)
Authority
WO
WIPO (PCT)
Prior art keywords
color
image
target object
centroid
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/041952
Other languages
English (en)
Other versions
WO2025038564A3 (fr)
Inventor
Xiaodong Li
Timothy HARRELL
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
UVA Licensing and Ventures Group
Original Assignee
University of Virginia Patent Foundation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Virginia Patent Foundation
Publication of WO2025038564A2
Publication of WO2025038564A3
Pending legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/20 Analysis of motion
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G01 MEASURING; TESTING
    • G01L MEASURING FORCE, STRESS, TORQUE, WORK, MECHANICAL POWER, MECHANICAL EFFICIENCY, OR FLUID PRESSURE
    • G01L 1/00 Measuring force or stress, in general
    • G01L 1/24 Measuring force or stress, in general by measuring variations of optical properties of material when it is stressed, e.g. by photoelastic stress analysis using infrared, visible light, ultraviolet
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/10021 Stereoscopic video; Stereoscopic image sequence
    • G06T 2207/10024 Color image
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30204 Marker

Definitions

  • This disclosure relates generally to optical inspection and, in particular, to systems and methods for determining displacement, strain, or other mechanical parameters in a non-contact manner using color tracking.
  • Digital image correlation (DIC) is a comparative technique that applies speckle patterns to the surface of a material to determine displacement and strain.
  • DIC typically requires expensive hardware and software and can be time-consuming to implement.
  • Where comparative methods such as DIC are impractical due to the inability to mark the material with a speckle pattern (e.g., because the material under inspection is very thin, such as a carbon fiber), or are too expensive, alternative techniques are required.
  • Color tracking is a technique that allows for the detection of a specific color or color pattern in an image. This technique has been used in a range of applications such as measuring bridge deflections, tracking fill dirt, and target tracking. Color tracking can be performed using standard imaging equipment, such as a smartphone camera, making it a cost-effective potential alternative.
  • However, comparative techniques using color tracking have been unable to provide accurate modeling. For example, color tracking modalities using edge tracking algorithms have provided poor accuracy in general cases, and even poorer accuracy in cases where strain or stress acts on the feature under inspection. Moreover, comparative examples of color tracking have been unable to provide measurements of strain. Therefore, there is a need for alternative techniques that offer improved accuracy, flexibility, and cost-effectiveness.
  • The present disclosure addresses these needs by presenting systems, methods, and computer-readable media for providing, among other things, (a) accurate displacement and strain measurements using a cost-effective color camera and color tracking algorithm and/or (b) accurate displacement and strain measurement using a color tracking algorithm.
  • An optical inspection method comprises, for a plurality of image frames of a target object captured by an image sensor, the target object including at least one fiducial marker: receiving a respective image frame, the image frame including an n-dimensional array of pixel values corresponding to a plurality of colors; converting the image frame to a color-adjusted image, the color-adjusted image including a two-dimensional array of pixel values corresponding to a first color of the plurality of colors; determining at least one centroid corresponding to at least one region in the color-adjusted image; tracking the at least one centroid across at least two of the plurality of image frames; and outputting an indication of a mechanical parameter of the target object based on the tracking.
  • An optical inspection system comprises an imaging system configured to capture a plurality of image frames of a target object, the target object including at least one fiducial marker; and at least one processor configured to, for the plurality of image frames: receive a respective image frame, the image frame including an n-dimensional array of pixel values corresponding to a plurality of colors; convert the image frame to a color-adjusted image, the color-adjusted image including a two-dimensional array of pixel values corresponding to a first color of the plurality of colors; determine at least one centroid corresponding to at least one region in the color-adjusted image; track the at least one centroid across at least two of the plurality of image frames; and output an indication of a mechanical parameter of the target object based on the tracking.
  • A non-transitory computer-readable medium stores instructions that, when executed by at least one processor of a computer, cause the computer to perform operations comprising, for a plurality of image frames of a target object captured by an image sensor, the target object including at least one fiducial marker: receiving a respective image frame, the image frame including an n-dimensional array of pixel values corresponding to a plurality of colors; converting the image frame to a color-adjusted image, the color-adjusted image including a two-dimensional array of pixel values corresponding to a first color of the plurality of colors; determining at least one centroid corresponding to at least one region in the color-adjusted image; tracking the at least one centroid across at least two of the plurality of image frames; and outputting an indication of a mechanical parameter of the target object based on the tracking.
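  • For illustration only, the claimed loop can be sketched in MATLAB (the analysis described later in this disclosure used a MATLAB script). In this sketch, greenIntensity and findDotCentroids are hypothetical helper functions standing in for the color-adjustment and centroid-finding steps detailed below, and the file name is likewise illustrative.

```matlab
% Minimal sketch of the claimed loop (assumptions: greenIntensity and
% findDotCentroids are hypothetical helpers implementing the
% color-adjustment and centroid-finding steps described below).
v = VideoReader('experiment.mp4');       % illustrative file name
centroids = [];                          % one [x, y] row per frame
while hasFrame(v)
    frame = readFrame(v);                % H x W x 3 RGB pixel array
    I = greenIntensity(frame);           % 2D color-adjusted image
    centroids(end + 1, :) = findDotCentroids(I); %#ok<AGROW>
end
% Track the centroid across frames: displacement relative to frame 1.
displacement = centroids - centroids(1, :);
```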
  • FIG. 1 shows an example of the output of a color tracking algorithm in accordance with various aspects of the present disclosure.
  • FIG. 2 shows an example of the effects of removal factor and threshold variables in accordance with various aspects of the present disclosure.
  • FIG. 3 shows a set of example graphs from a displacement resolution experiment in accordance with various aspects of the present disclosure.
  • FIG. 4 shows an example of displacement measurement test results in accordance with various aspects of the present disclosure.
  • FIG. 5 shows an example of tensile test results in accordance with various aspects of the present disclosure.
  • FIG. 6 shows an example of arm flexion test results in accordance with various aspects of the present disclosure.
  • FIG. 7 shows an example schematic of a machine on which various aspects of the present disclosure can be implemented.
  • FIG. 8 shows an example of an optical inspection method in accordance with various aspects of the present disclosure.
  • any element, part, section, subsection, or component described with reference to any specific embodiment above may be incorporated with, integrated into, or otherwise adapted for use with any other embodiment described herein unless specifically noted otherwise or if it should render the embodiment device non-functional.
  • any step described with reference to a particular method or process may be integrated, incorporated, or otherwise combined with other methods or processes described herein unless specifically stated otherwise or if it should render the embodiment method nonfunctional.
  • multiple embodiment devices or embodiment methods may be combined, incorporated, or otherwise integrated into one another to construct or develop further embodiments of the invention described herein.
  • any of the components or modules referred to with regards to any of the present invention embodiments discussed herein, may be integrally or separately formed with one another. Further, redundant functions or structures of the components or modules may be implemented. Moreover, the various components may be communicated locally and/or remotely with any user/operator/customer/client or with any machine/system/computer/processor. Moreover, the various components may be in communication via wireless and/or hardwire or other desirable and available communication means, systems, and hardware. Moreover, various components and modules may be substituted with other modules or components that provide similar functions.
  • The device may take on various sizes, dimensions, contours, rigidities, shapes, flexibilities, and materials as they pertain to the components or portions of components of the device, and therefore may be varied and utilized as desired or required.
  • the devices, systems, apparatuses, modules, compositions, articles of manufacture, materials, computer program products, non-transitory computer readable medium, and methods of various embodiments of the invention disclosed herein may utilize aspects (such as devices, apparatuses, modules, systems, compositions, articles of manufacture, materials, computer program products, non-transitory computer readable medium, and methods) disclosed in the following references, applications, publications and patents and which are hereby incorporated by reference herein in their entirety (and which are not admitted to be prior art with respect to the present invention by inclusion in this section): A. U. S. Utility Patent Application Serial No.
  • a subject may be a human or any animal. It should be appreciated that an animal may be a variety of any applicable type, including, but not limited thereto, mammal, veterinarian animal, livestock animal or pet type animal, etc. As an example, the animal may be a laboratory animal specifically selected to have certain characteristics similar to human (e.g. rat, dog, pig, monkey), etc. It should be appreciated that the subject may be any applicable human patient, for example.
  • It is to be understood that all numerical ranges recited herein by endpoints include all numbers subsumed within that range (e.g., 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.90, 4, 4.24, 5, and so on).
  • Numerical ranges recited herein by endpoints also include subranges subsumed within that range (e.g., 1 to 5 includes 1-1.5, 1.5-2, 2-2.75, 2.75-3, 3-3.90, 3.90-4, 4-4.24, 4.24-5, 2-5, 3-5, 1-4, 2-4, and so on). It is also to be understood that all numbers and fractions thereof are presumed to be modified by the term “about.”
  • any reference to an element herein using a designation such as “first,” “second,” and so forth does not limit the quantity or order of those elements, unless such limitation is explicitly stated. Rather, these designations may be used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements may be employed or that the first element must precede the second element in some manner.
  • the term “or” as used herein is intended to indicate exclusive alternatives only when preceded by terms of exclusivity, such as, e.g., “either,” “one of,” “only one of,” or “exactly one of.” Further, a list preceded by “one or more” (and variations thereon) and including “or” to separate listed elements indicates options of one or more of any or all of the listed elements.
  • the phrases “one or more of A, B, or C” and “at least one of A, B, or C” indicate options of: one or more A; one or more B; one or more C; one or more A and one or more B; one or more B and one or more C; one or more A and one or more C; and one or more of each of A, B, and C.
  • A list preceded by “a plurality of” (and variations thereon) and including “or” to separate listed elements indicates options of multiple instances of any or all of the listed elements.
  • the phrases “a plurality of A, B, or C” and “two or more of A, B, or C” indicate options of: A and B; B and C; A and C; and A, B, and C.
  • control unit may be any computing device configured to send and/or receive information (e.g., including instructions) to/from various systems and/or devices.
  • a control unit may comprise processing circuitry configured to execute operating routine(s) stored in a memory.
  • the control unit may comprise, for example, a processor, microcontroller, application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), and the like, any other digital and/or analog components, as well as combinations of the foregoing, and may further comprise inputs and outputs for processing control instructions, control signals, drive signals, power signals, sensor signals, and the like.
  • control unit is not limited to a single device with a single processor, but may encompass multiple devices (e.g., computers) linked in a system, devices with multiple processors, special purpose devices, devices with various peripherals and input and output devices, software acting as a computer or server, and combinations of the above.
  • control unit may be configured to implement cloud processing, for example by invoking a remote processor.
  • processor may include one or more individual electronic processors, each of which may include one or more processing cores, and/or one or more programmable hardware elements.
  • the processor may be or include any type of electronic processing device, including but not limited to central processing units (CPUs), graphics processing units (GPUs), ASICs, FPGAs, microcontrollers, digital signal processors (DSPs), or other devices capable of executing software instructions.
  • Where a device is referred to as “including a processor,” one or all of the individual electronic processors may be external to the device (e.g., to implement cloud or distributed computing).
  • individual operations described herein may be performed by any one or more of the microprocessors or processing cores, in series or parallel, in any combination.
  • the term “memory” may be any storage medium, including a nonvolatile medium, e.g., a magnetic media or hard disk, optical storage, or flash memory, including read-only memory (ROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM); a volatile medium, such as system memory, e.g., random access memory (RAM) such as dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), static RAM (SRAM), extended data out (EDO) DRAM, extreme data rate dynamic (XDR) RAM, double data rate (DDR) SDRAM, etc.; on-chip memory; and/or an installation medium where appropriate, such as software media, e.g., a CD-ROM, a DVD-ROM, a Blu-ray disc, or floppy disks, on which programs may be stored and/or data communications may be buffered.
  • the term “memory” may also include other types of memory or combinations thereof.
  • the present disclosure presents the application of a color tracking algorithm that can be used with any camera to determine the displacement and strain of materials and structures using a fiducial marker.
  • the fiducial may be, for example, a sticker applied to the target object, a three-dimensional bead adhered to the target object, a paint or other marker applied to the target object, and the like.
  • the fiducial may have any shape, including circular, square, rectangular, polygonal, spherical, cuboidal, polyhedral, etc.
  • a mobile phone camera may be used as the camera.
  • The algorithm set forth herein calculates the displacements and strain by finding the centroid of the specified color; as a proof of concept, its results are compared to those obtained from DIC analysis.
  • This technique may be extended to stereo measurements, and as such the color tracking approach presents use cases for numerous applications in a cost-effective manner.
  • The ability to utilize a mobile phone camera allows for more accessibility to this technique, and the use of a fiducial marker (e.g., a green marker) is simple and easy to implement. This provides possibilities for widespread adoption in various fields where precise displacement and strain measurements are required, and as an educational tool for students learning about optical techniques.
  • an example of this technique is used to measure the movement of a human arm.
  • these experiments are presented merely by way of example. In practice, the systems and methods set forth herein may be applied to any tensile or compressive test.
  • the present disclosure includes a discussion directed to enabling the use of a color tracking algorithm to determine the displacement and strain of materials through experimental proofs-of-concept.
  • Three different experiments were used to assess the tracking ability.
  • the first experiment was a nylon fiber with a green ball attached to the base.
  • the color tracking algorithm was compared to the results of a DIC correlation of the fiducial marker.
  • The color tracking algorithm was able to capture more of the movement of the ball with less noise than the DIC correlation.
  • the second experiment was a carbon fiber tow tensile test.
  • the experiment included two fiducial markers allowing for the calculation of the strains during loading.
  • the third experiment was a biomechanics application visualizing the movement of a human arm.
  • the experiment included three fiducial markers and was able to measure the displacement of the markers and the angle of the joint.
  • An image captured from a camera is outputted to a 3D matrix with red, green, and blue color intensity values.
  • Color tracking algorithms are a particular branch of computer vision that can be used to detect and track specific colors or color patterns in an image or video.
  • the first step in a color tracking algorithm is to identify the colors or color patterns that need to be tracked.
  • The color tracking algorithm used for this example combines the colors from the different channels to determine an intensity with regard to a specific color. For this case, green was utilized because color cameras have twice as many green filters as red or blue.
  • The green intensity was calculated using Equation (1), which is applied to each pixel to convert the 3D matrix to a 2D array of green intensity values.
  • In Equation (1), r, g, and b are the intensity values from the red, green, and blue channels, respectively, and a is a removal factor. In implementations where red or blue is utilized instead of green, Equation (1) may be modified accordingly. In any case, the intensity values are normalized to better assess the results of the intensity calculation; the normalization is performed using Equation (2).
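  • Because Equations (1) and (2) are not reproduced in this text, the following MATLAB sketch assumes one plausible form of each: subtracting a fraction of the red and blue channels from green for Equation (1), and min-max normalization for Equation (2). Both forms, and the file name, are illustrative assumptions.

```matlab
% Sketch of the color-adjustment step. The exact form of Equation (1)
% is assumed here to be g - a*(r + b)/2; Equation (2) is assumed to be
% min-max normalization. Both are illustrative assumptions.
frame = im2double(imread('frame.png'));      % H x W x 3 RGB image
r = frame(:, :, 1);
g = frame(:, :, 2);
b = frame(:, :, 3);

a = 0.5;                                     % removal factor (see FIG. 2)
I = g - a * (r + b) / 2;                     % assumed Equation (1)
I = max(I, 0);                               % clip negative intensities
I = (I - min(I(:))) / (max(I(:)) - min(I(:)));  % assumed Equation (2)
```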
  • FIG. 1 shows the colors that are found after performing the color tracking algorithm.
  • images (a) and (b) show the colors for the full range of RGB values at the white and dark intensity values, respectively.
  • Image (c) shows the unwrapped version of the colors on the maximum surface of the RGB values, and images (d) through (l) show the resulting colors remaining after running the color tracking algorithm for different values of the removal factor a, namely 0.2, 0.5, and 0.8.
  • Careful selection of a can lead to a down-selection of the colors detected by the algorithm.
  • the color tracking experiments were captured using a Google Pixel 2 XL.
  • The smartphone is equipped with two cameras: a main back-facing camera and a secondary front-facing camera.
  • The back-facing camera is a 12.2 MP, f/1.8, 27 mm (wide), 1/2.55", 1.4 µm pixel, dual-pixel PDAF, laser AF, OIS camera and is the only one used for this example.
  • In video mode, the camera captures 4K at 30 frames per second (fps), 1080p at 30/60/120 fps, or 720p at 240 fps.
  • Each experiment discussed here was captured in video mode at 30 fps to 120 fps.
  • the captured videos were processed with the phone’s native software.
  • the resulting video files were then transferred to a computer for further analysis.
  • a custom script was created in MATLAB 2022b to perform the color tracking.
  • the script starts by importing the video and extracting an image from the desired frame. The image is then separated into the red, green, and blue channels. The intensities of the green values are calculated via Equation (1) above.
  • To determine the appropriate removal factor, a, and threshold value, T, a small parametric study was performed to assess their effect, as shown in FIG. 2.
  • the image matrix shows the results of changing the removal factor and threshold value.
  • By selecting particular values of the removal factor a and the threshold value T, only the green dot remains.
  • the particular values may be manually selected or automatically determined.
  • The script then finds the centroids of the dots by using the ‘regionprops’ function implemented in MATLAB. This function finds so-called regions in a binary image and determines the area and centroid of each region. The region results are sorted from largest to smallest, and the first entries, typically corresponding to the number of dots used within the experiment, are selected. For one dot, no further refinement is needed, and the centroid is found for the dot in all the frames. For multiple dots, the full region list is sorted by area and down-selected to only the number of dots in the images. Using multiple dots can allow for more complex measurements such as strain or angles.
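  • A minimal sketch of this centroid-finding step, using the ‘regionprops’ function as described; the threshold value T and the number of dots are illustrative.

```matlab
% Binarize the color-adjusted image I (from the sketch above) and find
% the centroids of the largest regions, one per fiducial dot.
T = 0.6;                                   % threshold value (illustrative)
BW = I > T;                                % binary image of candidate regions

stats = regionprops(BW, 'Area', 'Centroid');  % area and centroid per region
[~, order] = sort([stats.Area], 'descend');   % largest regions first

nDots = 2;                                 % number of fiducial markers used
sel = order(1:min(nDots, numel(order)));
centroids = vertcat(stats(sel).Centroid);  % one [x, y] row per dot
```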
  • the initial frame (or multiple zero-strain frames) is used to calculate an original length. Afterwards, the change in length is calculated for each frame and divided by the original length to calculate strain data.
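  • The strain calculation just described can be sketched as follows, assuming two tracked centroids per frame stored in an F x 2 x 2 array (frame x dot x coordinate); the data layout is an assumption for illustration.

```matlab
% Strain from the distance between two tracked centroids per frame.
% centroidsPerFrame: F x 2 x 2 array (frame x dot x [x, y]); illustrative.
gaugeLen = @(c) norm(c(1, :) - c(2, :));       % length between the two dots

L0 = gaugeLen(squeeze(centroidsPerFrame(1, :, :)));  % original length
nFrames = size(centroidsPerFrame, 1);
strain = zeros(nFrames, 1);
for k = 1:nFrames
    Lk = gaugeLen(squeeze(centroidsPerFrame(k, :, :)));
    strain(k) = (Lk - L0) / L0;                % change in length / original
end
```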
  • The displacement resolution was determined by taking a video of one fiducial marker on a stationary wall. A video was taken from the camera for approximately 1 minute at 30 fps, capturing 1800 images. In addition, the stand-off distance was varied from 10 to 250 cm to determine the change as the field of view (FOV) became larger. The images were then processed using the algorithm of the present disclosure to determine the centroid of the fiducial marker. In some practical implementations, an algorithm may analyze an image taken at one stand-off distance and, based on the analysis (e.g., based on a signal to noise ratio, or based on a determination of whether the fiducials are properly in frame), instruct an operator of the camera to move closer to or further from the target object.
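  • The stand-off guidance just mentioned could be implemented with a simple heuristic such as the following sketch; the area and margin thresholds are hypothetical, and marker area is used here as a stand-in for signal quality.

```matlab
% Hypothetical stand-off heuristic using the regions found earlier
% (stats, centroids, and image I come from the regionprops sketch above).
minArea = 50;       % illustrative minimum marker area in pixels
margin  = 20;       % illustrative margin to the image border in pixels
[h, w] = size(I);
inFrame = all(centroids(:, 1) > margin & centroids(:, 1) < w - margin & ...
              centroids(:, 2) > margin & centroids(:, 2) < h - margin);
if max([stats.Area]) < minArea
    disp('Fiducial appears too small: move the camera closer.');
elseif ~inFrame
    disp('Fiducial near the image edge: move the camera further away.');
end
```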
  • FIG. 3 shows a series of graphs from the 10 cm stand-off distance displacement resolution experiment.
  • Graph (a) shows a photograph of the full image
  • graph (b) shows an image zoomed in toward the centroid value with experimental measurements overlaid on the first frame.
  • In FIG. 3, L_FOV is the diagonal length of the FOV, C_avg is the average position of the centroid, Exp denotes the experimental measurements, and D_max is the maximum distance from the average centroid.
  • the blue circles in graph (b) are all the centroids extracted from the experiment, and the dashed line shows a circle that exhibits the maximum distance from the average centroid. From graph (b), it can be seen that the method set forth herein has subpixel resolution.
  • The displacement resolution results are shown in graph (c) for the average noise as a function of stand-off distance, and in graph (d) for the relative noise.
  • The pixel noise is determined by taking the average distance from the centroid and is shown in both physical and pixel units. As can be seen from graph (c), the average pixel noise grows as stand-off distance increases, but then drops. This may be due to fewer pixels being available for the centroid to generate noise. However, the physical distance shows that even though the noise is lower in pixel units, the physical noise increases as stand-off distance increases.
  • Graph (d) shows the average noise normalized by the length of the FOV, L_FOV. The relative noise shows the same trend as the average noise.
  • In Equation (3), strain is defined as ε = ΔL/L, where ΔL is the change in length and L is the original gauge length. Applying the noise to the system and assuming that the change in length is zero, Equation (3) becomes Equation (4), ε_noise = N/L, where N is the noise level. As the gauge length decreases, the noise term grows, thus making the strain resolution variable.
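  • Reconstructed from the definitions above, Equations (3) and (4) take the following form; the numeric example is illustrative.

```latex
% Equation (3): engineering strain; Equation (4): strain noise floor.
\varepsilon = \frac{\Delta L}{L} \quad (3)
\qquad
\varepsilon_{\mathrm{noise}} = \frac{N}{L} \quad (4)
% Illustrative example: with centroid noise N = 0.1 px,
% L = 200 px gives 0.1/200 = 5e-4, while L = 100 px gives 1e-3,
% i.e., halving the gauge length doubles the strain noise floor.
```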
  • a moving pendulum experiment was conducted as an example of displacement tracking with the higher frame rate (120 fps).
  • A green bead was strung through a tow of nylon fibers that was tied to a weight. The whole system moved rigidly in an upward direction until out of the frame.
  • a tow of nylon fibers was tied to an S-hook to act as a weight.
  • A green bead was threaded through the fibers and sat atop the weight.
  • the fibers were then clamped to an Admet expert universal tensile machine (UTM).
  • the weight was moved and released to create a pendulum.
  • the Admet UTM was then jogged in an upward direction at a rate of 100 mm/min.
  • the camera captured the movement of the ball until it proceeded out of frame.
  • Tracking of the point was performed with a DIC process using VIC-2D software.
  • the full image was used as an area of interest.
  • the subsets were varied to contain the most data over the time and provide a compromise between noise and accuracy.
  • FIG. 4 presents the horizontal and vertical displacements from the proposed color tracking (CT) algorithm, and the DIC correlation.
  • Image (a) shows the full displacement time history of both measurements overlaid on the final frame of the video. It shows that the DIC and CT measurements were close over the first ~10% of the video.
  • the DIC algorithm decorrelated after this point, as can be seen from graphs (b) and (d).
  • the failure of DIC is due to the lack of a random speckle pattern on this test and the fact that DIC is developed for small displacements and small strains.
  • Graphs (c) and (e) show the region over which the DIC correlated. Over this region, the DIC correlated well and remained within 5% of the color tracking measurements. However, it was noisier than the color tracking, especially near the transitions or the peaks of the wave.
  • The averages of the absolute difference between these two sets of data are 3.08 and 1.14 pixels for the horizontal and vertical displacements, respectively.
  • a tensile test was conducted on a carbon fiber tow specimen to evaluate the performance of the color tracking algorithm in determining strain between two fiducial markers.
  • The ability to track the displacement on the material rather than at the grips is important for truly testing material properties, due to inaccuracies in grip-based measurements. These inaccuracies stem from errors such as compliance of the test machine with stiff materials, grip slippage, and the like.
  • The tow was made of Zoltek Panex 35 carbon fiber with West Systems 105 epoxy resin at approximately 55% volume fraction.
  • the tensile test was performed in accordance with ASTM D4018. The test was recorded at 120 fps frame rate for 35 seconds.
  • FIG. 5 illustrates the tensile test.
  • Image (a) shows the experimental setup.
  • The color tracking algorithm offers subpixel displacement accuracy. This means that the algorithm can detect and measure displacements with a resolution finer than a single pixel. Subpixel displacement resolution typically translates into measurements on the order of 200 µm.
  • the use of the color tracking algorithm as set forth herein provides several advantages over the comparative examples.
  • One advantage is in affordability and availability.
  • The primary equipment required for this technique is a color camera; in the study described above, a mobile phone camera (Google Pixel 2 XL) was used. In video mode, this amounts to only about 2 megapixels of camera resolution; therefore, the experiments above demonstrate that even relatively low-resolution solutions work.
  • The camera used in the experiments was a refurbished phone purchased for a total cost of $109, which is significantly lower than the specialized cameras employed in comparative optical measurement methods such as DIC.
  • By leveraging the capabilities of a mobile phone camera, the color tracking algorithm eliminates the need for this specialized equipment and opens up new possibilities for its application in various fields where cost constraints may have previously limited the adoption of accurate measurement techniques. Additionally, the low-cost nature of the approach makes it particularly attractive as an educational tool, allowing students to gain hands-on experience with optical techniques without significant financial barriers. Furthermore, the methodology set forth herein may operate with only information regarding the fiducial markers, and without information regarding the object under inspection.
  • The color tracking algorithm is also applicable for measuring motion in other contexts. As demonstrated by the example of measuring the movement of a human arm, the algorithm can be applied to capture the motion of complex objects. This versatility opens up possibilities for its adoption in biomechanics research, rehabilitation studies, sports-related tracking, and other fields where precise motion tracking is essential.
  • The performance of the color tracking algorithm can be affected by variations in lighting conditions. Changes in illumination, shadows, or reflections may introduce noise and affect the accuracy of the measurements. Careful control of lighting conditions may be required to ensure reliable and consistent results.
  • the use of the fiducial marker (as opposed to alternative tracking modalities, such as feature detection) provides some resilience to lighting condition variations, in addition to resilience to stress and strain.
  • Although the color tracking algorithm offers subpixel displacement resolution, the spatial resolution may be limited by the camera’s capabilities and image quality.
  • Although the above description shows the efficacy of the algorithm even with comparatively low-resolution cameras, higher-resolution cameras may be used to capture more detailed deformation patterns, especially in materials with small-scale features or fine spatial variations.
  • the above experiments illustrate the applicability and effectiveness of color tracking algorithms applied to two-dimensional images acquired by a single image sensor.
  • the color tracking algorithms and methodologies set forth herein may be applied to three-dimensional images acquired by either a single or multiple image sensors.
  • two smartphones may be provided according to a known physical relationship (e.g., a set distance apart) in order to capture stereoscopic images, which may be used to provide stereoscopic analysis of the target object.
  • The field of view of the camera used in the study may be restricted. If the fiducial marker displaces out of the field of view, measurements can no longer be taken. This limitation may be overcome by employing stitching techniques to combine multiple images and extend the coverage area.
  • FIG. 7 is a block diagram illustrating an example of a machine upon which one or more aspects of embodiments of the present invention can be implemented.
  • An aspect of an embodiment of the present invention includes, but is not limited to, a system, method, and computer readable medium that provides (a) accurate displacement and strain measurements using a cost-effective color camera and color tracking algorithm and/or (b) accurate displacement and strain measurement using a color tracking algorithm.
  • FIG. 7 illustrates a block diagram of an example machine 400 upon which one or more embodiments (e.g., discussed methodologies) can be implemented (e.g., run).
  • Examples of machine 400 can include logic, one or more components, circuits (e.g., modules), or mechanisms. Circuits are tangible entities configured to perform certain operations. In an example, circuits can be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner. In an example, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors (processors) can be configured by software (e.g., instructions, an application portion, or an application) as a circuit that operates to perform certain operations as described herein. In an example, the software can reside (1) on a non-transitory machine readable medium or (2) in a transmission signal.
  • a circuit can be implemented mechanically or electronically.
  • a circuit can comprise dedicated circuitry or logic that is specifically configured to perform one or more techniques such as discussed above, such as including a special-purpose processor, an FPGA, or an ASIC.
  • a circuit can comprise programmable logic (e.g., circuitry, as encompassed within a general-purpose processor or other programmable processor) that can be temporarily configured (e.g., by software) to perform the certain operations. It will be appreciated that the decision to implement a circuit mechanically (e.g., in dedicated and permanently configured circuitry), or in temporarily configured circuitry (e.g., configured by software) can be driven by cost and time considerations.
  • The term circuit is understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform specified operations.
  • each of the circuits need not be configured or instantiated at any one instance in time.
  • Where the circuits comprise a general-purpose processor configured via software, the general-purpose processor can be configured as respective different circuits at different times.
  • Software can accordingly configure a processor, for example, to constitute a particular circuit at one instance of time and to constitute a different circuit at a different instance of time.
  • circuits can provide information to, and receive information from, other circuits.
  • the circuits can be regarded as being communicatively coupled to one or more other circuits.
  • communications can be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the circuits.
  • communications between such circuits can be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple circuits have access.
  • one circuit can perform an operation and store the output of that operation in a memory device to which it is communicatively coupled.
  • a further circuit can then, at a later time, access the memory device to retrieve and process the stored output.
  • circuits can be configured to initiate or receive communications with input or output devices and can operate on a resource (e.g., a collection of information).
  • the various operations of method examples described herein can be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors can constitute processor-implemented circuits that operate to perform one or more operations or functions.
  • the circuits referred to herein can comprise processor-implemented circuits.
  • The methods described herein can be at least partially processor-implemented. For example, at least some of the operations of a method can be performed by one or more processors or processor-implemented circuits. The performance of certain of the operations can be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In an example, the processor or processors can be located in a single location (e.g., within a home environment, an office environment, or as a server farm), while in other examples the processors can be distributed across a number of locations.
  • The one or more processors can also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations can be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., Application Program Interfaces (APIs)).
  • Example embodiments (e.g., apparatus, systems, or methods) can be implemented using a computer program product (e.g., a computer program, tangibly embodied in an information carrier or in a machine readable medium, for execution by, or to control the operation of, data processing apparatus such as a programmable processor, a computer, or multiple computers).
  • a computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a software module, subroutine, or other unit suitable for use in a computing environment.
  • a computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
  • operations can be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Examples of method operations can also be performed by, and example apparatus can be implemented as, special purpose logic circuitry (e.g., an FPGA or an ASIC).
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and generally interact through a communication network.
  • the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • Both hardware and software architectures require consideration. Whether to implement certain functionality in permanently configured hardware (e.g., an ASIC), in temporarily configured hardware (e.g., a combination of software and a programmable processor), or in a combination of permanently and temporarily configured hardware can be a design choice.
  • Set out below are hardware (e.g., machine 400) and software architectures that can be deployed in example embodiments.
  • the machine 400 can operate as a standalone device or the machine 400 can be connected (e.g., networked) to other machines. In a networked deployment, the machine 400 can operate in the capacity of either a server or a client machine in server-client network environments. In an example, machine 400 can act as a peer machine in peer-to-peer (or other distributed) network environments.
  • the machine 400 can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) specifying actions to be taken (e.g., performed) by the machine 400.
  • Example machine 400 can include a processor 402 (e.g., a CPU, a GPU, or both), a main memory 404, and a static memory 406, some or all of which can communicate with each other via a bus 408.
  • the machine 400 can further include a display unit 410, an alphanumeric input device 412 (e.g., a keyboard), and a user interface (UI) navigation device 411 (e.g., a mouse).
  • The display unit 410, input device 412, and UI navigation device 411 can be a touch screen display.
  • the machine 400 can additionally include a storage device (e.g., drive unit) 416, a signal generation device 418 (e.g., a speaker), a network interface device 420, and one or more sensors 421, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor.
  • the storage device 416 can include a machine readable medium 422 on which is stored one or more sets of data structures or instructions 424 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein, and may be any type of “memory” described above.
  • the instructions 424 can also reside, completely or at least partially, within the main memory 404, within static memory 406, or within the processor 402 during execution thereof by the machine 400.
  • one or any combination of the processor 402, the main memory 404, the static memory 406, or the storage device 416 can constitute machine readable media.
  • While the machine readable medium 422 is illustrated as a single medium, the term “machine readable medium” can include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that are configured to store the one or more instructions 424.
  • the term “machine readable medium” can also be taken to include any tangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions.
  • the term “machine readable medium” can accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.
  • machine readable media can include non-volatile memory, including, by way of example, semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices); magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD- ROM and DVD-ROM disks.
  • the instructions 424 can further be transmitted or received over a communications network 426 using a transmission medium via the network interface device 420 utilizing any one of a number of transfer protocols (e.g., frame relay, IP, TCP, UDP, HTTP, etc.).
  • Example communication networks can include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., IEEE 802.11 standards family known as Wi-Fi®, IEEE 802.16 standards family known as WiMax®), peer-to-peer (P2P) networks, among others.
  • the term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
  • FIG. 8 shows a flow chart of an example optical inspection method 500.
  • The method 500 includes a loop of operations that are repeated for multiple image frames (e.g., for multiple frames of a video) of a target object captured by an image sensor, the target object including at least one fiducial marker.
  • The method 500 may be performed by the machine 400 of FIG. 7.
  • At operation 510, an image frame is received.
  • the image frame includes an n-dimensional array of pixel values corresponding to a plurality of colors.
  • the image frame may include a three-dimensional array of red pixel values, blue pixel values, and green pixel values.
  • At operation 520, the image frame is converted to a color-adjusted image, the color-adjusted image including a two-dimensional array of pixel values corresponding to a first color of the plurality of colors.
  • the first color is green and the conversion may be accomplished by the process set forth above, including applying Equation (1).
  • the first color may be any one of the plurality of colors, and may be the same color as the fiducial marker.
  • Operation 520 may include applying at least one of the removal factor a or the threshold value T, as described above.
  • At operation 530, at least one centroid is determined corresponding to at least one region in the color-adjusted image.
  • the loop including operations 510-530 may repeat any number of times, each iteration corresponding to a different image frame.
  • At operation 540, the centroid determined at operation 530 is tracked across multiple iterations of the loop (i.e., across two or more image frames). Based on parameters of the centroid, at operation 550 at least one mechanical parameter is output.
  • the mechanical parameter may be at least one of a displacement, a strain, or a pose of the target object.
  • the particular mechanical parameter output may be related to the number of fiducial markers, centroids, and/or regions. For example, one fiducial may be used to determine and track a centroid for a single region; in this case, the mechanical parameter may be a displacement (or velocity, acceleration, etc.) of the target object.
  • two fiducials may be used to determine and track centroids for two regions; in this case, the mechanical parameter may be a relative displacement or strain of the target object.
  • three or more fiducials may be used to determine and track centroids for three or more regions; in this case, the mechanical parameter may be a pose of the target object (an illustrative angle computation is sketched below).
  • the method 500 may output mechanical parameters corresponding to a lower number of fiducials, centroids, and/or regions than the total number present in the system. For example, if the target object includes three fiducials, the method 500 may output the strain between two fiducials (or multiple strains between different pairs of fiducials) in addition to or instead of the pose.
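  • As one illustration of the multi-fiducial case (e.g., the joint angle measured in the arm flexion experiment), the angle at a middle marker can be computed from three tracked centroids; this sketch is an assumed implementation with illustrative values, not necessarily the one used in the experiments.

```matlab
% Joint angle from three tracked centroids, with p2 at the joint
% (e.g., shoulder p1, elbow p2, wrist p3). All are [x, y] rows.
p1 = [120, 340]; p2 = [260, 300]; p3 = [390, 180];   % illustrative values
u = p1 - p2;
w = p3 - p2;
angleDeg = atan2d(abs(u(1)*w(2) - u(2)*w(1)), dot(u, w));  % in [0, 180]
```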
  • the method 500 may be performed in a real-time or post-processing manner.
  • operations 540 and 550 may be performed immediately after the second iteration of the loop of operations 510-530, and may be continually performed or updated after successive iterations. This may provide real time (or, to account for the time required to perform computational operations, near real time) information regarding the mechanical parameter.
  • Alternatively, operations 540 and 550 may be performed at a later time, once all iterations of the loop of operations 510-530 have been performed.
  • the device performing the processing operations 510-550 may be the same as or different from the device capturing the images (i.e., may be the smartphone having the camera, or a different device).
  • different devices may be configured to perform different ones or sets of the operations of the method 500.
  • For example, some pre-processing operations (e.g., operation 520) may be performed by one device, and subsequent operations may be performed by a second device.
  • the present disclosure sets forth a cost-effective technique for accurate displacement and strain measurement using a color tracking algorithm.
  • the algorithm provides a low- cost alternative to comparative optical measurement methods.
  • Other types of cameras may be implemented other than mobile cameras.
  • Any activity can be repeated, any activity can be performed by multiple entities, and/or any element can be duplicated. Further, any activity or element can be excluded, the sequence of activities can vary, and/or the interrelationship of elements can vary. Unless clearly specified to the contrary, there is no requirement for any particular described or illustrated activity or element, any particular sequence of such activities, any particular size, speed, material, dimension or frequency, or any particular interrelationship of such elements. Accordingly, the descriptions and drawings are to be regarded as illustrative in nature, and not as restrictive. Moreover, when any number or range is described herein, unless clearly stated otherwise, that number or range is approximate. When any range is described herein, unless clearly stated otherwise, that range includes all values therein and all subranges therein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Optical inspection systems, methods, and media are configured to perform and/or implement, for a plurality of image frames of a target object captured by an image sensor, the target object including at least one fiducial marker: receiving a respective image frame, the image frame including an n-dimensional array of pixel values corresponding to a plurality of colors; converting the image frame to a color-adjusted image, the color-adjusted image including a two-dimensional array of pixel values corresponding to a first color of the plurality of colors; determining at least one centroid corresponding to at least one region in the color-adjusted image; tracking the at least one centroid across at least two of the plurality of image frames; and outputting an indication of a mechanical parameter of the target object based on the tracking.
PCT/US2024/041952 2023-08-11 2024-08-12 System and method for displacement and strain detection using color tracking Pending WO2025038564A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363519144P 2023-08-11 2023-08-11
US63/519,144 2023-08-11

Publications (2)

Publication Number Publication Date
WO2025038564A2 true WO2025038564A2 (fr) 2025-02-20
WO2025038564A3 WO2025038564A3 (fr) 2025-04-03

Family

ID=94632616

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/041952 Pending WO2025038564A2 (fr) System and method for displacement and strain detection using color tracking

Country Status (1)

Country Link
WO (1) WO2025038564A2 (fr)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10043282B2 (en) * 2015-04-13 2018-08-07 Gerard Dirk Smits Machine vision for ego-motion, segmenting, and classifying objects

Also Published As

Publication number Publication date
WO2025038564A3 (fr) 2025-04-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24854783

Country of ref document: EP

Kind code of ref document: A2