
WO2013070945A1 - Image analysis systems having image sharpening capabilities and methods using the same - Google Patents

Image analysis systems having image sharpening capabilities and methods using the same

Info

Publication number
WO2013070945A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
linear
reference image
test
data processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2012/064195
Other languages
English (en)
Inventor
Steve T. KACENJAR
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lockheed Martin Corp
Original Assignee
Lockheed Corp
Lockheed Martin Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lockheed Corp, Lockheed Martin Corp filed Critical Lockheed Corp
Publication of WO2013070945A1 publication Critical patent/WO2013070945A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration

Definitions

  • the present invention generally relates to image analysis, and, more specifically, to automated registration and analysis of time sequence images.
  • fields such as, for example, earth remote sensing, aerospace systems and medical imaging
  • searching for time-dependent, regional changes of significance (e.g., material stress patterns, surface roughness, changes in inclusions and the like) across a generalized deformable surface can be complicated by extraneous factors including, for example, target movement, image acquisition device geometries, color, lighting and background clutter changes.
  • standard, rigid-body registration techniques often can fail to address and correct for these extraneous factors, which can prevent adequate image overlayment from being realized, thereby leading to an incorrect assessment of change over the deformable surface between time sequence images.
  • a generalized deformable surface will refer to any surface that does not deform uniformly when subjected to an external or internal stress during a series of observations.
  • a generalized deformable surface possesses color, thermal, conductive and/or polarimetric temporal variance due to factors such as, for example, source lighting conditions and/or physical chemistry surface alterations during a series of observations.
  • application of an external or internal stress can cause the surface to deform in a non-linear fashion such that inclusions thereon can be affected in both two- and three-dimensions. That is, inclusions contained upon the generalized deformable surface may not move the same amount relative to one another when the surface is deformed and the surface's measurable contrast can vary between itself and background due to the
  • an inclusion will refer to any spatially localized characteristic in an image that differs from the image background.
  • Illustrative examples of inclusions that can be present on a generalized deformable surface can include, without limitation, buildings, rocks, trees, fingerprints, skin pores, moles, and the like.
  • source illumination and/or chemical changes upon the deformable surface can also alter reflective properties and thereby superficially change the appearance of the inclusions.
  • both surface deformation and surface physical changes can result in superficial artifacts that are not indicative of actual changes to the inclusions. This type of non-uniform spatial movements and appearance changes can make image registration especially problematic.
  • time variation of background can be a significant problem alone.
  • imprinted patterns superimposed across a deformable surface can also be spatially variable but distinct from the inclusions of interest in an image (e.g. , a building complex representing an inclusion of interest can be embedded in a field of trees that is swaying in the wind, where the trees represent a time variant background that is not rigidly positioned in the image).
  • an image registration process needs to be capable of handling such time variant background.
  • image processing devices and Data Registration Processes that perform image sharpening before conducting a two-dimensional image registration technique are described herein.
  • the image sharpening and two- dimensional image registration can occur prior to further image registration through use of a nonlinear data processing algorithm.
  • image analysis systems described herein include at least one image collection device, an image processing device operating a non-linear data processing algorithm, and at least one data output device.
  • the image processing device is operable to overlay a test image and a reference image upon one another and perform a comparison therebetween.
  • the image processing device can further sharpen the test image and/or the reference image before overlaying.
  • image analysis systems described herein include at least one image collection device, an image processing device operating a non-linear data processing algorithm, and at least one data output device.
  • the image processing device is operable to overlay a test image and a reference image upon one another and perform a comparison therebetween by processing both linear parameters and non-linear parameters, where each image contains a plurality of inclusions.
  • the non-linear data processing algorithm is selected from the group including a particle swarm optimizer, a neural network, a genetic algorithm, and any combination thereof.
  • the image processing device can further sharpen the test image and/or the reference image before overlaying.
  • methods described herein include acquiring a reference image containing a plurality of inclusions, acquiring a test image containing the plurality of inclusions, overlaying the test image upon the reference image by using a non-linear data processing algorithm, and producing an output that illustrates any differences between the test image and the reference image after overlaying takes place.
  • the methods can further include sharpening the test image and/or the reference image before overlaying takes place.
  • FIGURE 1 is a schematic diagram showing the use of spatial dicing of time sequenced images to perform the data registration process, according to an example embodiment;
  • FIGURES 2A and 2B show illustrative images containing a plurality of mole inclusions across a patient's back taken with different camera orientations and lighting conditions, according to an example embodiment;
  • FIGURES 2C and 2D show illustrative images of a single mole inclusion thereon acquired with the different camera orientations and lighting conditions, according to an example embodiment;
  • FIGURE 3 shows an illustrative flowchart demonstrating how time sequence images can be overlaid in a particular embodiment, according to an example embodiment
  • FIGURE 4 shows an illustrative flowchart demonstrating how time sequence images can be overlaid in another particular embodiment, according to an example embodiment
  • FIGURES 5A and 5B show illustrative test and reference images of a mole inclusion before and after alignment, respectively, according to an example embodiment
  • FIGURE 5C shows an illustrative difference image of the misaligned images in FIGURE 5A, according to an example embodiment;
  • FIGURE 5D shows an illustrative difference image of the aligned images in FIGURE 5B, according to an example embodiment;
  • FIGURE 6A shows an illustrative 4D scatter plot of mapping coefficients for four parameters (translation, rotation, magnification and background color) before processing with a particle swarm optimizer, according to an example embodiment
  • FIGURES 6B - 6D show illustrative 2D scatter plots of rotation, magnification and translation parameters before processing with a particle swarm optimizer, according to an example embodiment
  • FIGURES 6E - 6H show illustrative plots corresponding to those of FIGURES 6A - 6D, illustrating the convergence of mapping coefficients after processing with the particle swarm optimizer, according to an example embodiment;
  • FIGURE 7 is a schematic of an image registration process that includes at least an Image Preprocessing (IP) block and a Registration Method (RM) block, according to an example embodiment;
  • FIGURE 8 shows a diagrammatic representation of a computing device for a machine in the example electronic form of a computer system 2000, according to an example embodiment.
  • the present disclosure is directed, in part, to image analysis systems that utilize a non-linear data processing algorithm to detect and characterize changes between time sequence images.
  • the present disclosure is also directed, in part, to methods for analyzing time sequence images, including those having time-variant background clutter, using a non-linear data processing algorithm.
  • the image analysis systems and methods for analyzing time sequence images can further utilize sharpening of the time sequence images to improve the analysis.
  • image sharpening can be performed prior to performing two-dimensional image registration.
  • image sharpening can be performed prior to performing image registration using a non-linear data processing algorithm.
  • Illustrative systems and methods utilizing non-linear data processing algorithms are set forth in United States Patent Application 13/187,447, filed July 20, 2011. By applying initial image sharpening techniques, a better initial solution of the image overlay can be obtained prior to applying nonlinear data processing techniques.
  • mapping coefficients will refer to one of the outputs of the image analysis system.
  • the initial mapping coefficients determined from processing of linear parameters can be fed into a non-linear data processing algorithm as initial estimated parameters of an inclusion's location.
  • estimated parameters of an inclusion's location can be determined from an initial coarse alignment based upon rigid body alignment techniques (e.g. , using two-dimensional image correlation techniques). Using the estimated solution of an inclusion's location can advantageously provide a more rapid convergence of the non-linear data processing algorithm in determining finalized mapping coefficients.
  • Mapping coefficients can include the transformation coefficients that minimize differences across a reference image and a test image that result from geometric alterations and surface reflective properties.
  • FIGURE 1 is a schematic diagram showing the use of spatial dicing of time sequenced images to perform the data registration process, according to an example embodiment.
  • FIGURE 1 can also be thought of as a template-based correlation process.
  • the use of spatial dicing of time sequenced images plays a role in the effectiveness of performing the data registration process.
  • Spatial gridding can alleviate non-linear processing constraints by minimizing the number of degrees of freedom that characterizes local deformation across the gridded area. The depiction of this approach is shown in FIGURE 1 and assumes that the target gridded area is larger than a roughly positioned reference area.
  • Local discrete image correlation methods can then be used to provide first-order translational correction. Such an operation can serve as a pre-processing step in later gridded iterative operations to correct for higher order deformations such as but not limited to magnification and rotation.
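  • As a concrete (non-patent) illustration of this dicing-plus-correlation step, the sketch below estimates a first-order translation for each grid cell by scanning a reference sector over a larger, roughly positioned test window with a brute-force normalized cross-correlation; the cell size and search margin are arbitrary assumptions.

```python
import numpy as np

def local_translation(reference_cell, test_window):
    """Estimate the (dy, dx) shift that best places reference_cell inside the
    larger test_window, using brute-force normalized cross-correlation."""
    rh, rw = reference_cell.shape
    th, tw = test_window.shape
    ref = reference_cell - reference_cell.mean()
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(th - rh + 1):
        for dx in range(tw - rw + 1):
            patch = test_window[dy:dy + rh, dx:dx + rw]
            p = patch - patch.mean()
            denom = np.sqrt((ref ** 2).sum() * (p ** 2).sum()) + 1e-12
            score = (ref * p).sum() / denom        # normalized cross-correlation
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift, best_score

def grid_translations(reference, test, cell=64, margin=16):
    """Dice the reference image into cell x cell sectors and estimate a
    first-order translation for each sector against a padded test window."""
    shifts = {}
    for y in range(0, reference.shape[0] - cell + 1, cell):
        for x in range(0, reference.shape[1] - cell + 1, cell):
            ref_cell = reference[y:y + cell, x:x + cell]
            y0, x0 = max(y - margin, 0), max(x - margin, 0)
            window = test[y0:y0 + cell + 2 * margin, x0:x0 + cell + 2 * margin]
            shifts[(y, x)] = local_translation(ref_cell, window)[0]
    return shifts
```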
  • Time-variant background clutter can arise from the surface being imaged and/or from sensor noise within an image collection device being used for detection, for example.
  • body hair and varying skin pigmentation can complicate the registration of skin images.
  • FIGURES 2A and 2B show illustrative images containing a plurality of mole inclusions across a patient's back taken with different camera orientations and lighting conditions
  • FIGURES 2C and 2D show illustrative images of a single mole inclusion thereon acquired with the different camera orientations and lighting conditions.
  • as shown in FIGURES 2A - 2D, the issues associated with the misalignment of multiple inclusions (moles) can be particularly daunting, given the number of inclusions involved and their non-uniform degree of deformation in a series of images.
  • Illustrative non-linear image parameters can include but are not limited to, for example, image collection device rotation and tilt (e.g. , image collection device shear), lighting, magnification, image tone, image gain, time- variant background changes, and the like.
  • both single modality image collection devices and multiple modality image collection devices can be used.
  • at least two different types of image collection devices can be used to investigate different attributes of inclusions located within an image. For example, time sequence visual images can be superimposed with time sequence thermal images, polarimetric images, radiographic images, magnetic images, and/or the like in order to develop a more effective and informative inclusion overlay.
  • changes in an inclusion can be characterized in terms of regional size differences, color differences, asymmetry changes, and boundary changes, for example.
  • these changes can be further augmented with changes such as, for example, density differences, chemical differences, magnetic differences, and/or polarimetric differences.
  • one such attribute can be essentially fixed in an image, such that an inclusion being imaged can be oriented with respect to the fixed point (e.g. , another inclusion that does not change), thereby constituting a geographical information system (GIS).
  • the present image analysis systems and related methods can find particular utility.
  • the present image analysis systems and methods can be especially useful in fields including, for example, medical imaging, structural fatigue monitoring, satellite imaging, geological testing and surface chemistry monitoring. It should be recognized that images obtained in these fields and others can have inclusions located upon a deformable surface.
  • the skin and underlying tissue can exhibit differential elasticity (e.g., due to weight gain or loss or a change in musculature) and make its surface spatially deformable.
  • changing skin pigmentation and hair covering can represent time-variant background clutter that can complicate the overlay of skin images.
  • the earth's surface can similarly be considered to be deformable.
  • for a bendable surface (e.g., an airplane wing or a structural support), the change in relative positions of inclusions thereon (e.g., rivets) can be used as a means to gauge structural fatigue.
  • the morphological classification of skin lesions ("moles") and the monitoring of them over time are important for the detection of melanoma and other types of skin cancer.
  • the present image analysis systems and methods can be particularly advantageous for these types of dermatology applications.
  • observation of changes in the color, shape and size of moles over time can lead to the early detection of skin cancer while it is still readily treatable.
  • typical patients have several hundred moles, all of which need to be monitored over time, which can complicate visual inspection efforts.
  • without early detection, a skin cancer may have already metastasized beyond its point of origin and become much more difficult to treat.
  • the present image analysis systems and methods can also be used for monitoring other skin conditions including, for example, rashes, burns and healing.
  • fixed inclusions such as, for example, skin pores can be utilized as fixed reference points that do not substantially change during the course of acquiring time sequence images.
  • the present image analysis systems and methods can also be extended to subsurface imaging such as, for example, breast mammography and internal imaging such as, for example, colon, stomach, esophageal and lung imaging. It should be noted that the present image analysis systems and methods are not limited to visual images, particularly in the medical field. Particularly, overlay and comparison of images such as, for example, PET, SPECT, X-RAY, CT, CAT, MRI and other like images can be accomplished with the present image analysis systems and methods. Appropriate imaging protocols using these imaging techniques will be followed and used.
  • Computer hardware used to implement the various illustrative blocks, modules, elements, components, methods and algorithms described herein can include a processor configured to execute one or more sequences of instructions, programming or code stored on a readable medium.
  • the processor can be, for example, a general purpose microprocessor, a microcontroller, a graphical processing unit, a digital signal processor, an application specific integrated circuit, a field programmable gate array, a programmable logic device, a controller, a state machine, a gated logic, discrete hardware components, or any like suitable entity that can perform calculations or other manipulations of data.
  • computer hardware can further include elements such as, for example, a memory [e.g., random access memory (RAM), read only memory (ROM), flash memory or the like] for storing instructions and data.
  • FIG. 8 shows a diagrammatic representation of a computing device for a machine in the example electronic form of a computer system 2000, within which a set of instructions for causing the machine to perform any one or more of the comparisons or correction methodologies discussed herein can be executed or is adapted to include the apparatus for the comparisons or correction methodologies as described herein.
  • the machine operates as a standalone device or can be connected (e.g., networked) to other machines.
  • the machine can operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
  • the machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a portable music player (e.g., a portable hard drive audio device such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, a switch, a bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • the example computer system 2000 includes a processor or multiple processors 2002 and a main memory 2004.
  • the computer system 2000 can further include a video display unit 2010 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)).
  • the computer system 2000 also includes an alphanumeric input device 2012 (e.g., a keyboard), a cursor control device 2014 (e.g., a mouse), a disk drive unit 2016, a signal generation device 2018 (e.g., a speaker) and a network interface device 2020.
  • the disk drive unit 2016 includes a computer-readable medium 2022 on which is stored one or more sets of instructions and data structures (e.g., instructions 2024) embodying or utilized by any one or more of the methodologies or functions described herein.
  • the instructions 2024 can also reside, completely or at least partially, within the main memory 2004 and/or within the processors 2002 during execution thereof by the computer system 2000.
  • the main memory 2004 and the processors 2002 also constitute machine-readable media.
  • the instructions 2024 can further be transmitted or received over a network 2026 via the network interface device 2020 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP), CAN, Serial, or Modbus).
  • while the computer-readable medium 2022 is shown in an example embodiment to be a single medium, the term "computer-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions and provide the instructions in a computer readable form.
  • the term "computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions.
  • the term "computer-readable medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and tangible forms and signals that can be read or sensed by a computer. Such media can also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROMs), and the like.
  • non-linear data processing algorithms and other executable sequences described herein can be implemented with one or more sequences of code contained in a memory.
  • code can be read into the memory from another machine-readable medium.
  • Execution of the sequences of instructions contained in the memory can cause a processor to perform the process steps described herein.
  • processors in a multiprocessing arrangement can also be employed to execute instruction sequences in the memory.
  • hard-wired circuitry can be used in place of or in combination with software instructions to implement various embodiments described herein.
  • embodiments are not limited to any specific combination of hardware and software.
  • when a generalized machine executes a set of instructions in the form of non-transitory signals, the generalized machine generally is transformed into a specialized machine having a specific purpose and function.
  • a machine-readable medium will refer to any medium that directly or indirectly provides instructions to a processor for execution.
  • a machine-readable medium can take on many forms including, for example, non-volatile media, volatile media, and transmission media.
  • Non-volatile media can include, for example, optical and magnetic disks.
  • Volatile media can include, for example, dynamic memory.
  • Transmission media can include, for example, coaxial cables, wire, fiber optics, and wires that form a bus.
  • Machine-readable media can include, for example, floppy disks, flexible disks, hard disks, magnetic tapes, other like magnetic media, CD-ROMs, DVDs, other like optical media, punch cards, paper tapes and like physical media with patterned holes, RAM, ROM, PROM, EPROM and flash EPROM.
  • image analysis systems described herein include at least one image collection device, an image processing device operating a non-linear data processing algorithm, and at least one data output device.
  • the image processing device is operable to overlay a test image and a reference image upon one another and perform a comparison therebetween.
  • an image registration process includes at least two building blocks.
  • the building blocks are typically implemented in software, hardware or a combination of the two.
  • the first building block is an Image Preprocessing (IP) block 710 and the second building block is a Registration Method (RM) block.
  • a "nonlinear data processing (NDP) algorithm” will refer to a class of methods where image registration is performed when either the IP or RM blocks or both blocks in the process change has nonlinear processing elements.
  • non-linear IP can include but is not limited to image sharpening and intensity thresholding methods.
  • linear image scaling followed by a non-linear registration process, such as one using PSO methods, also results in a non-linear data processing algorithm during the image registration process. It should be noted that in a data registration process, such as the one shown in FIGURE 7, if either the IP block, the RM block, or both involve a non-linear process, the entire image registration process is referred to as a non-linear data registration process.
  • a non-linear data processing algorithm will refer to a class of algorithms for characterizing a geometric transformation used in overlaying two or more images that contain inclusions, particularly images that have a changing background and are subject to surface deformation.
  • a non-linear data processing algorithm can utilize parameters that are not described by the inclusions' translational or rotational coordinates (e.g. , spectral, thermal, radiographic, magnetic, polarimetric parameters, and/or the like).
  • Such geometric transformations can include both linear translational mappings as well as higher-order mappings such as, for example, image rotation, shear, magnification and the like.
  • non-linear data processing algorithm can provide image background normalization coefficient estimates to address reflective and color differences between the test image and the reference image.
  • non-linear data processing algorithm can include various preprocessing operations that can be performed prior to performing the geometric transformation.
  • Illustrative pre-processing operations can include, for example, morphological filtering of the image and spatial image sharpening.
  • the images can be subdivided into a plurality of sectors prior to applying the non-linear data processing algorithm.
  • Illustrative non-linear data processing algorithms can include, for example, particle swarm optimizers, neural networks, genetic algorithms, unsharp masking, image segmentation, morphological filtering and any combination thereof. Although certain details in the description that follows are directed to particle swarm optimizers, it is to be recognized that a particle swarm optimizer can be replaced by or used in combination with any suitable non-linear data processing algorithm, including those set forth above.
  • the non-linear data processing algorithm can be a particle swarm optimizer.
  • particle swarm optimization is a computational technique that optimizes a problem by iteratively seeking to improve upon a candidate solution with regard to a given measure of quality.
  • Particle swarm optimization techniques involve moving a population of particles (candidate state vectors described by the various parameters being fed into a model) toward a candidate solution for each particle according to simple mathematical formulas relating to the state vector for each particle within a state space.
  • a "state vector" will describe a potential candidate solution for a set of input parameters (both linear parameters and non-linear parameters) that minimizes differences between a reference image and a test image.
  • a two-parameter state vector can be used to describe each particle in a particle swarm.
  • Related two- dimensional state spaces and higher order state spaces are also contemplated by the embodiments described herein.
  • Each particle of a particle swarm has a unique location that corresponds to unique rotation and magnification parameters, for example, in an illustrative two-dimensional state space.
  • the parameters can be used to distort the test image, which can then be compared to the reference image.
  • distortion of the test image can take place by mapping each pixel from the original target space into new locations and then performing a re-sampling of the distorted image to check for convergence.
  • This comparison can take on several different forms such as, for example, an objective function used by the particle swarm optimizer (e.g. , differential entropy, Hamming distance, and/or the like).
  • a particle's movement is influenced by its best known local position, which is influenced by the value of the objective function that is computed during a particular iteration.
  • Each particle is also guided toward the best known positions in the state space, which are continually updated as better positions are found by other particles. That is, the iteratively determined location for a given particle is influenced by (1) its position that gives its minimum objective function value during any previous iteration and (2) the optimal position identified by the particle swarm as provided by the minimization of objective function values across the entire particle swarm.
  • Each iteration is expected to move the particle swarm toward the best global solution for the particle positions.
  • This process can be generalized to as many parameters as required to minimize mapping differences.
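  • To make the swarm mechanics above concrete, here is a minimal, generic particle swarm optimizer sketch (an illustration, not the patent's implementation); the inertia and acceleration constants, the box bounds, and the objective callback are all assumptions.

```python
import numpy as np

def particle_swarm_optimize(objective, bounds, n_particles=30, n_iter=100,
                            w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize objective(x) over a box-bounded state space. Each row of
    bounds is a (low, high) pair for one registration parameter, e.g.
    (dy, dx, rotation, magnification, background offset)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                # best position seen by each particle
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()          # best position across the swarm
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())
```

In the registration setting described above, the objective would warp the test sector with a candidate parameter set and score it against the reference sector (see the warping and objective-function sketches later in this section).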
  • a particle swarm optimizer can be an especially useful non-linear data processing algorithm for addressing the time-changing environment across image pairs.
  • the presence of inclusions and background features can be simultaneously evaluated, since each pixel of the test image and the reference image can be compared.
  • an objective function can be computed and recorded.
  • the inclusions form a fixed reference over which the objective function can be minimized as the particle swarm evolves.
  • the time-variant background can convey random noise to the measurement of the objective function, which can be addressed through successive iterations that converge toward the mapping coefficients of the inclusions of interest within the images.
  • the present image processing systems and methods can detect changes in the shape, size and boundary conditions for a plurality of inclusions over a period of time.
  • detection of such changes can involve acquisition of a reference image and then acquisition of at least one test image at a later time.
  • an initial coarse alignment of the plurality of inclusions in the test image can be performed upon the plurality of inclusions in the reference image (e.g. , using two-dimensional image correlation techniques). By performing an initial coarse alignment of the plurality of inclusions, a more rapid convergence of the non-linear data processing algorithm can be realized when aligning the inclusions.
  • coarse alignment can be performed manually.
  • a hybrid landmark/intensity-based registration method can be used to identify tie-points across each image in order to perform coarse alignment. For example, invariant inclusions on the surface being imaged can be established as markers for performing image alignment. In some embodiments, an optical matched filter can be used in performing the coarse alignment. It should be noted that in the embodiments described herein, the inclusions in the reference image are held fixed, while the inclusions in the test image are transformed to their optimized positions using the non-linear data processing algorithm.
  • initial coarse alignment of a test image and a reference image can take place using a two-dimensional correlation technique.
  • Illustrative two- dimensional correlation techniques can include, for example, cross correlation, sum of absolute difference correlation, sum squared distance cross correlation, and normalized cross correlation. Other two-dimensional correlation techniques are also envisioned. Additional details regarding the above two-dimensional correlation techniques can be found in the Appendix I of the disclosure.
  • these two-dimensional correlation techniques can be applied prior to performing a non-linear data processing algorithm.
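  • For reference (an illustrative reading, not text from Appendix I), the four correlation measures named above can be written compactly for two equally sized patches; coarse alignment then scans candidate shifts and keeps the shift that maximizes the correlation measures or minimizes the difference measures.

```python
import numpy as np

def cross_correlation(a, b):
    return float((a * b).sum())                   # larger means better match

def sum_of_absolute_differences(a, b):
    return float(np.abs(a - b).sum())             # smaller means better match

def sum_squared_distance(a, b):
    return float(((a - b) ** 2).sum())            # smaller means better match

def normalized_cross_correlation(a, b):
    a0, b0 = a - a.mean(), b - b.mean()
    denom = np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum()) + 1e-12
    return float((a0 * b0).sum() / denom)         # 1.0 is a perfect linear match
```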
  • sharpening of a test image and/or a reference image can take place prior to applying the two-dimensional correlation technique.
  • Image sharpening can take place by any technique.
  • a 2-pixel image sharpening technique can be applied.
  • an unsharp masking filter can be applied after image sharpening.
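  • As one hedged example of such sharpening (the specific kernel, blur width, and gain below are assumptions rather than the patent's prescription), unsharp masking adds back a scaled high-frequency residual:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.0):
    """Sharpen by adding the high-frequency residual:
    sharpened = image + amount * (image - blur(image))."""
    image = image.astype(float)
    blurred = gaussian_filter(image, sigma=sigma)
    return image + amount * (image - blurred)
```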
  • a generalized three-dimensional transformation can be used during or subsequent to utilizing the non-linear data processing algorithm.
  • higher order model generalizations can be used in overlaying a test image upon a reference image.
  • the foregoing transformations can account for non-linear parameters in a test image and a reference image and allow sectors of the test image to be deformed onto the reference image, as described in more detail below.
  • An Affine transformation involves a geometric spatial transformation (e.g. , rotation, scaling, and/or shear) and a translation (movement) of an inclusion.
  • a generalized Perspective transformation can be used to handle higher dimensional surface topographies.
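  • A minimal sketch of how a test sector might be deformed with an Affine-style map (the parameterization by rotation, magnification, shear and translation is an assumption; the patent does not fix one):

```python
import numpy as np
from scipy.ndimage import affine_transform

def warp_sector_affine(sector, rotation_deg=0.0, scale=1.0, shear=0.0,
                       translation=(0.0, 0.0)):
    """Apply rotation, magnification, shear and translation about the sector
    center. scipy's affine_transform maps output coordinates back to input
    coordinates, so the inverse of the forward 2 x 2 matrix is supplied."""
    theta = np.deg2rad(rotation_deg)
    forward = scale * np.array([[np.cos(theta), -np.sin(theta) + shear],
                                [np.sin(theta),  np.cos(theta)]])
    inverse = np.linalg.inv(forward)
    center = (np.array(sector.shape, dtype=float) - 1.0) / 2.0
    offset = center - inverse @ (center + np.asarray(translation, dtype=float))
    return affine_transform(sector.astype(float), inverse, offset=offset, order=1)
```

A Perspective transformation would extend this to a 3 x 3 homography acting on homogeneous coordinates, which corresponds to the higher-dimensional surface topographies mentioned above.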
  • the image processing device can be operable for subdividing each image into a plurality of sectors and determining a set of mapping coefficients for each of the plurality of sectors.
  • the image processing device can be operable to deform each sector in the test image onto a corresponding sector in the reference image, after determining the set of mapping coefficients for each sector, thereby overlaying the inclusions therein. By deforming each sector in a test image onto a corresponding sector in a reference image, inclusions therein can be overlaid and compared for differences according to some embodiments.
  • the image processing device can process both linear parameters and non-linear parameters in overlaying the test image and the reference image.
  • the image processing device can be operable to determine morphological changes that occur in inclusions in the test image relative to the reference image. In some embodiments, these changes can be listed as a signature vector for the inclusions. Attributes of the signature vector can include, for example, changes in aerial size, inclusion spatial asymmetry, inclusion boundary characterization, color changes, and the like.
  • the image processing device can be operable to provide visual depictions of each element of the signature vectors or combined depictions of the elements of the signature vectors as Geographical Information System (GIS) information maps that depict the type and magnitude of changes that exist across each inclusion.
  • linear parameters are the modeling coefficients that describe the linear translation between a test image and a reference image.
  • Linear parameters include vector quantities that describe an inclusion's real position in three-dimensional space, particularly x-, y- and z-coordinates.
  • non-linear parameters are the modeling parameters used in the non-linear data processing algorithm, including, for example, rotation, magnification, shear and the like. Collectively, the linear parameters and the non-linear parameters can alter the apparent real position or appearance of an inclusion in two- and three-dimensional space.
  • the image processing device can process the linear parameters prior to processing the non-linear parameters.
  • the linear parameters of the state vector are easier to address computationally and can be used to achieve a better initial solution for the position of each inclusion.
  • the initial solution can be fed into the non-linear data processing algorithm when the non-linear parameters are processed.
  • the nonlinear parameters can be processed to "fine tune" the optimal linear position for the mapping of sectors in the test image onto corresponding sectors in the reference image. This can provide an enhanced non-linear correction.
  • both the linear parameters and the nonlinear parameters can be processed in each iteration of the non-linear data processing algorithm.
  • the linear parameters can be processed separately prior to using the nonlinear data processing algorithm.
  • in some embodiments, the linear parameters are processed initially, and the non-linear parameters are temporarily ignored.
  • the non-linear parameters can be processed separately or in combination with the linear parameters.
  • Such initial processing of the linear parameters can advantageously increase processing speed.
  • the linear parameters can be initially processed by a processing algorithm that is separate from the non-linear data processing algorithm, before an initial solution for the inclusions' positions is fed into the non-linear data processing algorithm.
  • the images can be sharpened prior to processing of the linear parameters and the non-linear parameters.
  • non-linear parameters are processed using the non-linear data processing algorithm.
  • linear parameters can often be effectively addressed through standard image processing techniques, as noted above.
  • standard techniques can be inefficient when addressing the non-linear parameters related to the images.
  • the non-linear data processing algorithms used in the present embodiments can be particularly adept at addressing the non-linear parameters associated with the geometric transformation used in the non-linear data processing algorithm.
  • the convergence rate can nearly double by having the non-linear data processing algorithm process only the non-linear parameters.
  • the increase in convergence rate can be even greater.
  • overlay of the test image and the reference image can be iteratively performed for a fixed number of cycles. In other embodiments, overlay of the test image and the reference image can be iteratively performed using the non-linear data processing algorithm until a desired degree of convergence is reached through optimization. In some embodiments, convergence can be determined when an objective function within the test image is minimized or a difference of the objective function is minimized between iterations. That is, in such embodiments, convergence can be determined when the error (as measured by the change in objective function between iterations) between the test image and the reference image is minimized.
  • Illustrative objective functions can include, for example, image entropy, Hamming distance, gray level per band, mutual information estimation, and any combination thereof.
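  • One hedged reading of two of these objective functions (the bin counts, and the use of the difference-image entropy as the "entropy difference", are assumptions rather than the patent's exact definitions):

```python
import numpy as np

def image_entropy(image, bins=64):
    """Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(image, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def difference_entropy(test, reference, bins=64):
    """Entropy of the difference image; near zero for a good overlay of
    otherwise unchanged inclusions."""
    return image_entropy(test.astype(float) - reference.astype(float), bins=bins)

def mutual_information(test, reference, bins=64):
    """Mutual information of the joint gray-level histogram; larger is better."""
    joint, _, _ = np.histogram2d(test.ravel(), reference.ravel(), bins=bins)
    pxy = joint / max(joint.sum(), 1)
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```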
  • the non-linear data processing algorithm can be used to find a global minimum across each sector by adjusting the mapping coefficients. Once the optimal values for the mapping coefficients have been determined, any remaining differences can be characterized in terms of morphological changes in the inclusions within an image or due to residual alignment error.
  • the inclusion of non-linear parameters advantageously can provide better registration and change sensitivity detection between corresponding sectors within a test image and a reference image. When only linear parameters are processed to affect registration, higher levels of systematic errors can be introduced.
  • processing can be performed until mapping coefficient estimates and/or objective function estimates in successive iterations differ by less than a user-defined value. It is to be recognized that a desired degree of convergence will vary depending upon the intended application in which the image analysis system is used. Some applications may require a tighter convergence, while others will require less.
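  • A minimal convergence test of the kind described (the tolerance values are user-chosen assumptions):

```python
def has_converged(prev_coeffs, coeffs, prev_objective, objective,
                  coeff_tol=1e-3, objective_tol=1e-4):
    """Stop iterating when mapping-coefficient and objective-function
    estimates change by less than user-defined tolerances."""
    coeff_change = max(abs(c - p) for c, p in zip(coeffs, prev_coeffs))
    return coeff_change < coeff_tol and abs(objective - prev_objective) < objective_tol
```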
  • the sectors in the test image and the reference image are substantially identical in size.
  • the sectors in the test image can be larger than the sectors in the reference image.
  • Advantages of making the sectors in the test image larger can include allowing any residual error in sector positions remaining after the linear parameters are initially processed to be adequately compensated for when the non-linear parameters are processed using the nonlinear data processing algorithm.
  • when the test image and the reference image are optimally overlaid and the inclusions are unchanged, the entropy difference is zero. After an optimal overlay has been achieved, any non-zero entropy difference either represents morphological changes in the inclusion(s) over time or residual alignment error from the non-linear data processing algorithm.
  • image processing device is operable to determine any differences between the test image and the reference image for each inclusion after the overlay has been performed.
  • image comparison on an inclusion-by-inclusion basis can be performed by visual inspection after the overlay has been performed.
  • image comparison can be performed by the image processing device (e.g. , a computer or graphical processing unit) on a regional- or pixel-based basis.
  • factors that can influence the overlay efficiency and the accurate determination of a difference output include, for example, the ability to correct for global or local background alterations and local surface deformation about each inclusion.
  • in various embodiments, the test image and the reference image can be acquired in any order. That is, the test image can be acquired either before or after the reference image.
  • the processes described herein can provide mapping coefficients regardless of the acquisition order or if the roles of the images are interchanged.
  • FIGURE 3 shows an illustrative flowchart demonstrating how time sequence images can be overlaid in a particular embodiment.
  • the non-linear data processing algorithm is a particle swarm optimizer.
  • operation 200 a reference image is acquired at a first time.
  • a particle swarm model can be applied in operation 201 in order to generate a population of synthetic images in operation 202 that provides objective function information 203, which can later be used in analyzing a test image. This operation can provide an initial topography assessment of the state space.
  • a test image is acquired in operation 204.
  • a convergence check 205 is applied to test the goodness of fit of the inclusion overlay in the test image and the reference image.
  • the comparison between images can take place over the entire image or between sub-image sectors within the entire image.
  • Objective function information 203 can include differential entropy between the test image (or sector) and a reference image (or sector). If the overlay has not converged to a desired degree, the particle swarm model can be applied again, and the convergence check repeated.
  • the parameters of the inclusions in the test image become part of the objective function information 203 that is used in further assessing the goodness of fit for each inclusion.
  • Operation 206 can involve a deformation of sectors containing the inclusions in the reference image using a geometric transformation (e.g. , an Affine transformation or a Perspective transformation) in some embodiments.
  • changes in the inclusions between the test image and the reference image can be assessed in operation 207, and an output illustrating the differences for each inclusion can be produced in operation 208.
  • all the inclusions are illustrated in the output.
  • the output can be filtered such that only inclusions having selected physical attributes (e.g., size, color and/or aspect ratio) are indicated as being changed between the test image and the reference image.
  • FIGURE 4 shows an illustrative flowchart demonstrating how time sequence images can be overlaid in another particular embodiment.
  • reference image data and test image data can be collected in operations 301 and 304, respectively, and partitioned into sectors in operations 302 and 305. Morphological filtering of the images can then take place in operations 303 and 306, which can remove background clutter from the images. Thereafter, a "quick-look" difference of the reference image and the test image can be performed in operation 307. Spatial image sharpening of the test image and the reference image can be performed in operation 308. Processing of linear image parameters can then be used to produce a translational estimation for each sector of the image overlay in operation 309.
  • a sector translation vector assessment can be generated for each sector in operation 310, followed by test sector redicing of the original test image in operation 311. Based upon the estimated translational differences, a revised test image partition can be generated in operation 312. Any of the foregoing operations can be performed iteratively in order to achieve a desired degree of convergence for the translational overlay of the test image and the reference image.
  • a particle swarm optimizer can be used in operation 313 to further refine the positions of the inclusions within the various sectors. Thereafter, the test image and the reference image can be registered in operation 314 and a change assessment in the images can be performed in operation 315. Again, any of the operations for processing the non-linear parameters can also be processed iteratively to achieve a desired degree of convergence. An output can be produced in the form of a change map output in operation 316.
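  • Read as code, the FIGURE 4 flow might be strung together roughly as follows. This is a skeleton only: it reuses the earlier sketches (unsharp_mask, local_translation, warp_sector_affine, particle_swarm_optimize, difference_entropy), stands in a median filter for the morphological filtering of operations 303/306, and the sector size and parameter bounds are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def dice(image, cell=64):
    """Partition an image into cell x cell sectors keyed by top-left corner."""
    return {(y, x): image[y:y + cell, x:x + cell]
            for y in range(0, image.shape[0] - cell + 1, cell)
            for x in range(0, image.shape[1] - cell + 1, cell)}

def register_and_compare(reference, test, cell=64):
    """Skeleton of the FIGURE 4 flow: partition and filter (ops 301-306),
    sharpen (op 308), estimate per-sector translation (ops 309-312), refine
    non-linear parameters with PSO (op 313), then difference (ops 314-316)."""
    ref_sectors = dice(median_filter(reference, size=3), cell)
    test_sectors = dice(median_filter(test, size=3), cell)
    change_map = np.zeros(reference.shape, dtype=float)
    for (y, x), ref_sec in ref_sectors.items():
        ref_sec = unsharp_mask(ref_sec)
        test_sec = unsharp_mask(test_sectors[(y, x)])
        # equal-sized sectors give a trivial (0, 0) coarse shift here; in
        # practice the test sector would be cut larger than the reference one
        shift, _ = local_translation(ref_sec, test_sec)
        objective = lambda p, t=test_sec, r=ref_sec, s=shift: difference_entropy(
            warp_sector_affine(t, rotation_deg=p[0], scale=p[1],
                               translation=(s[0] + p[2], s[1] + p[3])), r)
        params, _ = particle_swarm_optimize(
            objective, bounds=[(-10, 10), (0.9, 1.1), (-5, 5), (-5, 5)])
        aligned = warp_sector_affine(test_sec, rotation_deg=params[0],
                                     scale=params[1],
                                     translation=(shift[0] + params[2],
                                                  shift[1] + params[3]))
        change_map[y:y + cell, x:x + cell] = aligned - ref_sec
    return change_map
```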
  • FIGURES 5A - 5D show an illustrative series of images before and after alignment using the present image analysis systems and methods, and the corresponding difference images produced in each case.
  • FIGURES 5A and 5B show illustrative test and reference images of a mole inclusion before and after alignment
  • FIGURE 5C shows an illustrative difference image of the misaligned images in FIGURE 5A.
  • FIGURE 5D shows an illustrative difference image of the aligned images in FIGURE 5B.
  • the difference image of FIGURE 5C might be interpreted by the image analysis system as a significant change.
  • the difference image of FIGURE 5D might not be interpreted by the image analysis system as a significant change.
  • the difference image of FIGURE 5C could represent a false positive result that would need further analysis by a physician.
  • the present image analysis systems and methods can lessen the number of false positive results needing further clinical analysis.
  • FIGURE 6A shows an illustrative 4D scatter plot of mapping coefficients for four parameters (translation, rotation, magnification and background color) before processing with a particle swarm optimizer.
  • FIGURES 6B - 6D show illustrative 2D scatter plots of rotation, magnification and translation parameters before processing with a particle swarm optimizer.
  • FIGURES 6E - 6H show illustrative plots corresponding to those of FIGURES 6A - 6D illustrating the convergence of mapping coefficients after processing with the particle swarm optimizer.
  • Various image collection devices can be used in association with the present image analysis systems and methods. In some embodiments, the image collection device can acquire a visual image such as a photograph.
  • the image collection device can be a camera.
  • image collection devices other than visual image collection devices can be used.
  • confocal microscopes, magnetic imaging devices (e.g., MRI), hyperspectral imaging devices, multispectral imaging devices, thermal sensing devices, polarimetric sensing devices, radiometric sensing devices, and any other like sensing device can be used. That is, the present image analysis systems and methods are not limited to the analysis of inclusions contained within visual images.
  • more than one image collection device can be used in overlaying the inclusions in the test image with those in the reference image.
  • a combination of a visual image and a thermal image might be used to produce a more accurate overlay.
  • the visual image might not be significantly changed between a test image and a reference image, but a thermal property of the inclusion might be altered between the two.
  • Other combinations of visual and non-visual imaging techniques or between various non-visual imaging techniques are also envisioned and within the scope of this invention.
  • the present image analysis systems and methods can produce an output via at least one data output device.
  • Suitable data output devices can include, for example, computer monitors, printers, electronic storage devices and the like.
  • the image processing device can produce a difference image at the data output device that highlights any significant changes between the test image and the reference image for any of the inclusions therein.
  • Image differencing produces a scalar quantity.
  • Vector quantities can be utilized in image comparison as well.
  • morphological changes in a test image can be represented in the form of a state vector where elements of the state vector correspond to changes in inclusion size, color, geometry and border characteristics. This information can then be presented to a user of the present systems in the form of a Geographical Information System (GIS) where two-dimensional image planes represent the magnitude of each vector component.
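  • One hedged sketch of such a signature vector for a single inclusion, computed from a boolean inclusion mask and the underlying pixel values (the specific attribute definitions below are assumptions):

```python
import numpy as np

def signature_vector(mask, pixels):
    """Summarize one inclusion as (aerial size, asymmetry, border roughness,
    mean color...), given a boolean mask and the image pixels it indexes."""
    ys, xs = np.nonzero(mask)
    area = float(mask.sum())
    cy, cx = ys.mean(), xs.mean()
    # asymmetry: normalized difference of second moments along the two axes
    asymmetry = abs(((ys - cy) ** 2).mean() - ((xs - cx) ** 2).mean()) / max(area, 1.0)
    # border roughness: count of mask edge pixels relative to an equal-area circle
    edges = (mask ^ np.roll(mask, 1, axis=0)) | (mask ^ np.roll(mask, 1, axis=1))
    border = edges.sum() / max(2.0 * np.sqrt(np.pi * area), 1.0)
    mean_color = np.atleast_1d(pixels[mask].mean(axis=0))
    return np.concatenate(([area, asymmetry, border], mean_color))
```

Stacking the elements of such vectors as two-dimensional planes over the image gives the GIS-style presentation described above.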
  • the image processing devices described herein can contain a computer.
  • the image processing devices can utilize a graphical processing unit. Such graphical processing units can be part of a computer or they can be a standalone module, if desired. Computers and graphical processing units can utilize any of the previously described computer hardware, software, or other like processing components known in the art.
  • image analysis systems described herein include at least one image collection device, an image processing device operating a particle swarm optimizer, and at least one data output device.
  • the image processing device is operable to overlay a test image and a reference image upon one another and perform a comparison therebetween by processing both linear parameters and non-linear parameters, where each image contains a plurality of inclusions.
  • the test image and the reference image can be subdivided into a plurality of sectors, where each sector contains at least one inclusion.
  • the test image and/or reference image can be sharpened prior to overlayment.
  • methods for overlaying and analyzing images containing a plurality of inclusions include acquiring a reference image containing a plurality of inclusions, acquiring a test image containing the plurality of inclusions, overlaying the plurality of inclusions in the test image upon the plurality of inclusions in the reference image by using a non-linear data processing algorithm, and producing an output that illustrates any differences for each inclusion between the test image and the reference image after overlaying takes place.
  • the plurality of inclusions can be located on a deformable surface. In other embodiments, the plurality of inclusions can be located on a rigid surface.
  • the methods can further include performing a coarse alignment of the plurality of inclusions in the test image upon the plurality of inclusions in the reference image, prior to using the non-linear data processing algorithm.
  • performing a coarse alignment can be further facilitated by positioning the at least one image collection device and the area being imaged into a standard orientation. For example, a patient being imaged may be requested to stand or sit in a specified orientation from image to image. By employing a standard orientation of the image collection device(s) and the area being imaged, it can be possible to orient the plurality of inclusions in the test image as close as possible to their "correct" positions by minimizing translational-type errors and image processing device alignment-type errors.
  • the test image and/or the reference image can be sharpened prior to overlaying the images.
  • the present methods can involve dividing the reference image into a plurality of sectors. By performing this operation, the optimal orientation parameters for the image collection device(s) can be determined for each reference sector prior to the analysis of a corresponding sector in the test image. Thus, the local topography about each inclusion in the test image can be initially assessed prior to application of the non-linear data processing algorithm for analyzing the test image.
  • the sectors can be uniform in size. In some embodiments, the sectors can be variable in size. In some embodiments, each sector can contain at least one inclusion.
  • the sectors are small relative to the overall image space, such that they are substantially rigid on a local basis about each inclusion. Thus, by having small sectors, rigid body alignment techniques can be applied on a local basis for each inclusion in a test image.
  • the present methods can further include analyzing the reference image using linear parameters to determine an initial topography solution for the test image. As noted above, determination of an initial topography solution for the test image can enhance the convergence rate of the non-linear data processing algorithm.
  • the present methods can further include determining mapping coefficients for the inclusions in the test image and/or the reference image.
  • the linear parameters can be processed before the non-linear parameters. In some embodiments, only the non-linear parameters are processed using the non-linear data processing algorithm. In some embodiments, an initial optimization of the linear parameters can be fed into the non-linear data processing algorithm and processed with the non-linear parameters. In some embodiments, both linear parameters and non-linear parameters can be used to overlay the sectors in the test image upon the corresponding sector in the reference image.
  • overlaying can be performed iteratively until a desired degree of convergence is reached. In some embodiments, overlaying can be performed iteratively until a fixed number of cycles have been conducted. In some embodiments, a desired degree of convergence can be based upon a rate or amount of change of the mapping coefficients estimated in successive iterations. In some embodiments, the desired degree of convergence can be based upon a minimization of an objective function for the plurality of sectors within a test image, or a difference thereof between successive iterations. In some embodiments, the desired degree of convergence can be based upon minimization of an objective function obtained from a difference image generated after overlaying the test image and the reference image.
  • the present methods can further include deforming each sector of the test image onto a corresponding sector of the reference image.
  • each sector can be deformed using an Affine transformation or a Perspective transformation.
  • the output of the present methods can be filtered.
  • the output can be filtered such that only inclusions having selected physical attributes are indicated as being changed between the test image and the reference image.
  • while compositions and methods are described in terms of “comprising,” “containing,” or “including” various components or steps, the compositions and methods can also “consist essentially of” or “consist of” the various components and operations. All numbers and ranges disclosed above can vary by some amount. Whenever a numerical range with a lower limit and an upper limit is disclosed, any number and any subrange falling within the broader range is specifically disclosed. Also, the terms in the claims have their plain, ordinary meaning unless otherwise explicitly and clearly defined by the patentee. If there is any conflict in the usages of a word or term in this specification and one or more patent or other documents that may be incorporated herein by reference, the definitions that are consistent with this specification should be adopted.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to image analysis systems that use a non-linear data processing algorithm to overlay and compare time sequence images. The image analysis system and method also sharpen at least one of the time sequence images during the process in order to improve the alignment accuracy of the images.
PCT/US2012/064195 2011-11-08 2012-11-08 Image analysis systems having image sharpening capabilities and methods using same Ceased WO2013070945A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161557377P 2011-11-08 2011-11-08
US61/557,377 2011-11-08

Publications (1)

Publication Number Publication Date
WO2013070945A1 true WO2013070945A1 (fr) 2013-05-16

Family

ID=48290554

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/064195 Ceased WO2013070945A1 (fr) 2011-11-08 2012-11-08 Image analysis systems having image sharpening capabilities and methods using same

Country Status (1)

Country Link
WO (1) WO2013070945A1 (fr)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009148596A1 (fr) * 2008-06-04 2009-12-10 Raytheon Company Image processing system and methods for aligning skin features for early skin cancer detection

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GOTTESFELD BROWN L: "A SURVEY OF IMAGE REGISTRATION TECHNIQUES", ACM COMPUTING SURVEYS, ACM, NEW YORK, NY, US, US, vol. 24, no. 4, 1 December 1992 (1992-12-01), pages 325 - 376, XP002942558, ISSN: 0360-0300, DOI: 10.1145/146370.146374 *
RADKE R J ET AL: "Image Change Detection Algorithms: A Systematic Survey", IEEE TRANSACTIONS ON IMAGE PROCESSING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 14, no. 3, 1 March 2005 (2005-03-01), pages 294 - 307, XP002602265, ISSN: 1057-7149, DOI: 10.1109/TIP.2004.838698 *
ZITOVA B ET AL: "IMAGE REGISTRATION METHODS: A SURVEY", IMAGE AND VISION COMPUTING, ELSEVIER, GUILDFORD, GB, vol. 21, no. 11, 1 October 2003 (2003-10-01), pages 977 - 1000, XP002522120, ISSN: 0262-8856, [retrieved on 20030827], DOI: 10.1016/S0262-8856(03)00137-9 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104112278A (zh) * 2014-08-01 2014-10-22 西安电子科技大学 基于协方差的多光谱图像实时配准方法
CN106204601A (zh) * 2016-07-15 2016-12-07 华东师范大学 一种基于波段扫描形式的活体高光谱序列图像并行配准方法
CN106204601B (zh) * 2016-07-15 2018-09-28 华东师范大学 一种基于波段扫描形式的活体高光谱序列图像并行配准方法
WO2019024137A1 (fr) * 2017-07-31 2019-02-07 中国科学院地质与地球物理研究所 Procédé d'imagerie à haute précision par topographie tridimensionnelle de fissures de test de fracturation hydraulique de roches
CN109859110A (zh) * 2018-11-19 2019-06-07 华南理工大学 基于光谱维控制卷积神经网络的高光谱图像全色锐化方法
CN109859110B (zh) * 2018-11-19 2023-01-06 华南理工大学 基于光谱维控制卷积神经网络的高光谱图像全色锐化方法
CN110175194A (zh) * 2019-04-19 2019-08-27 中国矿业大学 一种基于关联规则挖掘的煤矿巷道围岩变形破裂辨识方法
GR1011078B (el) * 2025-01-09 2025-11-21 Εταρεια Υδρευσης Και Αποχετευσης Θεσσαλονικης Ανωνυμη Εταιρεια, Οπτικη μεθοδος τηλεπισκοπησης εντοπισμου ιχνων στην επιφανεια του εδαφους ως δεικτες υπογειων δομων

Similar Documents

Publication Publication Date Title
US20130188878A1 (en) Image analysis systems having image sharpening capabilities and methods using same
US20120020573A1 (en) Image analysis systems using non-linear data processing techniques and methods using same
US12405588B1 (en) Surface data, acquisition, storage, and assessment system
De Senneville et al. EVolution: an edge-based variational method for non-rigid multi-modal image registration
CN104838422B (zh) 图像处理设备及方法
US11164670B2 (en) Methods and apparatus for identifying skin features of interest
AU2013343577B2 (en) Skin image analysis
US20080292164A1 (en) System and method for coregistration and analysis of non-concurrent diffuse optical and magnetic resonance breast images
CN107007267A (zh) 用于分析热图像的方法、设备和系统
WO2013070945A1 (fr) Systèmes d'analyse d'image à capacités d'accentuation des contours d'image et procédés les utilisant
WO2012012576A1 (fr) Systèmes d'analyse d'images utilisant des techniques de traitement de données non linéaires et leurs procédés d'utilisation
Kretschmer et al. ADR-anatomy-driven reformation
Shakeri et al. Statistical shape analysis of subcortical structures using spectral matching
Alam et al. Evaluation of medical image registration techniques based on nature and domain of the transformation
Liu et al. Mutual information based three-dimensional registration of rat brain magnetic resonance imaging time-series
Afzali et al. Inter-patient modelling of 2D lung variations from chest X-ray imaging via Fourier descriptors
Javaherian et al. A fast time-difference inverse solver for 3D EIT with application to lung imaging
Shboul et al. Quantitative mr image analysis for brain tumor
Sargent et al. Semi-automatic 3D lung nodule segmentation in CT using dynamic programming
CN103230274A (zh) 一种弥散磁共振图像计算方法及基于其的分析方法
Vicente et al. Gradient‐based 3D skin roughness rendering from an in‐vivo skin image for dynamic haptic palpation
CN117541493A (zh) 一种基于改进分数阶累积量的三维特征医学图像融合方法
Mosaliganti et al. An imaging workflow for characterizing phenotypical change in large histological mouse model datasets
Jamil et al. Image registration of medical images
Alam et al. Quantitative evaluation of intrinsic registration methods for medical images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12788699

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12788699

Country of ref document: EP

Kind code of ref document: A1