
WO2006059337A2 - System and method for automatic detection of inconstancies of objects in a sequence of images - Google Patents


Info

Publication number
WO2006059337A2
WO2006059337A2 (PCT application PCT/IL2005/001298)
Authority
WO
WIPO (PCT)
Prior art keywords
images
segment
image
inconstancy
segments
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IL2005/001298
Other languages
French (fr)
Other versions
WO2006059337A3 (en)
Inventor
Chen Brestel
Yair Shimoni
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Elbit Systems Electro Optics ELOP Ltd
Original Assignee
Elbit Systems Electro Optics ELOP Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Elbit Systems Electro Optics ELOP Ltd filed Critical Elbit Systems Electro Optics ELOP Ltd
Publication of WO2006059337A2 publication Critical patent/WO2006059337A2/en
Publication of WO2006059337A3 publication Critical patent/WO2006059337A3/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30241 Trajectory

Definitions

  • the disclosed technique relates to image processing, in general, and to methods of detecting inconstancies of objects in an image, in particular.
  • Some image sequences may be acquired differently. These differences may include the time interval over which the images of a scene were acquired, which can be on the order of days or even years.
  • The type of image, such as aerial or satellite, may also differ.
  • The devices by which the images were acquired, such as cameras, may change as well.
  • Such differences in acquisition may cause complex scenes, which may include rocks, bushes, buildings and vehicles, to appear very different in two images. These differences may be semantic (i.e., meaningful objects may have appeared or disappeared). They may also result from different lighting conditions, changes in shadows, changes of viewpoint, seasonal changes of the scene and distortions of the scene.
  • Some image sequences may be acquired by different imaging devices, which may employ different types of sensors.
  • These sensors may operate at different spectrums (e.g., one imaging device operative at the visible light spectrum and another operative at the infrared spectrum), at different resolutions, or may come from different vendors or be of different models. These differences, in image acquisition devices or in image acquisition time, may result in discrepancies between the images.
  • Inconstancy detection relates closely to the field of change detection, known in the art.
  • Change detection is directed to comparing between different images of the same scene, and detecting regions in these images, in which change has occurred. These different images are either acquired at different times or by different image acquisition devices.
  • Techniques for detecting changes between two images, as known in the art, are based on examining the properties of individual picture elements (i.e., pixels). A decision is made per pixel whether a change has occurred.
  • The publication "Image Change Detection Algorithms: A Systematic Survey" by Radke et al. provides an overview of such techniques. It can be found at the following address: http://www.ecse.rpi.edu/homepages/rjradke/papers/radketip04.pdf
  • Image segmentation is a technique that divides the image into regions (i.e., segments) based on colour, texture, brightness and other properties. Each segment may represent a meaningful object in the image such as a building, a car or a field.
  • Multi-scale segmentation is a technique of creating multiple segmentations of the image. Each segmentation is at a different scale (i.e., finer or coarser with respect to segment size). These techniques measure the differences between a segment in one image and a corresponding segment in another image.
  • the system may interpret (i.e., classify) the differences as changes, for example in vegetation growth.
  • European patent 1217580 issued to Kim et al., entitled "Method And Apparatus For Measuring Colour-Texture Distance And Image Segmentation Based On Said Measure", is directed to a system and method for multi-scale segmentation.
  • The system pre-processes the image and calculates a colour measure and a texture measure for each pixel.
  • The system calculates a colour distance and a texture distance between two pixels.
  • The system adds the colour distance and the texture distance to form the colour-texture distance between the two pixels.
  • The system considers two pixels as belonging to the same segment if their colour-texture distance does not exceed a certain threshold.
  • A large threshold value will cause a coarser segmentation (i.e., the segments will be larger).
  • The system creates an image graph describing the relationship between the segments.
  • The image graph further contains information about each segment.
  • The system refines the segmentation by merging neighbouring segments with similar colour-texture distances based on a second threshold, and updates the image graph.
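The colour-texture distance scheme just described can be sketched as follows. The Euclidean colour distance, the scalar texture measure and the unit weights are assumptions for illustration; the Kim patent defines its own measures.

```python
import numpy as np

def colour_texture_distance(p, q, w_colour=1.0, w_texture=1.0):
    """Combined distance between two pixels, each described by a colour
    vector and a scalar texture measure: the colour distance and the
    texture distance are computed separately and then added, following
    the scheme the Kim patent describes."""
    colour_dist = np.linalg.norm(
        np.asarray(p["colour"], float) - np.asarray(q["colour"], float))
    texture_dist = abs(p["texture"] - q["texture"])
    return w_colour * colour_dist + w_texture * texture_dist

def same_segment(p, q, threshold):
    """Two pixels belong to the same segment when their colour-texture
    distance does not exceed the threshold; a larger threshold merges
    more pixels and therefore yields a coarser segmentation."""
    return colour_texture_distance(p, q) <= threshold

# Illustrative pixels (RGB colour plus a made-up texture score).
grass = {"colour": (40, 120, 40), "texture": 0.8}
road  = {"colour": (90, 90, 90),  "texture": 0.1}
```

With a small threshold `grass` and `road` fall into different segments; raising the threshold eventually merges them, which is precisely the coarser-scale behaviour.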
  • The publication "Comparison Of Object Oriented Classification Techniques And Standard Image Analysis for The Use Of Change Detection Between SPOT Multispectral Satellite Images and Aerial Photos" by G. Willhauck, in ISPRS Vol. XXXIII, 2000, describes the use of multi-scale segmentation for the purpose of change detection. Specifically, it is directed at a method for detecting areas deforested since 1960 in the temperate forest of Tierra del Fuego, Argentina.
  • The technique uses commercial computer software.
  • The technique uses three recent satellite images and one aerial photo from 1960.
  • The technique uses the computer software to segment the aerial image from 1960 into two regions, and classifies the segments as forest and non-forest.
  • The software segments the recent satellite images to a finer scale based on the coarser segmentation of the aerial image.
  • The software classifies non-forest segments originating from a forest segment as deforested areas. Thus, changes between the image acquired in 1960 and the current images are detected.
  • a system for detecting inconstant representations of objects across a plurality of images of substantially the same scene includes an image segmentor and an inconstancy detector.
  • the image segmentor is coupled with the inconstancy detector.
  • the image segmentor segments each of the images, into a plurality of segments, thereby producing a respective segmentation representation.
  • the inconstancy detector detects segment inconstancy between the segmentation representation of one of the images and the segmentation representation of at least another image, thereby identifying inconstant segments.
  • Segment inconstancy is defined by the existence of a certain segment, at a certain location, in one image and the inexistence of a substantially similar segment or of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other images.
  • A method for detecting inconstant representations of objects across a plurality of images of substantially the same scene includes the procedures of segmenting each of the images into a plurality of segments, thereby producing a respective segmentation representation, and detecting segment inconstancy. Segment inconstancy is detected between the segmentation representation of one of the images and the segmentation representation of at least another image.
  • The procedure of detecting segment inconstancy identifies inconstant segments. Segment inconstancy is defined by the existence of a certain segment, at a certain location, in one image and the inexistence of a substantially similar segment, or of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other images.
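A minimal sketch of the segment-inconstancy test defined above, assuming segments are summarised as (centre_x, centre_y, area), and approximating "essentially the same location" and "substantially similar" with a centroid-distance threshold and an area ratio (both thresholds are illustrative assumptions):

```python
import math

def is_inconstant(segment, other_segments,
                  max_centre_dist=5.0, max_size_ratio=1.5):
    """A segment is declared inconstant when no substantially similar
    segment exists at essentially the same location in the other image.
    Similarity is sketched as: centroids within max_centre_dist pixels
    and areas within a max_size_ratio factor of each other."""
    cx, cy, area = segment
    for ox, oy, oarea in other_segments:
        close = math.hypot(cx - ox, cy - oy) <= max_centre_dist
        similar = max(area, oarea) / min(area, oarea) <= max_size_ratio
        if close and similar:
            return False          # a matching segment exists: constant
    return True                   # no match anywhere: inconstant

# Segments found in the other image: one near (11, 9), one near (50, 50).
others = [(11.0, 9.0, 38.0), (50.0, 50.0, 40.0)]
```

A segment at (10, 10) with area 40 is matched by the first candidate and so is constant; the same segment moved to (30, 30), or blown up to ten times the area, has no counterpart and is inconstant.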
  • a system for detecting inconstant representations of objects across a plurality of images of substantially the same scene includes an image segmentor, a segment identifier and an inconstancy detector.
  • the segment identifier is coupled with the image segmentor and with the inconstancy detector.
  • the image segmentor segments each of the images, into a plurality of segments, thereby producing a respective segmentation representation.
  • the segment identifier identifies, in each of the images, segments of interest, with essentially the same segment characteristics.
  • the inconstancy detector detects inconstancy between the segmentation representation of one of the images and the segmentation representation of at least another image, thereby identifying inconstant segments.
  • Detecting inconstancy is defined by the existence of a certain segment of interest, at a certain location, in one image and the inexistence of a substantially similar segment or of a group of pixels which can be used to define a respective segment being substantially similar to a certain segment of interest, at essentially the same location, in the other images.
  • A method for detecting inconstant representations of objects across a plurality of images of substantially the same scene includes the procedures of segmenting each of the images into a plurality of segments, thereby producing a respective segmentation representation, identifying segments of interest with essentially the same segment characteristics in each of the images, and detecting segment inconstancy. Segment inconstancy is detected between the segmentation representation of one of the images and the segmentation representation of at least another image. The procedure of detecting segment inconstancy identifies inconstant segments.
  • Segment inconstancy is defined by the existence of a certain segment, at a certain location, in one image and the inexistence of a substantially similar segment or of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other images.
  • Figure 1 is a schematic illustration of a system for detecting inconstancy between images, constructed and operative in accordance with an embodiment of the disclosed technique;
  • Figure 2 is a schematic illustration of a method for detecting inconstancy between images, operative in accordance with another embodiment of the disclosed technique;
  • Figure 3 is a schematic illustration of a system for detecting inconstancy between images, constructed and operative in accordance with a further embodiment of the disclosed technique;
  • Figure 4 is a schematic illustration of a method for detecting inconstancy between images, operative in accordance with yet another embodiment of the disclosed technique;
  • Figures 5A and 6A are illustrations of two images, each acquired at a different time, to be analyzed, according to the disclosed technique;
  • Figures 5B and 6B are illustrations of respective segmentations of the images of Figures 5A and 6A, at certain segmentation levels, according to the disclosed technique;
  • Figures 5C and 6C provide object images which demonstrate the results of inconstancy analysis and detection, according to the disclosed technique.
  • Figure 7 is the same image as in Figure 6C with the circles from Figure 5C superimposed, according to the disclosed technique.

DETAILED DESCRIPTION OF THE EMBODIMENTS
  • the disclosed technique overcomes the disadvantages of the prior art by providing a system and method for automatically detecting inconstancies of selected segments, representing objects in a scene, in a sequence of multi-scale segmented images.
  • the disclosed technique compares selected segments in one image with segments from multiple segmentation scales in another image.
  • the system detects inconstancies between images, which were acquired at different time instances or by different imaging devices.
  • the system may further detect inconstancies between an image, of substantially the same scene, acquired at one time instance with a first imaging device, and images acquired at other time instances with at least a second imaging device.
  • one image may be acquired by a camera employing a first type of sensor (e.g., a visible light sensor) at a first point in time
  • another image may be acquired by a camera employing a second type of sensor (e.g., an infrared sensor) at a second point in time.
  • The different imaging devices may employ different types of sensors.
  • These sensors may operate at different spectrums (e.g., one imaging device operative at the visible light spectrum and another operative at the infrared spectrum), at different resolutions, or may come from different vendors or be of different models.
  • the disclosed technique may be adapted for civil or military purposes (e.g., detecting changes in general scenery), medical purposes (e.g., detecting changes in tissues), industrial purposes (e.g., detecting the appearance over time of failures such as cracks in structures using X-ray imaging), and the like.
  • The types of imaging systems which may be employed by the disclosed technique for acquiring these images (and the respective types of sensors used) include visible light imaging systems, near infrared imaging systems, microbolometer imaging systems, ultraviolet imaging systems, X-ray imaging systems, MRI imaging systems, ultrasound imaging systems, and the like.
  • the different images, in the image sequences may include discrepancies in image properties, such as contrast, luminance, chrominance, resolution, and the like.
  • A system initially co-registers the images, to approximately align the image coordinate systems.
  • the system may further smooth the images.
  • the system segments the images multiple times, each at a different scale. Each segment may represent a meaningful object in the scene.
  • the system attempts to identify, for each segment at each segmentation scale, a corresponding segment from any of the segmentation scales in the other images.
  • The system disregards segments which agree in location, shape and size.
  • the system further disregards segments in one image, for which the pixels of the segment are correlated with a group of pixels, which can be used to define a substantially similar segment at essentially the same location in the other images (i.e., a segment does not necessarily exist in the respective area of the other image).
  • The system retains the segments for which a corresponding segment was not identified.
  • the system further retains segments, for which the pixels of the segment are not correlated with a group of pixels in the respective area of the other images.
  • the system selects segments from the retained segments according to color, texture, shape and size criteria.
  • the system declares the selected segments inconstant.
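The retain/disregard logic described above can be sketched as a loop over the segments of one image, searching every segmentation scale of the other image; `matches` and `correlates` are stand-ins for the similarity and pixel-correlation tests, which the technique does not pin to specific formulas:

```python
def detect_inconstancies(segments_img1, scales_img2, matches, correlates):
    """For each segment of image 1, search every segmentation scale of
    image 2 for a corresponding segment; if none is found, also test
    whether a correlated pixel group could define a similar segment
    there. Whatever fails both tests is retained as inconstant."""
    retained = []
    for seg in segments_img1:
        found = any(matches(seg, other)
                    for scale in scales_img2
                    for other in scale)
        if not found and not correlates(seg):
            retained.append(seg)      # no counterpart at any scale
    return retained

# Toy run: segments are (x, y) centres; "matching" means an identical
# centre at any scale, and nothing is rescued by pixel correlation.
img1 = [(1, 1), (7, 7)]
img2_scales = [[(1, 1)], [(1, 1), (3, 3)]]
flagged = detect_inconstancies(img1, img2_scales,
                               matches=lambda a, b: a == b,
                               correlates=lambda s: False)
```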
  • the system described above is able to find inconstancies between images which were acquired by different imaging devices or different sensors or sensor types.
  • the different images may include many differences in contrast, in light level, in resolution and in other image properties, but real objects in the scene cause the segmentor to create similar segments regardless of the imaging device.
  • the system selects segments of interest according to color, size, shape and texture criteria.
  • The system disregards segments for which a corresponding segment was identified in the other images.
  • the system further disregards segments in one image, for which the pixels of the segment are correlated with a group of pixels, which can be used to define a substantially similar segment at essentially the same location in the other images.
  • System 100 receives a sequence of images as its input and outputs a sequence of images indicating the inconstant objects in the scene.
  • System 100 includes a pre-processor 102, an image segmentor 104, an inconstancy detector 106 and a segments identifier 108.
  • Pre-processor 102 is coupled with image segmentor 104.
  • Image segmentor 104 is coupled with inconstancy detector 106.
  • Inconstancy detector 106 is coupled with segments identifier 108.
  • Pre-processor 102 receives a sequence of images, and performs operations (e.g., co-registering, smoothing and enhancing) which are required to prepare the images for inconstancy detection.
  • Preprocessor 102 provides the prepared images to image segmentor 104.
  • Image segmentor 104 segments the images at multiple segmentation scales.
  • Image segmentor 104 provides the segmented images to inconstancy detector 106.
  • Inconstancy detector 106 declares a segment constant or inconstant.
  • Inconstancy detector 106 provides the segmented images with designated inconstant segments to segment identifier 108. Segments identifier 108 identifies segments of interest from each of the images and provides a representation, indicating the inconstant segments.
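The coupling of System 100's components can be sketched as a chain of interchangeable stages; the trivial stand-in stages below only demonstrate the data flow through the chain, not the actual algorithms:

```python
def make_system(pre_process, segment, detect, identify):
    """Mirrors the coupling in System 100: pre-processor -> image
    segmentor -> inconstancy detector -> segments identifier. Each
    stage is any callable; the wiring, not the algorithms, is the
    point of this sketch."""
    def run(images):
        prepared = pre_process(images)
        segmented = segment(prepared)
        flagged = detect(segmented)
        return identify(flagged)
    return run

# Hypothetical trivial stages, using strings in place of images.
system = make_system(
    pre_process=lambda imgs: [i.lower() for i in imgs],  # stand-in for co-registration etc.
    segment=lambda imgs: [list(i) for i in imgs],        # stand-in for segmentation
    detect=lambda segs: [s for s in segs if "x" in s],   # stand-in for inconstancy test
    identify=lambda segs: len(segs),                     # stand-in for segment selection
)
```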
  • a plurality of images to be compared for inconstancies are pre-processed.
  • Such pre-processing is directed at preparing the images for inconstancy detection.
  • a pre-processing sub-procedure may include co-registration. Co-registration is aimed at aligning the image coordinate systems.
  • Pre-processing may further include a smoothing sub-procedure. Smoothing is aimed at removing noise artifacts from the images.
  • Pre-processing may further include distortion correction, enhancement and any other operation required to prepare the images for inconstancy detection.
  • Pre-processor 102 pre-processes a plurality of images to be compared for inconstancies.
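A minimal sketch of the two pre-processing sub-procedures, assuming integer-translation co-registration by brute-force search and a 3x3 box filter for smoothing (real systems would use more capable registration and denoising methods):

```python
import numpy as np

def box_smooth(img):
    """3x3 box filter: a minimal smoothing sub-procedure for
    suppressing noise artifacts before segmentation."""
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out / 9.0

def coregister_shift(ref, mov, max_shift=3):
    """Brute-force integer co-registration: find the (dy, dx)
    translation of `mov` that minimises the sum of squared differences
    against `ref`, thereby approximately aligning the coordinate
    systems of the two images."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            err = np.sum((ref.astype(float) - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# A synthetic image and a copy shifted down by 2 and right by 1.
ref = np.arange(64).reshape(8, 8)
mov = np.roll(np.roll(ref, 2, axis=0), 1, axis=1)
```

Here `coregister_shift(ref, mov)` recovers the inverse translation, after which the two images can be compared segment by segment.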
  • a plurality of prepared images are segmented. Such segmentation is aimed at dividing the images into regions (i.e., segments).
  • the images are segmented a plurality of times, each at a different scale.
  • a segment at any scale may represent a meaningful object in the scene.
  • image segmentor 104 segments the prepared images at a plurality of different scales.
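A toy multi-scale segmentor, assuming that "scale" is controlled by how coarsely intensities are quantised before 4-connected component labelling; coarser quantisation merges more pixels into fewer, larger segments. This only stands in for the patent's unspecified segmentor:

```python
import numpy as np

def label_regions(quantised):
    """4-connected component labelling of equal-valued pixels via
    flood fill; returns the label map and the number of segments."""
    h, w = quantised.shape
    labels = np.full((h, w), -1, dtype=int)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            stack = [(sy, sx)]
            labels[sy, sx] = next_label
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny, nx] == -1
                            and quantised[ny, nx] == quantised[y, x]):
                        labels[ny, nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels, next_label

def multi_scale_segment(img, bin_sizes=(16, 64)):
    """Segment the image once per scale: a larger quantisation bin
    merges more intensities, giving a coarser segmentation."""
    return {b: label_regions(img // b) for b in bin_sizes}

# Four vertical stripes of increasing intensity.
img = np.tile(np.array([0, 20, 100, 120]), (4, 1))
segs = multi_scale_segment(img)
```

At the fine scale all four stripes survive as separate segments; at the coarse scale the two dark stripes merge, as do the two bright ones.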
  • In procedure 124, for each segment in one image, an attempt is made to identify a corresponding segment in the other images.
  • the attempt to identify a corresponding segment is made at any of the segmentation scales.
  • a segment is identified, if a corresponding segment in the other images exists in essentially the same location with essentially the same segment characteristics as the selected segment.
  • the segment characteristics may include color, size, shape and texture.
  • inconstancy detector 106 attempts to identify a corresponding segment in the other images.
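The correspondence search of procedure 124 can be sketched as below, assuming each segment is summarised by its location and a few characteristic values (colour, size, texture); the feature names and tolerance values are illustrative assumptions:

```python
def find_correspondence(seg, other_image_scales, tol):
    """Search every segmentation scale of the other image for a segment
    at essentially the same location with essentially the same
    characteristics; return the first (scale, segment) match, or None."""
    for scale, segments in sorted(other_image_scales.items()):
        for cand in segments:
            if abs(cand["x"] - seg["x"]) > tol["loc"]:
                continue
            if abs(cand["y"] - seg["y"]) > tol["loc"]:
                continue
            if all(abs(cand[k] - seg[k]) <= tol[k]
                   for k in ("colour", "size", "texture")):
                return scale, cand
    return None

# Illustrative tolerances and segments.
tol = {"loc": 3, "colour": 10, "size": 15, "texture": 0.2}
car = {"x": 40, "y": 12, "colour": 200, "size": 30, "texture": 0.5}
other = {
    "fine":   [{"x": 41, "y": 12, "colour": 196, "size": 27, "texture": 0.45}],
    "coarse": [{"x": 80, "y": 60, "colour": 90, "size": 400, "texture": 0.9}],
}
```

The `car` segment finds its counterpart at the fine scale; a segment with no counterpart at any scale returns None and would be retained for the inconstancy decision.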
  • In procedure 126, for each segment in one image, an attempt is made to identify, in the other images, a group of pixels which is correlated with the pixels of the segment and which can be used to define a substantially similar segment at essentially the same location.
  • A segment is identified if a group of pixels correlated with the pixels of the segment exists in the other images at essentially the same location as the segment.
  • inconstancy detector 106 attempts to identify a group of pixels in the other images.
  • inconstancy detector 106 retains the segments in one image for which a corresponding segment in the other images was not identified.
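Procedure 126's pixel-correlation fallback can be sketched with a Pearson correlation computed over the segment's pixel mask; the 0.9 cut-off and the choice of Pearson correlation are assumptions, since the technique does not name a particular correlation measure:

```python
import numpy as np

def pixels_correlated(img_a, img_b, mask, min_corr=0.9):
    """Even when no segment was produced at the matching location in
    the other image, the segment counts as constant if its pixels are
    strongly correlated with the pixels at the same location there."""
    a = img_a[mask].astype(float)
    b = img_b[mask].astype(float)
    if a.std() == 0 or b.std() == 0:
        return bool(np.allclose(a, b))
    return float(np.corrcoef(a, b)[0, 1]) >= min_corr

# The segment occupies the whole 3x3 patch in this toy example.
img_a = np.arange(9.0).reshape(3, 3)
mask = np.ones((3, 3), dtype=bool)
```

A contrast-and-brightness change (`2 * img_a + 5`) leaves the pixels perfectly correlated, so the segment is not flagged; an unrelated pattern at the same location is not correlated and the segment is retained.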
  • segments of interest are selected from the retained segments.
  • the segments are selected according to segment characteristics. Segment characteristics may include color, size, shape and texture.
  • segments identifier 108 selects the segments of interest.
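Selecting segments of interest from the retained segments can be sketched as a filter over per-segment characteristics; the specific criteria (area bounds plus a minimum compactness as a shape proxy) are illustrative stand-ins for whatever an application would specify:

```python
def select_segments_of_interest(segments, criteria):
    """Keep only the retained segments that satisfy every criterion;
    only these will be declared inconstant."""
    keep = []
    for s in segments:
        if not criteria["min_area"] <= s["area"] <= criteria["max_area"]:
            continue                      # too small or too large
        if s["compactness"] < criteria["min_compactness"]:
            continue                      # wrong shape (e.g. a shadow streak)
        keep.append(s)
    return keep

# Hypothetical retained segments and criteria tuned for vehicle-sized blobs.
criteria = {"min_area": 20, "max_area": 60, "min_compactness": 0.5}
retained = [
    {"name": "vehicle-sized blob", "area": 35, "compactness": 0.8},
    {"name": "long thin shadow",   "area": 40, "compactness": 0.1},
    {"name": "whole field",        "area": 900, "compactness": 0.9},
]
selected = select_segments_of_interest(retained, criteria)
```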
  • the selected segments are declared inconstant.
  • segments identifier 108 declares the selected segments inconstant.
  • the inconstant objects are represented.
  • The inconstant objects may be represented as a list, marked on an image or a map, alerted for, displayed on a video monitor or saved in a computer memory.
  • segments identifier 108 provides a representation of the inconstant objects.
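One possible representation of the detected inconstant objects is to burn a circular marker around each inconstant segment's centre into a copy of the image (the technique equally allows lists, maps, alerts, or display on a monitor); the marker geometry here is an illustrative choice:

```python
import numpy as np

def mark_inconstant(image, centres, radius=3):
    """Return a copy of the image with a brighter-than-anything ring
    drawn around each inconstant object's (row, col) centre."""
    out = image.astype(float).copy()
    h, w = out.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for (cy, cx) in centres:
        # Pixels whose distance from the centre is close to `radius`.
        ring = np.abs(np.hypot(yy - cy, xx - cx) - radius) < 0.7
        out[ring] = out.max() + 1
    return out

image = np.zeros((9, 9))
marked = mark_inconstant(image, [(4, 4)])
```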
  • the system may first select segments of interest. The system then detects if these segments of interest are constant or not.
  • System 160 receives a sequence of images as its input and outputs a sequence of images indicating the inconstant objects in the scene.
  • System 160 includes a pre-processor 162, an image segmentor 164, a segments identifier 166 and an inconstancy detector 168.
  • Pre-processor 162 is coupled with image segmentor 164.
  • Image segmentor 164 is coupled with segments identifier 166.
  • Segments identifier 166 is coupled with inconstancy detector 168.
  • Pre-processor 162 receives a sequence of images and performs the operations (e.g., co-registering, smoothing and enhancing) required to prepare the images for inconstancy detection.
  • Pre-processor 162 provides the prepared images to image segmentor 164.
  • Image segmentor 164 segments the images at multiple segmentation scales.
  • Image segmentor 164 provides the segmented images to segments identifier 166. Segments identifier 166 identifies segments of interest from each of the images and provides the segmented images, with the segments of interest designated, to inconstancy detector 168.
  • Inconstancy detector 168 declares a segment constant or inconstant.
  • Inconstancy detector 168 provides a representation, indicating the inconstant segments.
  • Figure 4 is a schematic illustration of a method for detecting inconstancy between images, operative in accordance with another embodiment of the disclosed technique.
  • a plurality of images to be compared for inconstancies are pre-processed.
  • Such pre-processing is directed at preparing the images for inconstancy detection.
  • a preprocessing sub-procedure may include co-registration. Co-registration is aimed at aligning the image coordinate systems.
  • Pre-processing may further include a smoothing sub-procedure. Smoothing is aimed at removing noise artifacts from the images.
  • Pre-processing may further include distortion correction, enhancement and any other operation required to prepare the images for inconstancy detection.
  • Pre-processor 162 pre-processes a plurality of images to be compared for inconstancies.
  • a plurality of prepared images are segmented. Such segmentation is aimed at dividing the images into regions (i.e., segments).
  • The images are segmented a plurality of times, each at a different scale.
  • a segment at any scale may represent a meaningful object in the scene.
  • image segmentor 164 segments the prepared images, at a plurality of different scales.
  • Segments of interest are selected in the multi-scale segmented images. These segments are selected from any of the segmentation scales, according to segment characteristics, which may include color, shape, size and texture. With reference to Figure 3, segments identifier 166 selects the segments of interest from a plurality of segmentation scales, according to segment characteristics.
  • In procedure 186, an attempt is made to identify, for each selected segment of interest in one image, a corresponding group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other image.
  • the group of pixels may be a segment in the other image.
  • the attempt to identify a corresponding segment is based on location of the segments in the images.
  • the attempt to identify a corresponding segment is further based on the characteristics of the segments, and correlation between the pixels of the segments.
  • segments identifier 166 attempts to identify for each selected segment of interest in one image, a corresponding selected segment in the other images.
  • In procedure 188, a selected segment of interest in one image is disregarded if a corresponding segment is identified in the other images.
  • the segment is disregarded, if the corresponding segment in the other image is identified in essentially the same location with essentially the same segment characteristics as the selected segment.
  • the corresponding segment may be a selected segment of interest.
  • the segment characteristics may include color, size, shape and texture.
  • inconstancy detector 168 disregards a segment according to the compared segments' locations and characteristics.
  • a selected segment in one image is disregarded, if the pixels of the segment are correlated with a group of pixels used to define a substantially similar segment, at essentially the same location, in the other image.
  • a segment is disregarded, if a group of pixels in the other images, correlated with the pixels of the segment, exists in essentially the same location of the segment.
  • inconstancy detector 168 disregards a segment according to the compared segments' locations and pixel correlation.
  • a selected segment is declared inconstant, if a corresponding selected segment, which agrees in location, color, shape and size, was not identified in the other images.
  • a selected segment is further declared inconstant if a group of pixels, correlated with the pixels of the segment was not identified in essentially the same location in the other images.
  • inconstancy detector 168 declares a segment inconstant.
  • The inconstant objects are represented. The inconstant objects may be represented as a list, marked on an image or a map, alerted for, displayed on a video monitor or saved in a computer memory.
  • inconstancy detector 168 provides a representation of the inconstant objects.
  • Reference is now made to Figures 5A, 5B, 5C, 6A, 6B, 6C and 7.
  • Figures 5A and 6A are illustrations of two images, each acquired at a different time, to be analyzed, according to the disclosed technique.
  • Each of the images of Figures 5A and 6A exhibits objects that are candidates for inconstancy detection. Both images exhibit a highway section with surrounding scene, and some vehicle traffic.
  • Figures 5B and 6B are illustrations of respective segmentations of the images of Figures 5A and 6A, at certain segmentation levels. It is noted that the respective segmentation levels of the images, can be identical or different. Some of the segments represent the vehicles on the highway section.
  • Figures 5C and 6C provide object images which demonstrate the results of inconstancy analysis and detection, according to the disclosed technique. The detection is presented over the images of Figures 5A and 6A.
  • the objects marked (i.e., by circles) in Figure 5C are objects in Figure 5A which exhibit a change, with respect to the objects of Figure 6A.
  • the objects marked (i.e., by circles) in Figure 6C are objects in Figure 6A which exhibit change, with respect to the objects of Figure 5A.
  • Figure 7 which is the same image as in Figure 6C with the circles from Figure 5C superimposed.
  • Circle 214 represents circle 210 in Figure 5C.
  • Circle 216 represents circle 212 in Figure 6C.
  • Circles 214 and 216 are in close proximity to one another. Therefore, the object marked by circles 210 and 212 may be declared constant in both images. The rest of the marked objects in Figure 5C are declared inconstant in Figure 6C. Similarly, all the objects in Figure 6C, excluding the object marked by circle 212, are declared inconstant in Figure 5C.
  • It will be appreciated by persons skilled in the art that the disclosed technique is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the disclosed technique is defined only by the claims, which follow.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

System for detecting inconstant representations of objects across a plurality of images of substantially the same scene, the system includes an image segmentor and an inconstancy detector, the image segmentor being operative to segment each of the images, into a plurality of segments, thereby producing a respective segmentation representation, the inconstancy detector is coupled with the image segmentor, and is operative to detect segment inconstancy between the segmentation representation of one of the images and the segmentation representation of at least another of the images, thereby identifying inconstant segments, the segment inconstancy is defined by the existence of a certain segment, at a certain location, in one image and the inexistence of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the at least other image.

Description

SYSTEM AND METHOD FOR AUTOMATIC DETECTION OF INCONSTANCIES OF OBJECTS IN A SEQUENCE OF IMAGES
FIELD OF THE DISCLOSED TECHNIQUE
The disclosed technique relates to image processing, in general, and to methods of detecting inconstancies of objects in an image, in particular.
BACKGROUND OF THE DISCLOSED TECHNIQUE
It is a common technique to observe aerial and satellite images to determine the appearance or disappearance of an object or objects in a sequence of images of a scene, including disappearance in one part of the scene and appearance in another part. It is often desired to use an automatic technique to detect such appearances or disappearances. Detecting the appearance or disappearance of an object or objects in a sequence of images of a scene will be referred to hereinafter as inconstancy detection or detecting inconstancies.
Some image sequences may be acquired differently. These differences may consist of time interval, in which the images of a scene were acquired. This time interval may sometimes be in the order of days or even years. The type of the image, such as aerial or satellite may be different. The devices, by which the images were acquired, such as camera, may change as well. The image sequences that were acquired differently may cause complex scenes, which may include rocks, bushes, buildings and vehicles, to appear very different in two different images. These differences may be semantic (i.e., meaningful objects may have appeared or disappeared). These differences may further be different light conditions, changes in shadows, change of viewpoint, seasonal changes of the scene and distortions of the scene. Some image sequences may be acquired by different imaging devices. These different imaging devices may employ different types of sensors. These different types of sensors may be sensors operative at different spectrums (e.g., one imaging device is operative at the visible light spectrum and another imaging device is operative at the infrared spectrum), sensors that are operative with different resolutions, sensors from different vendors or sensors of different models. These differences, in image acquisition devices or in the image acquisition time, may result in discrepancies in the images.
Inconstancy detection relates closely to the field of change detection, known in the art. Change detection is directed to comparing between different images of the same scene, and detecting regions in these images, in which change has occurred. These different images are either acquired at different times or by different image acquisition devices. Techniques for detecting changes between two images, which are known in the art, are based on examining the properties of individual picture elements (i.e., pixels). A decision is made whether change has occurred in a pixel. The publication "Image Change Detection Algorithms: A Systematic Survey" by Radke et al. provides an overview of such techniques. This publication can be found at the following address: http://www.ecse.rpi.edu/homepages/rjradke/papers/radketip04.pdf. It is often advantageous to detect changes based on examining a region or regions in the images. Image segmentation is a technique that divides the image into regions (i.e., segments) based on colour, texture, brightness and other properties. Each segment may represent a meaningful object in the image such as a building, a car or a field. Multi-scale segmentation is a technique of creating multiple segmentations of the image. Each segmentation is of a different scale (i.e., the segmentation is finer or coarser with respect to the segment size). These techniques measure the differences between a segment in one image and a corresponding segment in another image. The system may interpret (i.e., classify) the differences as changes, for example in vegetation growth. European patent 1217580 issued to Kim et al., entitled "Method And Apparatus For Measuring Colour-Texture Distance And Image Segmentation Based On Said Measure", directs to a system and method for multi-scale segmentation. The system pre-processes the image and calculates a colour measure and a texture measure for each pixel.
The system calculates a colour distance and a texture distance, between two pixels. The system adds the colour distance and the texture distance to form the colour-texture distance between two pixels. The system considers two pixels as belonging to the same segment, if their colour-texture distance does not exceed a certain threshold. A large threshold value will cause a coarser segmentation (i.e., the segments will be larger). The system creates an image graph describing the relationship between the segments. The image graph further contains information about each segment. The system refines the segmentation by merging neighbouring segments with similar colour-texture distances based on a second threshold and updates the image graph.
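By way of illustration, the thresholded colour-texture distance described above can be sketched as follows. This is a minimal sketch, not the patented method: the weighted-sum form, the weights and the threshold value are assumptions, and each pixel is reduced to a single colour value and a single texture value.

```python
def color_texture_distance(p, q, w_color=0.5, w_texture=0.5):
    """Weighted sum of a colour distance and a texture distance between
    two pixels, each given as a (color, texture) feature tuple.
    The weights and the absolute-difference form are illustrative."""
    color_d = abs(p[0] - q[0])
    texture_d = abs(p[1] - q[1])
    return w_color * color_d + w_texture * texture_d

def same_segment(p, q, threshold=10.0):
    """Two pixels join the same segment when their colour-texture
    distance does not exceed the threshold; a larger threshold
    yields a coarser segmentation."""
    return color_texture_distance(p, q) <= threshold
```

A larger `threshold` merges more pixel pairs, directly reproducing the coarser-segmentation behaviour noted above.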
The publication "Comparison Of Object Oriented Classification Techniques And Standard Image Analysis for The Use Of Change Detection Between SPOT Multispectral Satellite Images and Aerial Photos" by G. Willhauck in ISPRS Vol. XXXIII, 2000, describes the use of multi-scale segmentation for the purpose of change detection. Specifically, the publication is directed at a method for detecting the areas deforested since nineteen sixty in the temperate forest in Tierra del Fuego in Argentina. The technique uses commercial computer software. The technique uses three recent satellite images, and one aerial photo from nineteen sixty. The technique uses the computer software to segment the aerial image from nineteen sixty into two regions, and classifies the segments as forest and non-forest. The software segments the recent satellite images at a finer scale based on the coarser segmentation of the aerial image. The software classifies segments of non-forest originating from a forest segment as deforested areas. Thus, changes between the image acquired in nineteen sixty and the current images are detected.
SUMMARY OF THE PRESENT DISCLOSED TECHNIQUE
It is an object of the disclosed technique to provide a novel system and method for detecting inconstancies between images of the same scene. In accordance with an aspect of the disclosed technique, there is thus provided a system for detecting inconstant representations of objects across a plurality of images of substantially the same scene. The system includes an image segmentor and an inconstancy detector. The image segmentor is coupled with the inconstancy detector. The image segmentor segments each of the images, into a plurality of segments, thereby producing a respective segmentation representation. The inconstancy detector detects segment inconstancy between the segmentation representation of one of the images and the segmentation representation of at least another image, thereby identifying inconstant segments. Segment inconstancy is defined by the existence of a certain segment, at a certain location, in one image and the inexistence of a substantially similar segment or of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other images.
According to another aspect of the disclosed technique there is thus provided a method for detecting inconstant representations of objects across a plurality of images of substantially the same scene. The method includes the procedures of segmenting each of the images into a plurality of segments, thereby producing a respective segmentation representation, and detecting segment inconstancy. Segment inconstancy is detected between the segmentation representation of one of the images and the segmentation representation of at least another image. The procedure of detecting segment inconstancy identifies inconstant segments. Segment inconstancy is defined by the existence of a certain segment, at a certain location, in one image and the inexistence of a substantially similar segment or of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other images.
According to a further aspect of the disclosed technique there is thus provided a system for detecting inconstant representations of objects across a plurality of images of substantially the same scene. The system includes an image segmentor, a segment identifier and an inconstancy detector. The segment identifier is coupled with the image segmentor and with the inconstancy detector. The image segmentor segments each of the images, into a plurality of segments, thereby producing a respective segmentation representation. The segment identifier identifies, in each of the images, segments of interest, with essentially the same segment characteristics. The inconstancy detector detects inconstancy between the segmentation representation of one of the images and the segmentation representation of at least another image, thereby identifying inconstant segments. Segment inconstancy is defined by the existence of a certain segment of interest, at a certain location, in one image and the inexistence of a substantially similar segment or of a group of pixels which can be used to define a respective segment being substantially similar to a certain segment of interest, at essentially the same location, in the other images.
According to another aspect of the disclosed technique there is thus provided a method for detecting inconstant representations of objects across a plurality of images of substantially the same scene. The method includes the procedures of segmenting each of said images into a plurality of segments, thereby producing a respective segmentation representation, identifying segments of interest with essentially the same segment characteristics in each of the images, and detecting segment inconstancy. Segment inconstancy is detected between the segmentation representation of one of the images and the segmentation representation of at least another image. The procedure of detecting segment inconstancy identifies inconstant segments. Segment inconstancy is defined by the existence of a certain segment, at a certain location, in one image and the inexistence of a substantially similar segment or of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other images.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosed technique will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which:
Figure 1 is a schematic illustration of a system for detecting inconstancy between images constructed and operative in accordance with an embodiment of the disclosed technique;
Figure 2 is a schematic illustration of a method for detecting inconstancy between images, constructed and operative in accordance with another embodiment of the disclosed technique;
Figure 3 is a schematic illustration of a system for detecting inconstancy between images, constructed and operative in accordance with a further embodiment of the disclosed technique;
Figure 4 is a schematic illustration of a method for detecting inconstancy between images, constructed and operative in accordance with another embodiment of the disclosed technique;
Figures 5A and 6A are illustrations of two images, each acquired at a different time, to be analyzed, according to the disclosed technique;
Figures 5B and 6B are illustrations of respective segmentations of the images of Figures 5A and 6A, at certain segmentation levels, according to the disclosed technique;
Figures 5C and 6C provide object images which demonstrate the results of inconstancy analysis and detection, according to the disclosed technique; and
Figure 7 is the same image as in Figure 6C, with the circles from Figure 5C superimposed, according to the disclosed technique.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The disclosed technique overcomes the disadvantages of the prior art by providing a system and method for automatically detecting inconstancies of selected segments, representing objects in a scene, in a sequence of multi-scale segmented images. The disclosed technique compares selected segments in one image with segments from multiple segmentation scales in another image.
The system according to the disclosed technique, detects inconstancies between images, which were acquired at different time instances or by different imaging devices. The system may further detect inconstancies between an image, of substantially the same scene, acquired at one time instance with a first imaging device, and images acquired at other time instances with at least a second imaging device. For example, one image may be acquired by a camera employing a first type of sensor (e.g., a visible light sensor) at a first point in time, and another image may be acquired by a camera employing a second type of sensor (e.g., an infrared sensor) at a second point in time. The different imaging devices may employ different types of sensors. These different types of sensors may be sensors operative at different spectrums (e.g., one imaging device is operative at the visible light spectrum and another imaging device is operative at the infrared spectrum), sensors that are operative with different resolutions, sensors from different vendors or sensors of different models.
The disclosed technique may be adapted for civil or military purposes (e.g., detecting changes in general scenery), medical purposes (e.g., detecting changes in tissues), industrial purposes (e.g., detecting the appearance over time of failures such as cracks in structures using X-ray imaging), and the like. Accordingly, the types of imaging system (i.e., and the respective types of sensors used) which may be employed by the disclosed technique for acquiring these images, can be visible light imaging systems, near infrared imaging systems, microbolometer imaging systems, ultraviolet imaging systems, X-ray imaging systems, MRI imaging systems, ultrasound imaging systems, and the like. The different images, in the image sequences, may include discrepancies in image properties, such as contrast, luminance, chrominance, resolution, and the like. However, actual objects which appear in the same place in the scene, in different images acquired by different imaging devices, are likely to result in substantially similar segments in the respective segmented images, regardless of the acquiring imaging device. The similarities of the segments are based on certain segment characteristics (e.g., color, size, shape and texture).
A system, according to an embodiment of the disclosed technique, initially co-registers the images, to approximately align the image coordinate systems. The system may further smooth the images. The system segments the images multiple times, each at a different scale. Each segment may represent a meaningful object in the scene. The system attempts to identify, for each segment at each segmentation scale, a corresponding segment from any of the segmentation scales in the other images. The system disregards segments which agree in location, shape and size. The system further disregards segments in one image, for which the pixels of the segment are correlated with a group of pixels, which can be used to define a substantially similar segment at essentially the same location in the other images (i.e., a segment does not necessarily exist in the respective area of the other image). The system retains the segments for which a corresponding segment was not identified. The system further retains segments, for which the pixels of the segment are not correlated with a group of pixels in the respective area of the other images. The system selects segments from the retained segments according to color, texture, shape and size criteria. The system declares the selected segments inconstant. The system described above is able to find inconstancies between images which were acquired by different imaging devices or different sensors or sensor types. The different images may include many differences in contrast, in light level, in resolution and in other image properties, but real objects in the scene cause the segmentor to create similar segments regardless of the imaging device. According to another embodiment of the disclosed technique, after segmenting the images, the system selects segments of interest according to color, size, shape and texture criteria. The system disregards segments for which a corresponding segment was identified in the other images.
The system further disregards segments in one image, for which the pixels of the segment are correlated with a group of pixels, which can be used to define a substantially similar segment at essentially the same location in the other images.
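The retain-or-disregard logic described above can be sketched as follows. This is an illustrative sketch, not the disclosed implementation: segments are represented as plain dictionaries, the matching predicate is supplied by the caller, and all names and the data layout are assumptions.

```python
def find_inconstant_segments(segments, other_scales, is_match):
    """Retain every segment of one image for which no corresponding
    segment exists at any segmentation scale of the other image.
    `other_scales` is a list of segment lists (one per scale);
    `is_match` compares location and segment characteristics."""
    retained = []
    for seg in segments:
        if not any(is_match(seg, other)
                   for scale in other_scales
                   for other in scale):
            retained.append(seg)
    return retained
```

A usage example: with `is_match` comparing only segment locations, a segment present in one image but absent from every scale of the other image is retained as a candidate inconstancy.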
Reference is now made to Figure 1, which is a schematic illustration of a system, generally referenced 100, constructed and operative in accordance with an embodiment of the disclosed technique. System 100 receives a sequence of images as its input and outputs a sequence of images indicating the inconstant objects in the scene. System 100 includes a pre-processor 102, an image segmentor 104, an inconstancy detector 106 and a segments identifier 108. Pre-processor 102 is coupled with image segmentor 104. Image segmentor 104 is coupled with inconstancy detector 106. Inconstancy detector 106 is coupled with segments identifier 108.
Pre-processor 102 receives a sequence of images, and performs operations (e.g., co-registering, smoothing and enhancing) which are required to prepare the images for inconstancy detection. Pre-processor 102 provides the prepared images to image segmentor 104. Image segmentor 104 segments the images at multiple segmentation scales. Image segmentor 104 provides segmented images to inconstancy detector 106. Inconstancy detector 106 declares a segment constant or inconstant. Inconstancy detector 106 provides the segmented images with designated inconstant segments to segments identifier 108. Segments identifier 108 identifies segments of interest from each of the images and provides a representation, indicating the inconstant segments.
Reference is now made to Figure 2, which is a schematic illustration of a method for detecting inconstancy between images, operative in accordance with another embodiment of the disclosed technique. In procedure 120, a plurality of images to be compared for inconstancies are pre-processed. Such pre-processing is directed at preparing the images for inconstancy detection. For example, a pre-processing sub-procedure may include co-registration. Co-registration is aimed at aligning the image coordinate systems. Pre-processing may further include a smoothing sub-procedure. Smoothing is aimed at removing noise artifacts from the images. Pre-processing may further include distortion correction, enhancement and any other operation required to prepare the images for inconstancy detection. With reference to Figure 1, pre-processor 102 pre-processes a plurality of images to be compared for inconstancies.
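The smoothing sub-procedure of procedure 120 may, for example, be sketched as a simple mean filter (co-registration, distortion correction and enhancement are omitted here). The 3x3 kernel and the unchanged border pixels are illustrative assumptions; practical systems may use Gaussian or edge-preserving filters instead.

```python
def smooth(image):
    """3x3 mean filter over a 2D intensity grid (list of rows).
    Border pixels are left unchanged in this illustrative sketch."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(image[y + dy][x + dx]
                            for dy in (-1, 0, 1)
                            for dx in (-1, 0, 1)) / 9.0
    return out
```

A single-pixel noise spike is spread over its neighbourhood, which is exactly the artifact-removal effect smoothing is aimed at.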
In procedure 122, a plurality of prepared images are segmented. Such segmentation is aimed at dividing the images into regions (i.e., segments). The images are segmented a plurality of times, each at a different scale. A segment at any scale may represent a meaningful object in the scene. With reference to Figure 1, image segmentor 104 segments the prepared images at a plurality of different scales.
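Multi-scale segmentation (procedure 122) can be sketched by running one segmentation per threshold, from fine to coarse. The flood-fill intensity criterion and the threshold values below are illustrative assumptions; as the background notes, practical segmentation may also use colour and texture.

```python
def segment(image, threshold):
    """Flood-fill segmentation of a 2D intensity grid: 4-connected
    pixels whose intensities differ by less than the threshold are
    grouped into one segment. Returns (label grid, segment count);
    a larger threshold gives a coarser segmentation."""
    h, w = len(image), len(image[0])
    labels = [[None] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] is not None:
                continue
            labels[sy][sx] = next_label
            stack = [(sy, sx)]
            while stack:
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] is None
                            and abs(image[ny][nx] - image[y][x]) < threshold):
                        labels[ny][nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels, next_label

def multi_scale_segment(image, thresholds=(5, 20, 80)):
    """One segmentation per scale, from fine to coarse."""
    return [segment(image, t) for t in thresholds]
```

Each entry of the returned list is one segmentation representation of the image, at a different scale, as procedure 122 requires.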
In procedure 124, for each segment in one image, an attempt is made to identify a corresponding segment in the other images. The attempt to identify a corresponding segment is made at any of the segmentation scales. A segment is identified, if a corresponding segment in the other images exists in essentially the same location with essentially the same segment characteristics as the selected segment. The segment characteristics may include color, size, shape and texture. With reference to Figure 1, inconstancy detector 106 attempts to identify a corresponding segment in the other images. In procedure 126, for each segment in one image, an attempt is made to identify, in the other images, a group of pixels correlated with the pixels of the segment, which can be used to define a substantially similar segment at essentially the same location. A segment is identified, if such a group of pixels exists in essentially the same location as the segment. With reference to Figure 1, inconstancy detector 106 attempts to identify such a group of pixels in the other images.
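The pixel-correlation test of procedure 126 can be sketched with a Pearson correlation between the segment's pixels and a candidate group of pixels at the same location in the other image; a value near 1 suggests the group could define a substantially similar segment. Flattening the pixel groups into equal-length lists is an illustrative simplification, not the disclosed implementation.

```python
import math

def normalized_correlation(a, b):
    """Pearson correlation coefficient between two equally sized
    lists of pixel intensities; returns 0.0 for flat inputs."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    dev_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    dev_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    if dev_a == 0 or dev_b == 0:
        return 0.0
    return cov / (dev_a * dev_b)
```

Because the measure is normalized, it tolerates the contrast and brightness discrepancies between images acquired by different sensors that the detailed description discusses.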
In procedure 128, the segments in one image, for which a corresponding segment that agrees in location, color, shape, size and texture does not exist in the other images, are retained. The segments in one image, for which neither a corresponding segment nor a group of pixels correlated with the pixels of the segment was identified in the other images, are further retained. With reference to Figure 1, inconstancy detector 106 retains the segments in one image, for which a corresponding segment in the other images was not identified.
In procedure 130, segments of interest are selected from the retained segments. The segments are selected according to segment characteristics. Segment characteristics may include color, size, shape and texture. With reference to Figure 1, segments identifier 108 selects the segments of interest.
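The selection of segments of interest by segment characteristics (procedure 130) can be sketched as a simple filter. The size bounds and colour labels below are illustrative assumptions; shape and texture criteria would be applied analogously.

```python
def select_segments_of_interest(segments, min_size=10, max_size=500,
                                allowed_colors=("white", "gray")):
    """Keep only retained segments whose size lies within the given
    bounds and whose dominant colour is in the allowed set. All
    thresholds and the segment dictionary layout are illustrative."""
    return [s for s in segments
            if min_size <= s["size"] <= max_size
            and s["color"] in allowed_colors]
```

Segments passing this filter are the ones subsequently declared inconstant in procedure 132.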
In procedure 132, the selected segments are declared inconstant. With reference to Figure 1, segments identifier 108 declares the selected segments inconstant. In procedure 134, the inconstant objects are represented. The inconstant objects may be represented as a list, marked on an image or a map, alerted for, displayed on a video monitor or saved in a computer memory. With reference to Figure 1, segments identifier 108 provides a representation of the inconstant objects. In order to reduce computational complexity, the system, according to a further embodiment of the disclosed technique, may first select segments of interest. The system then detects if these segments of interest are constant or not.
Reference is now made to Figure 3, which is a schematic illustration of a system, generally referenced 160, constructed and operative in accordance with a further embodiment of the disclosed technique. System 160 receives a sequence of images as its input and outputs a sequence of images indicating the inconstant objects in the scene. System 160 includes a pre-processor 162, an image segmentor 164, a segments identifier 166 and an inconstancy detector 168. Pre-processor 162 is coupled with image segmentor 164. Image segmentor 164 is coupled with segments identifier 166. Segments identifier 166 is coupled with inconstancy detector 168.
Pre-processor 162 receives a sequence of images and performs operations (e.g., co-registering, smoothing and enhancing) which are required to prepare the images for inconstancy detection. Pre-processor 162 provides the prepared images to image segmentor 164. Image segmentor 164 segments the images at multiple segmentation scales. Image segmentor 164 provides segmented images to segments identifier 166. Segments identifier 166 identifies segments of interest from each of the images and provides the segmented images with the segments of interest designated to inconstancy detector 168. Inconstancy detector 168 declares a segment constant or inconstant. Inconstancy detector 168 provides a representation, indicating the inconstant segments. Reference is now made to Figure 4, which is a schematic illustration of a method for detecting inconstancy between images, operative in accordance with another embodiment of the disclosed technique.
In procedure 180, a plurality of images to be compared for inconstancies are pre-processed. Such pre-processing is directed at preparing the images for inconstancy detection. For example, a pre-processing sub-procedure may include co-registration. Co-registration is aimed at aligning the image coordinate systems. Pre-processing may further include a smoothing sub-procedure. Smoothing is aimed at removing noise artifacts from the images. Pre-processing may further include distortion correction, enhancement and any other operation required to prepare the images for inconstancy detection. With reference to Figure 3, pre-processor 162 pre-processes a plurality of images to be compared for inconstancies.
In procedure 182, a plurality of prepared images are segmented. Such segmentation is aimed at dividing the images into regions (i.e., segments). The images are segmented a plurality of times, each at a different scale. A segment at any scale may represent a meaningful object in the scene. With reference to Figure 3, image segmentor 164 segments the prepared images at a plurality of different scales.
In procedure 184, segments of interest are selected in the multi-scale segmented images. These segments are selected from any of the segmentation scales. The segments are selected according to the segment characteristics. The segment characteristics may include color, shape, size and texture. With reference to Figure 3, segments identifier 166 selects the segments of interest from a plurality of segmentation scales, according to segment characteristics.
In procedure 186, an attempt is made to identify, for each selected segment of interest in one image, a corresponding group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other image. The group of pixels may be a segment in the other image. The attempt to identify a corresponding segment is based on the location of the segments in the images. The attempt to identify a corresponding segment is further based on the characteristics of the segments, and correlation between the pixels of the segments. With reference to Figure 3, segments identifier 166 attempts to identify, for each selected segment of interest in one image, a corresponding selected segment in the other images. In procedure 188, a selected segment of interest is disregarded in one image, if a corresponding segment is identified in the other images. The segment is disregarded, if the corresponding segment in the other image is identified in essentially the same location with essentially the same segment characteristics as the selected segment. The corresponding segment may be a selected segment of interest. The segment characteristics may include color, size, shape and texture. With reference to Figure 3, inconstancy detector 168 disregards a segment according to the compared segments' location and characteristics. In procedure 190, a selected segment in one image is disregarded, if the pixels of the segment are correlated with a group of pixels used to define a substantially similar segment, at essentially the same location, in the other image. A segment is disregarded, if a group of pixels in the other images, correlated with the pixels of the segment, exists in essentially the same location as the segment. With reference to Figure 3, inconstancy detector 168 disregards a segment according to the compared segments' location and pixel correlation.
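The location-and-characteristics comparison used to disregard a segment in procedure 188 can be sketched as a predicate over two segments. Here only centre location and size are compared, and the tolerances are illustrative assumptions; colour, shape and texture tests would follow the same pattern.

```python
def segments_match(seg_a, seg_b, loc_tol=5.0, size_tol=0.2):
    """True when two segments lie at essentially the same location
    (centres within loc_tol pixels) and have essentially the same
    size (relative difference within size_tol). Dictionary layout
    and tolerance values are illustrative assumptions."""
    dx = seg_a["center"][0] - seg_b["center"][0]
    dy = seg_a["center"][1] - seg_b["center"][1]
    if (dx * dx + dy * dy) ** 0.5 > loc_tol:
        return False
    rel = abs(seg_a["size"] - seg_b["size"]) / max(seg_a["size"],
                                                   seg_b["size"])
    return rel <= size_tol
```

A selected segment for which this predicate is false against every candidate in the other images, and whose pixels are also uncorrelated there, is declared inconstant in procedure 192.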
In procedure 192, a selected segment is declared inconstant, if a corresponding selected segment, which agrees in location, color, shape and size, was not identified in the other images. A selected segment is further declared inconstant if a group of pixels, correlated with the pixels of the segment, was not identified in essentially the same location in the other images. With reference to Figure 3, inconstancy detector 168 declares a segment inconstant. In procedure 194, the inconstant objects are represented. The inconstant objects may be represented as a list, marked on an image or a map, alerted for, displayed on a video monitor or saved in a computer memory. With reference to Figure 3, inconstancy detector 168 provides a representation of the inconstant objects. Reference is now made to Figures 5A, 5B, 5C, 6A, 6B, 6C and 7.
Figures 5A and 6A are illustrations of two images, each acquired at a different time, to be analyzed, according to the disclosed technique. Each of the images of Figures 5A and 6A exhibits objects that are candidates for inconstancy detection. Both images exhibit a highway section with surrounding scene, and some vehicle traffic. Figures 5B and 6B are illustrations of respective segmentations of the images of Figures 5A and 6A, at certain segmentation levels. It is noted that the respective segmentation levels of the images can be identical or different. Some of the segments represent the vehicles on the highway section. Figures 5C and 6C provide object images which demonstrate the results of inconstancy analysis and detection, according to the disclosed technique. The detection is presented over the images of Figures 5A and 6A. The objects marked (i.e., by circles) in Figure 5C are objects in Figure 5A which exhibit a change, with respect to the objects of Figure 6A. The objects marked (i.e., by circles) in Figure 6C are objects in Figure 6A which exhibit a change, with respect to the objects of Figure 5A.
Figure 7 is the same image as in Figure 6C, with the circles from Figure 5C superimposed. Circle 214 represents circle 210 in Figure 5C. Circle 216 represents circle 212 in Figure 6C. Circles 214 and 216 are in close proximity to one another. Therefore, the object marked by the circles 210 and 212 may be declared constant in both images. The rest of the marked objects in Figure 5C are declared inconstant in Figure 6C. Similarly, all the objects in Figure 6C, excluding the object marked by circle 212, are declared inconstant in Figure 5C. It will be appreciated by persons skilled in the art that the disclosed technique is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the disclosed technique is defined only by the claims, which follow.

Claims

1. System for detecting inconstant representations of objects across a plurality of images of substantially the same scene, the system comprising: an image segmentor for segmenting each of said images, into a plurality of segments, thereby producing a respective segmentation representation; an inconstancy detector, coupled with said image segmentor for detecting segment inconstancy between the segmentation representation of one of said images and the segmentation representation of at least another of said images, thereby identifying inconstant segments, wherein said segment inconstancy is defined by the existence of a certain segment, at a certain location, in said one image and the inexistence of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in said at least other image.
2. The system according to claim 1, wherein said segment inconstancy is further defined by the existence of a certain segment, at a certain location, in said one image and the inexistence of a substantially similar segment, at essentially the same location, in said at least other image.
3. The system according to claim 1, wherein said inconstancy detector is further operative for detecting segment inconstancy between the segmentation representation of said at least other image and said one image, thereby identifying inconstant segments.
4. The system according to claim 1, wherein said image segmentor produces said segmentation representation, at multiple segmentation levels.
5. The system according to claim 1, further comprising a preprocessor, coupled with said image segmentor for preparing said images for inconstancy detection, wherein said preparing includes at least one of smoothing said images and registering said images.
6. The system according to claim 1, further comprising a segment identifier, coupled with said inconstancy detector for identifying segments of interest from said inconstant segments.
7. The system according to claim 6, wherein said segments of interest are identified according to at least one characteristic, selected from the group consisting of: size; color; texture; and shape.
8. The system according to claim 1, wherein said segment inconstancy is further defined according to at least one characteristic, selected from the group consisting of: size; color; texture; shape; and correlation.
9. The system according to claim 1, wherein said at least one of said plurality of images was acquired by a first image acquisition device and at least another of said plurality of images was acquired by a second image acquisition device.
10. The system according to claim 9, wherein said first image acquisition device employs a first type of sensor and said second image acquisition device employs at least a second type of sensor.
11. The system according to claim 1, wherein said at least one of said plurality of images was acquired at a first time instance and at least a second of said plurality of images was acquired at a second time instance.
12. A method for detecting inconstant representations of objects across a plurality of images of substantially the same scene, the method comprising the procedures: segmenting each of said images, into a plurality of segments, thereby producing a respective segmentation representation; detecting segment inconstancy between the segmentation representation of one of said images and the segmentation representation of at least another of said images, thereby identifying inconstant segments, wherein said segment inconstancy is defined by the existence of a certain segment, at a certain location, in said one image and the inexistence of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in said at least one other image.
13. The method according to claim 12, wherein said segment inconstancy is further defined by the existence of a certain segment, at a certain location, in said one image and the inexistence of a substantially similar segment, at essentially the same location, in said at least other image.
14. The method according to claim 12, wherein said procedure of detecting segment inconstancy is further performed between the segmentation representations of said at least other image and said one image.
15. The method according to claim 12, wherein said segmenting produces said segmentation representation, at multiple segmentation levels.
16. The method according to claim 12, further comprising the procedure of preprocessing for preparing said images for inconstancy detection, wherein said preparing includes at least one of smoothing said images and registering said images.
17. The method according to claim 12, further comprising the procedure of identifying segments of interest from said inconstant segments.
18. The method according to claim 17, wherein said segments of interest are identified according to at least one characteristic, selected from the group consisting of: size; color; texture; and shape.
19. The method according to claim 12, wherein said segment inconstancy is further defined according to at least one characteristic, selected from the group consisting of: size; color; texture; shape; and correlation.
20. The method according to claim 12, wherein said at least one of said plurality of images was acquired by a first image acquisition device and at least another of said plurality of images was acquired by a second image acquisition device.
21. The method according to claim 20, wherein said first image acquisition device employs a first type of sensor and said second image acquisition device employs at least a second type of sensor.
22. The method according to claim 12, wherein said at least one of said plurality of images was acquired at a first time instance and at least a second of said plurality of images was acquired at a second time instance.
23. System for detecting inconstant representations of objects across a plurality of images of substantially the same scene, each of the images acquired at a different time, the system comprising: an image segmentor for segmenting each of said images, into a plurality of segments, thereby producing a respective segmentation representation; a segments identifier coupled with said image segmentor for identifying, in each of said images, segments of interest, with essentially the same segment characteristics; an inconstancy detector, coupled with said segments identifier for detecting inconstancy between the segmentation representation of one of said images and the segmentation representation of at least another of said images, thereby identifying inconstant segments, wherein said segment inconstancy is defined by the existence of a certain segment of interest, at a certain location, in said one image and the inexistence of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in said at least other image.
24. The system according to claim 23, wherein said segment inconstancy is further defined by the existence of a certain segment of interest, at a certain location, in said one image and the inexistence of a substantially similar segment, at essentially the same location, in said at least other image.
25. The system according to claim 23, wherein said substantially similar segment is a segment of interest.
26. The system according to claim 23, wherein said inconstancy detector is further operative for detecting segment inconstancy between the segmentation representations of said at least other image and said one image, thereby identifying inconstant segments.
27. The system according to claim 23, wherein said image segmentor produces said segmentation representation, at multiple segmentation levels.
28. The system according to claim 23, further comprising a preprocessor, coupled with said image segmentor for preparing said images for inconstancy detection, wherein said preparing includes at least one of smoothing said images and registering said images.
29. The system according to claim 23, wherein said segments of interest are identified according to at least one characteristic, selected from the group consisting of: size; color; texture; and shape.
30. The system according to claim 23, wherein said segment inconstancy is further defined according to at least one characteristic, selected from the group consisting of: size; color; texture; shape; and correlation.
31. The system according to claim 23, wherein said at least one of said plurality of images was acquired by a first image acquisition device and at least another of said plurality of images was acquired by a second image acquisition device.
32. The system according to claim 31, wherein said first image acquisition device employs a first type of sensor and said second image acquisition device employs at least a second type of sensor.
33. The system according to claim 23, wherein said at least one of said plurality of images was acquired at a first time instance and at least a second of said plurality of images was acquired at a second time instance.
34. A method for detecting inconstant representations of objects across a plurality of images of substantially the same scene, the method comprising the procedures: segmenting each of said images into a plurality of segments, thereby producing a respective segmentation representation; identifying segments of interest, with essentially the same segment characteristics, in each of said plurality of images; detecting segment inconstancy of said identified segments between the segmentation representation of one of said images and the segmentation representation of at least another of said images, thereby identifying inconstant segments, wherein said segment inconstancy is defined by the existence of a segment of interest, at a certain location, in said one image and the inexistence of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in said at least other image.
35. The method according to claim 34, wherein said segment inconstancy is further defined by the existence of a certain segment of interest, at a certain location, in said one image and the inexistence of a substantially similar segment, at essentially the same location, in said at least other image.
36. The method according to claim 35, wherein said substantially similar segment is a segment of interest.
37. The method according to claim 34, wherein said procedure of detecting segment inconstancy is further performed between the segmentation representations of said at least other image and said one image.
38. The method according to claim 34, wherein said image segmenting produces said segmentation representation, at multiple segmentation levels.
39. The method according to claim 34, wherein said detecting segment inconstancy produces segments detected as inconstant in said images.
40. The method according to claim 34, further comprising the procedure of preprocessing for preparing said images for inconstancy detection, wherein said preparing includes at least one of smoothing said images and registering said images.
41. The method according to claim 34, wherein said segments of interest are identified according to at least one characteristic, selected from the group consisting of: size; color; texture; and shape.
42. The method according to claim 34, wherein said segment inconstancy is further defined according to at least one characteristic, selected from the group consisting of: size; color; texture; shape; and correlation.
43. The method according to claim 34, wherein said at least one of said plurality of images was acquired by a first image acquisition device and at least another of said plurality of images was acquired by a second image acquisition device.
44. The method according to claim 43, wherein said first image acquisition device employs a first type of sensor and said second image acquisition device employs at least a second type of sensor.
45. The method according to claim 34, wherein said at least one of said plurality of images was acquired at a first time instance and at least a second of said plurality of images was acquired at a second time instance.
46. The systems according to any of the claims 1-11 and 23-33 substantially as described hereinabove or as illustrated in any of the drawings.
47. The methods according to any of the claims 12-22 and 34-45 substantially as described hereinabove or as illustrated in any of the drawings.
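As an illustration only, outside the claims proper, the core procedures of method claim 12 — segmenting each image into segments and flagging any segment of one image for which no substantially similar group of pixels exists at essentially the same location in the other image — can be sketched as follows. The sketch is not taken from the specification: the equal-value 4-connected segmentation, the pixel-overlap (Jaccard) similarity test, the 0.5 threshold, and all function names are hypothetical choices.

```python
# Hypothetical sketch of the claimed procedures: segment two images of the
# same scene, then detect "segment inconstancy" between them. Images are 2D
# grids of pixel values; a segment is a 4-connected region of equal value.
from collections import deque

def segment(image):
    """Label 4-connected regions of equal pixel value via flood fill.

    Returns a dict mapping segment label -> set of (row, col) pixels,
    i.e. the "segmentation representation" of the image.
    """
    h, w = len(image), len(image[0])
    labels = [[None] * w for _ in range(h)]
    segments = {}
    next_label = 0
    for r in range(h):
        for c in range(w):
            if labels[r][c] is not None:
                continue
            value = image[r][c]
            labels[r][c] = next_label
            queue = deque([(r, c)])
            pixels = set()
            while queue:
                y, x = queue.popleft()
                pixels.add((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] is None
                            and image[ny][nx] == value):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            segments[next_label] = pixels
            next_label += 1
    return segments

def inconstant_segments(image_a, image_b, overlap_threshold=0.5):
    """Return segments of image_a that are inconstant with respect to
    image_b: no segment of image_b overlaps them strongly enough
    (Jaccard ratio below the threshold) at the same location.
    """
    segs_a = segment(image_a)
    segs_b = segment(image_b)
    inconstant = []
    for pixels in segs_a.values():
        best = max((len(pixels & other) / len(pixels | other)
                    for other in segs_b.values()), default=0.0)
        if best < overlap_threshold:
            inconstant.append(pixels)
    return inconstant
```

Run on two small registered grids where an object (a block of 1s) appears in the first image but not the second, the sketch returns exactly that object's pixel set, while the shared background is reported as constant. Detecting in the opposite direction as well, as in claim 14, is just a second call with the arguments swapped.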
PCT/IL2005/001298 2004-12-05 2005-12-04 System and method for automatic detection of inconstancies of objects in a sequence of images Ceased WO2006059337A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IL165556 2004-12-05
IL165556A IL165556A (en) 2004-12-05 2004-12-05 System and method for automatic detection of inconstancies of objects in a sequence of images

Publications (2)

Publication Number Publication Date
WO2006059337A2 true WO2006059337A2 (en) 2006-06-08
WO2006059337A3 WO2006059337A3 (en) 2006-08-03

Family

ID=36283788

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2005/001298 Ceased WO2006059337A2 (en) 2004-12-05 2005-12-04 System and method for automatic detection of inconstancies of objects in a sequence of images

Country Status (2)

Country Link
IL (1) IL165556A (en)
WO (1) WO2006059337A2 (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1217580A2 (en) 2000-11-13 2002-06-26 SAMSUNG ELECTRONICS Co. Ltd. Method and apparatus for measuring color-texture distance and image segmentation based on said measure

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Comparison Of Object Oriented Classification Techniques And Standard Image Analyses for The Use Of Change Detection Between SPOT Multispectral Satellite Images and Aerial Photos" by G.Willhauck in ISPRS Vo. XXXIII, 2000.
"Image Change Detection Algorithms: A Systematic Survey" by Radke et al. http://www.ecse.rpi.edu/homepages/rjadke/papers/radketip04.pdf

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011084471A1 (en) * 2009-12-17 2011-07-14 Utility Risk Management Corp., Llc Method and system for estimating vegetation growth relative to an object of interest
US8352410B2 (en) 2009-12-17 2013-01-08 Utility Risk Management Corporation, Llc Method and system for estimating vegetation growth relative to an object of interest

Also Published As

Publication number Publication date
IL165556A0 (en) 2006-01-15
IL165556A (en) 2013-08-29
WO2006059337A3 (en) 2006-08-03

Similar Documents

Publication Publication Date Title
JP6497579B2 (en) Image composition system, image composition method, image composition program
JP6554169B2 (en) Object recognition device and object recognition system
JP5542889B2 (en) Image processing device
JP5419432B2 (en) Target area determination method and target area determination apparatus
CN111462128B (en) A pixel-level image segmentation system and method based on multi-modal spectral images
US10079974B2 (en) Image processing apparatus, method, and medium for extracting feature amount of image
JP2003302470A (en) Pedestrian detection device and pedestrian detection method
JP4964171B2 (en) Target region extraction method, apparatus, and program
US11455710B2 (en) Device and method of object detection
EP2124194B1 (en) Method of detecting objects
US7630990B2 (en) Endmember spectrum database construction method, endmember spectrum database construction apparatus and endmember spectrum database construction program
CN112613568B (en) Target recognition method and device based on visible light and infrared multispectral image sequence
JP4946878B2 (en) Image identification apparatus and program
JP7230507B2 (en) Deposit detection device
US7630534B2 (en) Method for radiological image processing
JP7092616B2 (en) Object detection device, object detection method, and object detection program
JP5906696B2 (en) Vehicle periphery photographing apparatus and vehicle periphery image processing method
WO2006059337A2 (en) System and method for automatic detection of inconstancies of objects in a sequence of images
JP2007272292A (en) Shadow recognition method and shadow boundary extraction method
US7346193B2 (en) Method for detecting object traveling direction
CN112571409B (en) Robot control method based on visual SLAM, robot and medium
KR20240057000A (en) A method and apparatus for detecting changes between heterogeneous image data for identifying disaster damage
CN114902282A (en) System and method for efficient sensing of collision threats
WO2024236944A1 (en) Shadow region detection device and shadow region detection method
CN120164106A (en) A method and system for monitoring deformation of engineering buildings

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 05813141

Country of ref document: EP

Kind code of ref document: A2