SYSTEM AND METHOD FOR AUTOMATIC DETECTION OF INCONSTANCIES OF OBJECTS IN A SEQUENCE OF IMAGES
FIELD OF THE DISCLOSED TECHNIQUE The disclosed technique relates to image processing, in general, and to methods of detecting inconstancies of objects in an image, in particular.
BACKGROUND OF THE DISCLOSED TECHNIQUE
It is a common technique to observe aerial and satellite images in order to determine the appearance or disappearance of an object or objects in a sequence of images of a scene, including disappearance in one part of the scene and appearance in another part. It is often desired to employ an automatic technique for detecting such appearances and disappearances. Detecting the appearance or disappearance of an object or objects in a sequence of images of a scene will be referred to hereinafter as inconstancy detection or detecting inconstancies.
The images of a sequence may be acquired under different conditions. For example, the time interval over which the images of a scene were acquired may be on the order of days or even years. The type of image, such as aerial or satellite, may be different. The devices by which the images were acquired, such as cameras, may change as well. Image sequences that were acquired under different conditions may cause complex scenes, which may include rocks, bushes, buildings and vehicles, to appear very different in two different images. Some of these differences may be semantic (i.e., meaningful objects may have appeared or disappeared). Other differences may result from different light conditions, changes in shadows, changes of viewpoint, seasonal changes of the scene and distortions of the scene. Some image sequences may be acquired by different imaging devices. These different imaging devices
may employ different types of sensors. These different types of sensors may be sensors operative at different spectrums (e.g., one imaging device is operative at the visible light spectrum and another imaging device is operative at the infrared spectrum), sensors that are operative with different resolutions, sensors from different vendors or sensors of different models. These differences, in image acquisition devices or in the image acquisition time, may result in discrepancies in the images.
Inconstancy detection relates closely to the field of change detection, known in the art. Change detection is directed to comparing different images of the same scene, and detecting regions in these images in which change has occurred. These different images are either acquired at different times or by different image acquisition devices. Techniques for detecting changes between two images, which are known in the art, are based on examining the properties of individual picture elements (i.e., pixels). A decision is made whether change has occurred in a pixel. The publication "Image Change Detection Algorithms: A Systematic Survey" by Radke et al. provides an overview of such techniques. This publication can be found at the following address: http://www.ecse.rpi.edu/homepages/rjradke/papers/radketip04.pdf It is often advantageous to detect changes based on examining a region or regions in the images. Image segmentation is a technique that divides the image into regions (i.e., segments) based on colour, texture, brightness and other properties. Each segment may represent a meaningful object in the image, such as a building, a car or a field. Multi-scale segmentation is a technique of creating multiple segmentations of the image. Each segmentation is of a different scale (i.e., the segmentation is finer or coarser with respect to the segment size). These techniques measure the differences between a segment in one image and a corresponding segment in another image. The system may interpret (i.e., classify) the differences as changes, for example in vegetation growth.
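The pixel-wise comparison underlying the change detection techniques surveyed above can be illustrated with a minimal sketch. This is an illustrative example only; the images, threshold value and decision rule below are assumptions, not taken from any cited technique. A pixel is declared changed when the absolute intensity difference between two co-registered images exceeds a threshold.

```python
def pixel_change_map(image_a, image_b, threshold):
    """Pixel-wise change detection: mark a pixel as changed when the
    absolute intensity difference between two co-registered greyscale
    images exceeds the given threshold."""
    return [
        [abs(a - b) > threshold for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(image_a, image_b)
    ]

# Two 3x3 greyscale patches; a single pixel brightens from 10 to 200,
# as when a small object appears in the scene.
before = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
after  = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
changes = pixel_change_map(before, after, threshold=50)
```

Such a per-pixel decision is exactly what region-based approaches, discussed next, seek to improve upon.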
European patent 1217580 issued to Kim et al., entitled "Method And Apparatus For Measuring Colour-Texture Distance And Image Segmentation Based On Said Measure", is directed to a system and method for multi-scale segmentation. The system pre-processes the image and calculates a colour measure and a texture measure for each pixel. The system calculates a colour distance and a texture distance between two pixels. The system adds the colour distance and the texture distance to form the colour-texture distance between the two pixels. The system considers two pixels as belonging to the same segment, if their colour-texture distance does not exceed a certain threshold. A large threshold value will cause a coarser segmentation (i.e., the segments will be larger). The system creates an image graph describing the relationship between the segments. The image graph further contains information about each segment. The system refines the segmentation by merging neighbouring segments with similar colour-texture distances based on a second threshold, and updates the image graph.
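The colour-texture distance idea can be illustrated with a simplified sketch. The distance terms, equal weighting and thresholds below are illustrative assumptions and are not the measures of the cited patent; the sketch only shows how a single threshold on a combined pixel-to-pixel distance controls segmentation coarseness.

```python
def segment(pixels, threshold):
    """Flood-fill labelling: two 4-connected pixels belong to the same
    segment when their colour distance plus texture distance does not
    exceed the threshold (illustrative, equally weighted distances)."""
    h, w = len(pixels), len(pixels[0])
    labels = [[None] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] is not None:
                continue
            labels[sy][sx] = next_label      # seed a new segment
            stack = [(sy, sx)]
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] is None:
                        dc = abs(pixels[y][x][0] - pixels[ny][nx][0])  # colour distance
                        dt = abs(pixels[y][x][1] - pixels[ny][nx][1])  # texture distance
                        if dc + dt <= threshold:
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
            next_label += 1
    return labels

# Each pixel is a (colour, texture) pair; the left half of the patch
# differs sharply from the right half.
img = [[(10, 5), (10, 5), (200, 90), (200, 90)],
       [(12, 6), (11, 5), (198, 88), (201, 91)]]
coarse = segment(img, threshold=50)  # large threshold -> fewer, larger segments
fine = segment(img, threshold=1)     # small threshold -> more, smaller segments
```

As the patent notes, the larger threshold yields the coarser segmentation: here the large threshold produces two segments (left half, right half), while the small threshold splits the patch further.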
The publication "Comparison Of Object Oriented Classification Techniques And Standard Image Analysis for The Use Of Change Detection Between SPOT Multispectral Satellite Images and Aerial Photos" by G. Willhauck, in ISPRS Vol. XXXIII, 2000, describes the use of multi-scale segmentation for the purpose of change detection. Specifically, the publication is directed at a method for detecting the areas of the temperate forest in Tierra del Fuego, Argentina, deforested since nineteen sixty. The technique uses commercial computer software. The technique uses three recent satellite images, and one aerial photo from nineteen sixty. The technique uses the computer software to segment the aerial image from nineteen sixty into two regions, and classifies the segments as forest and non-forest. The software segments the recent satellite images at a finer scale, based on the coarser segmentation of the aerial image. The software classifies segments of non-forest originating from a forest segment as deforested areas. Thus,
changes between the image acquired in nineteen sixty and the current images are detected.
SUMMARY OF THE PRESENT DISCLOSED TECHNIQUE
It is an object of the disclosed technique to provide a novel system and method for detecting inconstancies between images of the same scene. In accordance with an aspect of the disclosed technique, there is thus provided a system for detecting inconstant representations of objects across a plurality of images of substantially the same scene. The system includes an image segmentor and an inconstancy detector. The image segmentor is coupled with the inconstancy detector. The image segmentor segments each of the images into a plurality of segments, thereby producing a respective segmentation representation. The inconstancy detector detects segment inconstancy between the segmentation representation of one of the images and the segmentation representation of at least another image, thereby identifying inconstant segments. Segment inconstancy is defined by the existence of a certain segment, at a certain location, in one image and the inexistence of a substantially similar segment or of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other images.
According to another aspect of the disclosed technique there is thus provided a method for detecting inconstant representations of objects across a plurality of images of substantially the same scene. The method includes the procedures of segmenting each of the images into a plurality of segments, thereby producing a respective segmentation representation, and detecting segment inconstancy. Segment inconstancy is detected between the segmentation representation of one of the images and the segmentation representation of at least another image. The procedure of detecting segment inconstancy identifies inconstant segments. Segment inconstancy is defined by the existence of a certain segment, at a certain location, in one image and the inexistence of a substantially similar segment or of a group of pixels which can be used to
define a substantially similar segment, at essentially the same location, in the other images.
According to a further aspect of the disclosed technique there is thus provided a system for detecting inconstant representations of objects across a plurality of images of substantially the same scene. The system includes an image segmentor, a segment identifier and an inconstancy detector. The segment identifier is coupled with the image segmentor and with the inconstancy detector. The image segmentor segments each of the images into a plurality of segments, thereby producing a respective segmentation representation. The segment identifier identifies, in each of the images, segments of interest, with essentially the same segment characteristics. The inconstancy detector detects inconstancy between the segmentation representation of one of the images and the segmentation representation of at least another image, thereby identifying inconstant segments. Inconstancy is defined by the existence of a certain segment of interest, at a certain location, in one image and the inexistence of a substantially similar segment or of a group of pixels which can be used to define a respective segment being substantially similar to a certain segment of interest, at essentially the same location, in the other images.
According to another aspect of the disclosed technique there is thus provided a method for detecting inconstant representations of objects across a plurality of images of substantially the same scene. The method includes the procedures of segmenting each of the images into a plurality of segments, thereby producing a respective segmentation representation, identifying segments of interest with essentially the same segment characteristics in each of the images, and detecting segment inconstancy. Segment inconstancy is detected between the segmentation representation of one of the images and the segmentation representation of at least another image. The procedure of detecting segment inconstancy identifies inconstant segments. Segment inconstancy is
defined by the existence of a certain segment, at a certain location, in one image and the inexistence of a substantially similar segment or of a group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other images.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosed technique will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which:
Figure 1 is a schematic illustration of a system for detecting inconstancy between images constructed and operative in accordance with an embodiment of the disclosed technique;
Figure 2 is a schematic illustration of a method for detecting inconstancy between images, constructed and operative in accordance with another embodiment of the disclosed technique;
Figure 3 is a schematic illustration of a system for detecting inconstancy between images constructed and operative in accordance with a further embodiment of the disclosed technique; Figure 4 is a schematic illustration of a method for detecting inconstancy between images, constructed and operative in accordance with another embodiment of the disclosed technique;
Figures 5A and 6A are illustrations of two images, each acquired at a different time, to be analyzed, according to the disclosed technique; Figures 5B and 6B are illustrations of respective segmentations of the images of Figures 5A and 6A, at certain segmentation levels, according to the disclosed technique;
Figures 5C and 6C provide object images which demonstrate the results of inconstancy analysis and detection, according to the disclosed technique; and
Figure 7 is the same image as in Figure 6C with the circles from Figure 5C superimposed according to the disclosed technique.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The disclosed technique overcomes the disadvantages of the prior art by providing a system and method for automatically detecting inconstancies of selected segments, representing objects in a scene, in a sequence of multi-scale segmented images. The disclosed technique compares selected segments in one image with segments from multiple segmentation scales in another image.
The system according to the disclosed technique, detects inconstancies between images, which were acquired at different time instances or by different imaging devices. The system may further detect inconstancies between an image, of substantially the same scene, acquired at one time instance with a first imaging device, and images acquired at other time instances with at least a second imaging device. For example, one image may be acquired by a camera employing a first type of sensor (e.g., a visible light sensor) at a first point in time, and another image may be acquired by a camera employing a second type of sensor (e.g., an infrared sensor) at a second point in time. The different imaging devices may employ different types of sensors. These different types of sensors may be sensors operative at different spectrums (e.g., one imaging device is operative at the visible light spectrum and another imaging device is operative at the infrared spectrum), sensors that are operative with different resolutions, sensors from different vendors or sensors of different models.
The disclosed technique may be adapted for civil or military purposes (e.g., detecting changes in general scenery), medical purposes (e.g., detecting changes in tissues), industrial purposes (e.g., detecting the appearance over time of failures such as cracks in structures using X-ray imaging), and the like. Accordingly, the types of imaging system (i.e., and the respective types of sensors used) which may be employed by the disclosed technique for acquiring these images, can be visible light imaging systems, near infrared imaging systems, microbolometer imaging
systems, ultraviolet imaging systems, X-ray imaging systems, MRI imaging systems, ultrasound imaging systems, and the like. The different images, in the image sequences, may include discrepancies in image properties, such as contrast, luminance, chrominance, resolution, and the like. However, actual objects which appear in the same place in the scene, in different images acquired by different imaging devices, are likely to result in substantially similar segments in the respective segmented images, regardless of the acquiring imaging device. The similarities of the segments are based on certain segment characteristics (e.g., color, size, shape and texture).
A system, according to an embodiment of the disclosed technique, initially co-registers the images, to approximately align the coordinate systems of the images. The system may further smooth the images. The system segments the images multiple times, each time at a different scale. Each segment may represent a meaningful object in the scene. The system attempts to identify, for each segment at each segmentation scale, a corresponding segment from any of the segmentation scales in the other images. The system disregards segments which agree in location, shape and size. The system further disregards segments in one image, for which the pixels of the segment are correlated with a group of pixels, which can be used to define a substantially similar segment at essentially the same location in the other images (i.e., a segment does not necessarily exist in the respective area of the other image). The system retains the segments for which a corresponding segment was not identified. The system further retains segments, for which the pixels of the segment are not correlated with a group of pixels in the respective area of the other images. The system selects segments from the retained segments according to color, texture, shape and size criteria. The system declares the selected segments inconstant. The system described above is able to find inconstancies between images which were acquired by different imaging devices or different
sensors or sensor types. The different images may include many differences in contrast, in light level, in resolution and in other image properties, but real objects in the scene cause the segmentor to create similar segments regardless of the imaging device. According to another embodiment of the disclosed technique, after segmenting the images, the system selects segments of interest according to color, size, shape and texture criteria. The system disregards segments to which a corresponding segment was identified in the other images. The system further disregards segments in one image, for which the pixels of the segment are correlated with a group of pixels, which can be used to define a substantially similar segment at essentially the same location in the other images.
Reference is now made to Figure 1, which is a schematic illustration of a system, generally referenced 100, constructed and operative in accordance with an embodiment of the disclosed technique. System 100 receives a sequence of images as its input and outputs a sequence of images indicating the inconstant objects in the scene. System 100 includes a pre-processor 102, an image segmentor 104, an inconstancy detector 106 and a segments identifier 108. Pre-processor 102 is coupled with image segmentor 104. Image segmentor 104 is coupled with inconstancy detector 106. Inconstancy detector 106 is coupled with segments identifier 108.
Pre-processor 102 receives a sequence of images, and performs operations (e.g., co-registering, smoothing and enhancing) which are required to prepare the images for inconstancy detection. Pre-processor 102 provides the prepared images to image segmentor 104. Image segmentor 104 segments the images at multiple segmentation scales. Image segmentor 104 provides the segmented images to inconstancy detector 106. Inconstancy detector 106 declares each segment constant or inconstant. Inconstancy detector 106 provides the segmented images, with the inconstant segments designated, to segments identifier 108. Segments
identifier 108 identifies segments of interest from each of the images and provides a representation, indicating the inconstant segments.
Reference is now made to Figure 2, which is a schematic illustration of a method for detecting inconstancy between images, operative in accordance with another embodiment of the disclosed technique. In procedure 120, a plurality of images to be compared for inconstancies are pre-processed. Such pre-processing is directed at preparing the images for inconstancy detection. For example, a pre-processing sub-procedure may include co-registration. Co-registration is aimed at aligning the image coordinate systems. Pre-processing may further include a smoothing sub-procedure. Smoothing is aimed at removing noise artifacts from the images. Pre-processing may further include distortion correction, enhancement and any other operation required to prepare the images for inconstancy detection. With reference to Figure 1, pre-processor 102 pre-processes a plurality of images to be compared for inconstancies.
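The smoothing sub-procedure may be sketched, for example, as a simple 3x3 box filter. This is an illustrative assumption; the disclosed technique does not prescribe a particular smoothing kernel.

```python
def smooth(image):
    """3x3 box filter: replace each pixel by the mean of itself and its
    in-bounds neighbours, suppressing single-pixel noise artifacts."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

# A single-pixel noise spike of intensity 100 on a uniform background.
noisy = [[10, 10, 10], [10, 100, 10], [10, 10, 10]]
smoothed = smooth(noisy)  # the spike is averaged down toward the background
```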
In procedure 122, a plurality of prepared images are segmented. Such segmentation is aimed at dividing the images into regions (i.e., segments). The images are segmented a plurality of times, each time at a different scale. A segment at any scale may represent a meaningful object in the scene. With reference to Figure 1, image segmentor 104 segments the prepared images at a plurality of different scales.
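One possible way to produce segmentations at multiple scales, sketched here for a one-dimensional row of intensities for brevity, is hierarchical merging: every pixel starts as its own segment, the most similar adjacent segments are merged repeatedly, and the segmentation is recorded at the requested segment counts. The merging criterion (difference of mean intensities) and the chosen scales are illustrative assumptions.

```python
def multiscale_segments(row, scales):
    """Hierarchical 1-D segmentation: start with one segment per pixel,
    repeatedly merge the adjacent pair with the closest mean intensities,
    and record the segmentation whenever the segment count hits a
    requested scale."""
    segments = [[v] for v in row]  # each segment holds its pixel values
    recorded = {}
    while len(segments) > 1:
        if len(segments) in scales:
            recorded[len(segments)] = [list(s) for s in segments]
        means = [sum(s) / len(s) for s in segments]
        i = min(range(len(segments) - 1),
                key=lambda k: abs(means[k] - means[k + 1]))
        segments[i:i + 2] = [segments[i] + segments[i + 1]]
    if 1 in scales:
        recorded[1] = [list(s) for s in segments]
    return recorded

# A row with three intensity plateaus; request a fine and a coarse scale.
row = [10, 11, 12, 90, 91, 200]
levels = multiscale_segments(row, scales={2, 4})
```

At the finer scale the three plateaus remain distinguishable; at the coarser scale only the strongest boundary survives.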
In procedure 124, for each segment in one image, an attempt is made to identify a corresponding segment in the other images. The attempt to identify a corresponding segment is made at any of the segmentation scales. A segment is identified if a corresponding segment exists in the other images, at essentially the same location and with essentially the same segment characteristics as the selected segment. The segment characteristics may include color, size, shape and texture. With reference to Figure 1, inconstancy detector 106 attempts to identify a corresponding segment in the other images.
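The matching rule of procedure 124 may be sketched as follows. The segment characteristics used here (centroid, area, mean colour), their representation as dictionaries and the tolerance values are illustrative assumptions; the sketch only shows that a corresponding segment must agree in location and characteristics within a tolerance.

```python
def find_corresponding(segment, candidates, tolerance):
    """Look for a candidate segment (from any scale of the other image)
    whose centroid, area and mean colour all lie within a tolerance of
    the query segment's characteristics."""
    for cand in candidates:
        close_location = (abs(cand["cx"] - segment["cx"]) <= tolerance["location"]
                          and abs(cand["cy"] - segment["cy"]) <= tolerance["location"])
        close_size = abs(cand["area"] - segment["area"]) <= tolerance["area"]
        close_colour = abs(cand["colour"] - segment["colour"]) <= tolerance["colour"]
        if close_location and close_size and close_colour:
            return cand
    return None  # no corresponding segment: candidate for inconstancy

tol = {"location": 3, "area": 10, "colour": 20}
car = {"cx": 50, "cy": 20, "area": 40, "colour": 120}
other_image_segments = [
    {"cx": 51, "cy": 21, "area": 42, "colour": 125},   # same car, slightly shifted
    {"cx": 200, "cy": 80, "area": 400, "colour": 90},  # a building elsewhere
]
match = find_corresponding(car, other_image_segments, tol)
```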
In procedure 126, for each segment in one image, an attempt is made to identify, in the other images, a group of pixels correlated with the pixels of the segment, which can be used to define a substantially similar segment at essentially the same location. A segment is identified if a group of pixels in the other images, correlated with the pixels of the segment, exists at essentially the same location as the segment. With reference to Figure 1, inconstancy detector 106 attempts to identify such a group of pixels in the other images.
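The correlation test of procedure 126 may be sketched with a Pearson correlation between the segment's pixel intensities and the pixels at the same locations in the other image. A high correlation suggests a group of pixels that could define a substantially similar segment, even when the images differ in gain and offset (e.g., visible versus infrared acquisition). The pixel values below are illustrative assumptions.

```python
def normalized_correlation(xs, ys):
    """Pearson correlation between a segment's pixel intensities and the
    pixels at the same locations in the other image; invariant to gain
    and offset differences between the two acquisitions."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    if vx == 0 or vy == 0:
        return 0.0  # a flat patch carries no correlation information
    return cov / (vx * vy)

segment_pixels = [100, 102, 98, 101, 99]
same_spot_other = [201, 205, 197, 203, 199]  # same pattern, different gain/offset
different_spot = [180, 50, 160, 20, 90]      # unrelated content

r_same = normalized_correlation(segment_pixels, same_spot_other)
r_diff = normalized_correlation(segment_pixels, different_spot)
```

Here `r_same` is close to one despite the doubled gain and shifted offset, while `r_diff` is low; only the latter case leaves the segment as an inconstancy candidate.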
In procedure 128, the segments in one image, for which a corresponding segment, agreeing in location, color, shape, size and texture, does not exist in the other images, are retained. The segments in one image, for which neither a corresponding segment nor a group of pixels, correlated with the pixels of the segment, was identified in the other images, are further retained. With reference to Figure 1, inconstancy detector 106 retains the segments in one image for which a corresponding segment in the other images was not identified.
In procedure 130, segments of interest are selected from the retained segments. The segments are selected according to segment characteristics. Segment characteristics may include color, size, shape and texture. With reference to Figure 1, segments identifier 108 selects the segments of interest.
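The selection of procedure 130 may be sketched as a filter over segment characteristics. The characteristics used, their dictionary representation and the criterion ranges are illustrative assumptions.

```python
def select_segments_of_interest(segments, criteria):
    """Keep only segments whose characteristics fall inside the given
    ranges; each criterion maps a characteristic name to a (low, high)
    inclusive range."""
    selected = []
    for seg in segments:
        if all(lo <= seg[name] <= hi for name, (lo, hi) in criteria.items()):
            selected.append(seg)
    return selected

# Interested only in small, bright segments (e.g., vehicles on a highway).
criteria = {"area": (20, 60), "colour": (100, 255)}
segments = [
    {"area": 40, "colour": 180},   # car-sized and bright -> of interest
    {"area": 500, "colour": 140},  # building-sized -> ignored
    {"area": 30, "colour": 40},    # small but dark -> ignored
]
interesting = select_segments_of_interest(segments, criteria)
```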
In procedure 132, the selected segments are declared inconstant. With reference to Figure 1, segments identifier 108 declares the selected segments inconstant. In procedure 134, the inconstant objects are represented. The inconstant objects may be represented as a list, marked on an image or a map, alerted for, displayed on a video monitor or saved in a computer memory. With reference to Figure 1, segments identifier 108 provides a representation of the inconstant objects. In order to reduce computational complexity, the system, according to a further embodiment of the disclosed technique, may first select
segments of interest. The system then detects if these segments of interest are constant or not.
Reference is now made to Figure 3, which is a schematic illustration of a system, generally referenced 160, constructed and operative in accordance with a further embodiment of the disclosed technique. System 160 receives a sequence of images as its input and outputs a sequence of images indicating the inconstant objects in the scene. System 160 includes a pre-processor 162, an image segmentor 164, a segments identifier 166 and an inconstancy detector 168. Pre-processor 162 is coupled with image segmentor 164. Image segmentor 164 is coupled with segments identifier 166. Segments identifier 166 is coupled with inconstancy detector 168.
Pre-processor 162 receives a sequence of images and performs operations (e.g., co-registering, smoothing and enhancing) which are required to prepare the images for inconstancy detection. Pre-processor 162 provides the prepared images to image segmentor 164. Image segmentor 164 segments the images at multiple segmentation scales. Image segmentor 164 provides the segmented images to segments identifier 166. Segments identifier 166 identifies segments of interest in each of the images and provides the segmented images, with the segments of interest designated, to inconstancy detector 168. Inconstancy detector 168 declares each segment constant or inconstant. Inconstancy detector 168 provides a representation indicating the inconstant segments. Reference is now made to Figure 4, which is a schematic illustration of a method for detecting inconstancy between images, operative in accordance with another embodiment of the disclosed technique.
In procedure 180, a plurality of images to be compared for inconstancies are pre-processed. Such pre-processing is directed at preparing the images for inconstancy detection. For example, a pre-processing sub-procedure may include co-registration. Co-registration is
aimed at aligning the image coordinate systems. Pre-processing may further include a smoothing sub-procedure. Smoothing is aimed at removing noise artifacts from the images. Pre-processing may further include distortion correction, enhancement and any other operation required to prepare the images for inconstancy detection. With reference to Figure 3, pre-processor 162 pre-processes a plurality of images to be compared for inconstancies.
In procedure 182, a plurality of prepared images are segmented. Such segmentation is aimed at dividing the images into regions (i.e., segments). The images are segmented a plurality of times, each time at a different scale. A segment at any scale may represent a meaningful object in the scene. With reference to Figure 3, image segmentor 164 segments the prepared images at a plurality of different scales.
In procedure 184, segments of interest are selected in the multi-scale segmented images. These segments are selected from any of the segmentation scales. The segments are selected according to the segment characteristics. The segment characteristics may include color, shape, size and texture. With reference to Figure 3, segments identifier 166 selects the segments of interest from a plurality of segmentation scales, according to segment characteristics.
In procedure 186, an attempt is made to identify, for each selected segment of interest in one image, a corresponding group of pixels which can be used to define a substantially similar segment, at essentially the same location, in the other image. The group of pixels may be a segment in the other image. The attempt to identify a corresponding segment is based on location of the segments in the images. The attempt to identify a corresponding segment is further based on the characteristics of the segments, and correlation between the pixels of the segments. With reference to Figure 3, segments identifier 166 attempts to identify for each selected segment of interest in one image, a corresponding selected segment in the other images.
In procedure 188, a selected segment of interest in one image is disregarded, if a corresponding segment is identified in the other images. The segment is disregarded if the corresponding segment in the other images is identified at essentially the same location with essentially the same segment characteristics as the selected segment. The corresponding segment may itself be a selected segment of interest. The segment characteristics may include color, size, shape and texture. With reference to Figure 3, inconstancy detector 168 disregards a segment according to the compared segments' location and characteristics. In procedure 190, a selected segment in one image is disregarded, if the pixels of the segment are correlated with a group of pixels, which can be used to define a substantially similar segment at essentially the same location in the other images. A segment is disregarded if such a group of pixels, correlated with the pixels of the segment, exists at essentially the same location as the segment. With reference to Figure 3, inconstancy detector 168 disregards a segment according to the compared segments' location and pixel correlation.
In procedure 192, a selected segment is declared inconstant, if a corresponding selected segment, which agrees in location, color, shape and size, was not identified in the other images. A selected segment is further declared inconstant if a group of pixels, correlated with the pixels of the segment, was not identified at essentially the same location in the other images. With reference to Figure 3, inconstancy detector 168 declares a segment inconstant. In procedure 194, the inconstant objects are represented. The inconstant objects may be represented as a list, marked on an image or a map, alerted for, displayed on a video monitor or saved in a computer memory. With reference to Figure 3, inconstancy detector 168 provides a representation of the inconstant objects. Reference is now made to Figures 5A, 5B, 5C, 6A, 6B, 6C and 7.
Figures 5A and 6A are illustrations of two images, each acquired at a
different time, to be analyzed, according to the disclosed technique. Each of the images of Figures 5A and 6A exhibits objects that are candidates for inconstancy detection. Both images exhibit a highway section with surrounding scene, and some vehicle traffic. Figures 5B and 6B are illustrations of respective segmentations of the images of Figures 5A and 6A, at certain segmentation levels. It is noted that the respective segmentation levels of the images, can be identical or different. Some of the segments represent the vehicles on the highway section. Figures 5C and 6C provide object images which demonstrate the results of inconstancy analysis and detection, according to the disclosed technique. The detection is presented over the images of Figures 5A and 6A. The objects marked (i.e., by circles) in Figure 5C are objects in Figure 5A which exhibit a change, with respect to the objects of Figure 6A. The objects marked (i.e., by circles) in Figure 6C are objects in Figure 6A which exhibit change, with respect to the objects of Figure 5A.
Figure 7 is the same image as in Figure 6C, with the circles from Figure 5C superimposed. Circle 214 represents circle 210 of Figure 5C. Circle 216 represents circle 212 of Figure 6C. Circles 214 and 216 are in close proximity to one another. Therefore, the object marked by circles 210 and 212 may be declared constant in both images. The rest of the marked objects in Figure 5C are declared inconstant in Figure 6C. Similarly, all the objects in Figure 6C, excluding the object marked by circle 212, are declared inconstant in Figure 5C. It will be appreciated by persons skilled in the art that the disclosed technique is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the disclosed technique is defined only by the claims, which follow.