
AU2009230796A1 - Location-based brightness transfer function - Google Patents

Location-based brightness transfer function

Info

Publication number
AU2009230796A1
Authority
AU
Australia
Prior art keywords
source
signature
target
spatial region
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2009230796A
Inventor
Peter Jan Pakulski
Daniel John Wedge
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Application filed by Canon Inc
Priority to AU2009230796A
Publication of AU2009230796A1
Current legal status: Abandoned


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Description

S&F Ref: 905836

AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): Daniel John Wedge, Peter Jan Pakulski
Address for Service: Spruson & Ferguson, St Martins Tower, Level 35, 31 Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Location-based brightness transfer function

The following statement is a full description of this invention, including the best method of performing it known to me/us:

LOCATION-BASED BRIGHTNESS TRANSFER FUNCTION

TECHNICAL FIELD

The present disclosure relates generally to video object tracking and, in particular, to tracking video objects under varying conditions of lighting.

DESCRIPTION OF BACKGROUND ART

Many existing approaches for tracking an object operate by matching characteristics of the object over multiple frames of an image sequence, or over frames of multiple image sequences. Some approaches use the position of an object, the size of an object, or both. Performing object tracking using the position and size of an object can lead to incorrect tracking when multiple objects appear in a similar location with a similar size, since an object may be associated with an incorrect track. Using the appearance of an object in addition to the position and size of the object can assist in preventing incorrect associations, if multiple objects in a similar location have different appearances, such as, for example, colour and/or texture.

One approach for tracking an object based on the appearance of the object uses a summary of the appearance of the object, known as a signature. In one implementation, the signature includes a histogram of luminance values within an area of a frame corresponding to the object under consideration. The area of the frame corresponding to the object is known as an object mask. From the object mask, it is possible to derive a bounding box for the object, which is a rectangular box, oriented with respect to the x and y axes, which encloses the object under consideration. In another implementation, the signature includes a histogram of colour values within the object mask. The signature of an object detected in a current frame can be compared with the signature of an object from a previous frame. If there is a high similarity between the signatures and also between the spatial characteristics of the objects, for example, the locations and sizes of the bounding boxes of the objects, the object in the current frame can be associated with the object in the previous frame. Thus, it is possible to track the object over multiple frames.

In one approach for constructing a signature for an object defined by an object mask, the object mask is first used to compute a weight for each visual element within the object mask. A visual element may be, for example, a pixel, a group of pixels, or an 8x8 DCT (Discrete Cosine Transform) block, as used in JPEG images in a motion-JPEG stream. The weight is derived from a distance transform performed on the object mask. Thus, the weighting of an image element is related to the distance of the image element from the boundary of the object mask. Examples of distance metrics used in the distance transform include the Euclidean distance and the Manhattan distance. The contribution of each image element to the signature is then based on the weight derived from the distance transform value for the image element. Weighting the contributions to the signature according to the value of the distance transform causes the centre of an object to contribute more data to the signature than would otherwise occur without performing the weighting. This provides an advantage, since the centre of an object may be more visually stable than the extremities of the object, particularly for non-rigid objects such as people.
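As a rough illustration of this background approach, the sketch below computes a distance-transform-weighted luminance histogram for an object mask. It is a minimal sketch only, assuming an 8-bit greyscale frame and a binary object mask; the function name, the bin count and the use of SciPy's Euclidean distance transform are illustrative choices, not details taken from this specification.

```python
# A minimal sketch, assuming an 8-bit greyscale frame and a binary object mask.
# The bin count and the use of SciPy's Euclidean distance transform are
# illustrative assumptions, not details prescribed by the specification.
import numpy as np
from scipy.ndimage import distance_transform_edt

def weighted_luminance_signature(luma, mask, bins=16):
    """Luminance histogram over the object mask, weighted by distance from the mask boundary."""
    weights = distance_transform_edt(mask)  # 0 at the boundary, increasing towards the object centre
    inside = mask > 0
    hist, _ = np.histogram(luma[inside], bins=bins, range=(0, 256), weights=weights[inside])
    total = hist.sum()
    return hist / total if total > 0 else hist  # normalise so signatures of differently sized objects compare
```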
A signature is a summary of the visual appearance of an object for a given pose under the lighting conditions under which the object was observed. Thus, if lighting conditions change, it is probable that the signature will change. The lighting conditions may change due to localised lighting changes; for example, as an object moves between a sunny area and a shaded area, its apparent brightness will change. The lighting conditions may also change due to the object being viewed by different cameras with different gain settings, or due to other characteristics that affect the brightness.

It is possible to compensate for changes in lighting conditions by applying one or more Brightness Transfer Functions (BTFs), which model the change in luminance due to a change in lighting conditions, such as those mentioned above. In one approach in which the object signature includes a histogram, the BTF between two signatures is a mapping between corresponding bins of the two histograms. In one approach, a BTF is applied to luminance histogram data. In another approach, independent BTFs are applied to each of the Red, Green, and Blue colour components of a signature.

Tracking using multiple cameras involves determining whether an object viewed in a first frame captured by a first camera and an object viewed in a second frame captured by a second camera are views of the same real-world object. One method of applying a BTF in multiple-camera tracking is a Global Compensation Method (GC Method). For each pair of cameras, the GC Method models an expected uniform change in illumination caused by moving from the field of view of one camera to the field of view of another camera. First, the BTF is applied to the signature of an object as captured by one of the cameras, to compensate for the change in brightness, resulting in a compensated signature. The compensated signature is then compared to the signature of the object as captured by the other camera to determine whether the viewed objects correspond to the same real-world object. The disadvantage of the GC Method is that the assumption of a uniform change in illumination does not always hold in practice.

In another approach, known as the Subspace Method, a subspace of all possible BTFs is determined. A training set contains pairs of signatures of an object, where for each pair, each signature is summarised from an image taken by a different camera. First, for each pair of signatures in the training set, the BTF required to compensate for the observed change in lighting between the two views of the object is determined. Then, by analysing all BTFs from all pairs of objects, a subspace of possible BTFs is estimated. The subspace encapsulates all observed changes in the signatures due to the changes in apparent pose, lighting and camera settings caused by viewing in a different camera.
Then, in online tracking, to determine if an object viewed in a first camera and an object viewed in a second camera correspond to the same real-world object, the BTF between the observed views is computed. The probability that the two viewed objects correspond to the same real-world object can then be derived from the probability that the computed BTF between the objects lies in the subspace of all BTFs learned in the training phase. The Subspace Method is unable to handle changes of brightness that lie outside of the subspace learned in the training phase. Further, the subspace may cover a wide range of BTFs, only a small portion of which may be relevant to any selected pair of cameras.

In yet another approach, known as the Exposure Compensation Method, the exposure time, being the period of time during which a frame is captured, is controlled depending on the content of the frame, thus controlling the brightness of the frame. For example, to prevent white saturation, where light objects are over-exposed and are imaged as white, the exposure time may be shortened, and to prevent black crushing, where dark objects are under-exposed and are imaged as black, the exposure time may be lengthened. A drawback of using the Exposure Compensation Method is that the brightness of the entire frame changes when the exposure time is modified. An optimal exposure time for an object in a brightly-lit region of a frame may not provide an optimal exposure for an object in a dimly-lit region of the same frame. Thus, the Exposure Compensation Method is not suitable for tracking multiple objects visible in the same frame.

Thus, a need exists to provide an improved system and method for tracking video objects under locally varying lighting conditions.

SUMMARY

It is an object of the present invention to overcome substantially, or at least ameliorate, one or more disadvantages of existing arrangements.

According to a first aspect of the present disclosure, there is provided a method of determining a correspondence between a source object in a first video frame and a target object in a second video frame, the source object being associated with a source signature incorporating at least luminance characteristics of the source object and the target object being associated with a target signature incorporating at least luminance characteristics of the target object. The method includes the step of determining a brightness transfer function between a first spatial region of the first frame corresponding to the source object, the first spatial region being less than the first frame, and a second spatial region of the second frame corresponding to the target object, the second spatial region being less than the second frame. The method further includes modifying at least one of the source signature and the target signature in accordance with the brightness transfer function to produce a modified source signature and a modified target signature, and computing a similarity measure between the modified source signature and the modified target signature. The method then determines a correspondence between the source object and the target object, based on the similarity measure.
According to a second aspect of the present disclosure, there is provided a camera system for determining a correspondence between a source object in a first video frame and a target object in a second video frame, the source object being associated with a source signature incorporating at least luminance characteristics of the source object and the target object being associated with a target signature incorporating at least luminance characteristics of the target object. The camera system includes a first lens system, a first camera module coupled to the first lens system to store the first video frame, a second lens system, and a second camera module coupled to the second lens system to store the second video frame. The camera system also includes a storage device for storing a computer program, and a processor for executing the program. The program includes code for determining a brightness transfer function between a first spatial region of the first frame corresponding to the source object, the first spatial region being less than the first frame, and a second spatial region of the second frame corresponding to the target object, the second spatial region being less than the second frame. The program also includes code for modifying at least one of the source signature and the target signature in accordance with the brightness transfer function to produce a modified source signature and a modified target signature, and code for computing a similarity measure between the modified source signature and the modified target signature. The program further includes code for determining a correspondence between the source object and the target object, based on the similarity measure.

According to a third aspect of the present disclosure, there is provided a method of determining a brightness transfer function relating a source spatial region in a first video frame, associated with a source signature incorporating at least luminance characteristics of the source object, and a target spatial region in a second video frame, associated with a target signature incorporating at least luminance characteristics of the target object. The method includes the step of, in a first instance of relating the source spatial region and the target spatial region, computing a similarity measure between the source signature and the target signature and, if the similarity measure is above a similarity threshold, determining the brightness transfer function relating the source spatial region and the target spatial region as a residual brightness transfer function that minimises a difference between the source signature and the target signature. In later instances of relating the source spatial region and the target spatial region, the method includes the steps of retrieving a previously determined brightness transfer function appropriate for relating the source spatial region and the target spatial region, and applying the previously determined brightness transfer function to at least one of the source signature and the target signature to produce a modified source signature and a modified target signature.
If a similarity measure computed from the modified source signature and the modified target signature is above a similarity threshold, the method determines a residual brightness transfer function that minimises the difference between the modified source signature and the modified target signature, and updates the previously determined brightness transfer function to incorporate the residual brightness transfer function.

According to a fourth aspect of the present disclosure, there is provided a method of determining a correspondence between a source object in a first video frame and a target object in a second video frame, the source object being associated with a source signature incorporating at least luminance characteristics of the source object and the target object being associated with a target signature incorporating at least luminance characteristics of the target object. The method includes the step of determining a brightness transfer function between a first spatial region of the first frame corresponding to the source object, the first spatial region being less than the first frame, and a second spatial region of the second frame corresponding to the target object, the second spatial region being less than the second frame. The method also includes the steps of modifying at least one of the first spatial region and the second spatial region in accordance with the brightness transfer function to produce a first modified spatial region and a second modified spatial region, and deriving a modified source signature and a modified target signature from the first modified spatial region and the second modified spatial region. The method then computes a similarity measure between the modified source signature and the modified target signature, and determines a correspondence between the source object and the target object, based on the similarity measure, wherein the correspondence indicates a match between the source object and the target object when the correspondence is less than a predefined threshold.

According to a fifth aspect of the present disclosure, there is provided an imaging system for determining a correspondence between a source object in a first video frame and a target object in a second video frame, the source object being associated with a source signature incorporating at least luminance characteristics of the source object and the target object being associated with a target signature incorporating at least luminance characteristics of the target object. The imaging system includes a storage device for storing a computer program, and a processor for executing the program. The program includes code for determining a brightness transfer function between a first spatial region of the first frame corresponding to the source object, the first spatial region being less than the first frame, and a second spatial region of the second frame corresponding to the target object, the second spatial region being less than the second frame. The program also includes code for modifying at least one of the source signature and the target signature in accordance with the brightness transfer function to produce a modified source signature and a modified target signature, code for computing a similarity measure between the modified source signature and the modified target signature, and code for determining a correspondence between the source object and the target object, based on the similarity measure.
According to another aspect of the present disclosure, there is provided an apparatus for implementing any one of the aforementioned methods.

According to another aspect of the present disclosure, there is provided a computer program product including a computer readable medium having recorded thereon a computer program for implementing any one of the aforementioned methods.

Other aspects of the invention are also disclosed.

BRIEF DESCRIPTION OF THE DRAWINGS

One or more embodiments of the invention will now be described with reference to the following drawings, in which:

Fig. 1 is a diagram illustrating a scene captured by two cameras at two time instants, and a bounding box of a person detected within the scene;

Fig. 2 is a diagram illustrating histograms derived from the person detected within the scenes shown in Fig. 1;

Figs 3A and 3B form a schematic block diagram of a general purpose computer system upon which the arrangements described can be practised;

Fig. 4A is a schematic flow diagram illustrating a method of tracking;

Fig. 4B is a schematic flow diagram illustrating in detail one step in a method of tracking;

Fig. 5 is a schematic flow diagram illustrating a multiple camera tracking system of determining tracks corresponding to the same real-world object;

Fig. 6A is a schematic flow diagram illustrating one arrangement of applying brightness transfer functions;

Fig. 6B is a schematic flow diagram illustrating one arrangement of computing a spatio-temporal similarity between two tracks;

Fig. 7 shows an electronic system for implementing the disclosed appearance-invariant tracking method; and

Fig. 8 is a schematic block diagram representation of an imaging system.

DETAILED DESCRIPTION

Where reference is made in any one or more of the accompanying drawings to steps and/or features that have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.

Overview

Disclosed herein are a method and system for determining a correspondence between a source object in a first video frame and a target object in a second video frame. In one implementation, the first and second video frames are taken from at least one sequence of video frames captured by a single camera. In another implementation, the first frame is taken from a first image sequence captured by a first camera and the second frame is taken from a second image sequence captured by a second camera.

Determining a correspondence between the source object and the target object enables the source object and target object to be classified as a match relating to the same object if the correspondence is greater than, equal to, or less than a predefined threshold, depending on the particular application. Conversely, the source object and target object can be classified as not a match, and thus not relating to the same object, if the correspondence is less than, equal to, or greater than the predefined threshold. In one implementation, a quality of a match between the source object and the target object can be ascertained by classifying the correspondence into one or more predefined groups or classes, such as percentile bands.
In one embodiment, the source object is associated with a source signature including luminance characteristics of the source object and the target object is associated with a target signature including luminance characteristics of the target object. In one implementation, the signatures are luminance histograms that include bins defining a luminance scale. The method determines a brightness transfer function between a first spatial region of the first frame corresponding to the source object and a second spatial region of the second frame corresponding to the target object. In one implementation, the first spatial region and the second spatial region are less than the size of the first and second frames, respectively. In another implementation, the first spatial region is equal to the size of the first frame and the second spatial region is equal to the size of the second frame. In another implementation, the first and second spatial regions correspond to bounding boxes for the source and target objects, respectively.

The method applies the brightness transfer function to one or more of the first spatial region, the second spatial region, the source signature, and the target signature to produce a modified source signature and a modified target signature. Under some circumstances, the modified source signature will be the same as the source signature, or the modified target signature will be the same as the target signature. This may occur, for example, when the brightness transfer function is applied to the source signature and not to the target signature, and vice versa. The method then computes a similarity measure between the modified source signature and the modified target signature, whereupon the similarity measure is used to determine a correspondence between the source object and the target object.
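By way of illustration only, the following minimal sketch shows one way the flow just described could be realised, assuming histogram signatures stored as NumPy vectors, a brightness transfer function expressed as a bin-to-bin mapping, and a Euclidean distance as the comparison measure. The function names, the decision to modify only the source signature, and the threshold value are assumptions made for the example, not details taken from the specification.

```python
# A minimal sketch, assuming histogram signatures and a BTF given as a bin-to-bin mapping.
import numpy as np

def apply_btf(signature, btf):
    """Remap the content of each source bin to the target bin indicated by `btf`."""
    modified = np.zeros_like(signature, dtype=float)
    for src_bin, dst_bin in enumerate(btf):
        modified[dst_bin] += signature[src_bin]
    return modified

def objects_correspond(source_sig, target_sig, btf, threshold=1.0):
    """Modify the source signature with the BTF, then compare it to the target signature."""
    modified_source = apply_btf(source_sig, btf)    # here only the source signature is modified
    difference = np.linalg.norm(modified_source - target_sig)
    return difference < threshold                   # a small difference indicates a match
```

For a five-bin region, the darkening mapping illustrated later with reference to Fig. 2C would correspond to btf = [0, 0, 1, 2, 3] in this representation.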
According to another embodiment, a camera system for determining a correspondence between a source object in a first video frame and a target object in a second video frame is provided. In one implementation, multiple cameras are utilised and the camera system includes a first lens system and a first camera module coupled to the first lens system to store the first video frame, and a second lens system and a second camera module coupled to the second lens system to store the second video frame. The camera system also includes a storage device for storing a computer program, and a processor for executing the program. The program includes code for performing the steps of the previously described method.

According to another embodiment, there is provided a method of determining a brightness transfer function relating a source spatial region in a first video frame and a target spatial region in a second video frame. The source spatial region is associated with a source signature incorporating at least luminance characteristics of a source object and the target spatial region is associated with a target signature incorporating at least luminance characteristics of a target object.

In a first instance of relating the source spatial region and the target spatial region, the method computes a similarity measure between the source signature and the target signature and, if the similarity measure is above a similarity threshold, determines the brightness transfer function relating the source spatial region and the target spatial region as a residual brightness transfer function that minimises a difference between the source signature and the target signature.

In later instances of relating the source spatial region and the target spatial region, the method retrieves a previously determined brightness transfer function appropriate for relating the source spatial region and the target spatial region, and applies the previously determined brightness transfer function to at least one of the source signature and the target signature to produce a modified source signature and a modified target signature. There may be one or more brightness transfer functions from which to select, and within those brightness transfer functions there may be more than one brightness transfer function relating the source spatial region to the target spatial region. The brightness transfer function appropriate for a particular scenario will depend on the relevant circumstances. In one arrangement, the brightness transfer function is appropriate if a first spatial similarity score, computed between the source spatial region and a first region related to a brightness transfer function and described later, and a second spatial similarity score, computed between the target spatial region and a second region related to a brightness transfer function and described later, are each above a predetermined spatial similarity threshold, for example, 0.5.

If a similarity measure computed from the modified source signature and the modified target signature is above a similarity threshold, the method determines a residual brightness transfer function that minimises the difference between the modified source signature and the modified target signature, and updates the previously determined brightness transfer function to incorporate the residual brightness transfer function.

Object Detection

A video is a sequence of images or frames. Thus, each frame is an image in an image sequence. Each frame of the video has an x axis and a y axis. A scene is the information contained in a frame and may include, for example, foreground objects, background objects, or a combination thereof. A scene model is stored information relating to a background. A scene model generally relates to background information derived from an image sequence. A video may be encoded and compressed. Such encoding and compression may be performed intra-frame, such as motion-JPEG (M-JPEG), or inter-frame, such as specified in the H.264 standard. An image is made up of visual elements. The visual elements may be, for example, pixels, or 8x8 DCT (Discrete Cosine Transform) blocks as used in JPEG images in a motion-JPEG stream.

For the detection of real-world objects visible in a video, a foreground separation method is applied to individual frames of the video, resulting in detections. Other methods of detecting real-world objects visible in a video are also known and may equally be practised. Such methods include, for example, image segmentation. In one arrangement, foreground separation is performed by frame differencing.
Frame differencing subtracts a current frame from a previous frame. In another arrangement, foreground separation is done by background modelling. That is, a scene model is created by aggregating the visual characteristics of pixels or blocks in the scene over multiple frames spanning a time period. Visual characteristics that have contributed consistently to the model are considered to form the background. Any area where the background model is different from the current frame is then considered to be foreground.
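The following minimal sketch illustrates both foreground-separation arrangements just described, assuming 8-bit greyscale frames; the threshold, the learning rate and the function names are assumptions made for the example only.

```python
# A minimal sketch of the two foreground-separation arrangements described above,
# assuming 8-bit greyscale frames. The threshold and learning rate are illustrative.
import numpy as np

def frame_difference_foreground(current, previous, threshold=25):
    """Foreground mask by frame differencing: mark pixels that changed noticeably."""
    diff = np.abs(current.astype(np.int16) - previous.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def update_scene_model(scene_model, current, alpha=0.05):
    """Aggregate visual characteristics over time; consistent contributions form the background."""
    return (1.0 - alpha) * scene_model + alpha * current.astype(np.float64)

def background_model_foreground(scene_model, current, threshold=25):
    """Foreground mask by background modelling: any area differing from the scene model."""
    diff = np.abs(current.astype(np.float64) - scene_model)
    return (diff > threshold).astype(np.uint8)
```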
A detection has a spatial representation containing at least a height, a width, and a position. In one implementation, the position is provided by both x and y co-ordinates. There may be more characteristics associated with the spatial representation of a detection. Such characteristics can include, for example, one or more of a roundness measure, a principal axis, colour descriptors, or texture descriptors. The characteristics may be based, for example, on a silhouette of the object, or on the original visual content corresponding to the object. In one arrangement, the position of the spatial representation of the detection is the top-left corner of a bounding box (with width and height) of the detection. In another arrangement, the position of the spatial representation of the detection is the centroid of the spatial representation of the detection, the width of the spatial representation of the detection is the difference between the greatest and smallest x-coordinate that is part of the detection, and the height is computed in a similar fashion along the y-axis.

Figs 1A to 1D illustrate frames recorded by two video cameras. Figs 1A and 1B show a first frame 110 and a second frame 120, respectively, that have been recorded by a first video camera. Thus, the first frame 110 and the second frame 120 form part of a first image sequence. The first frame 110 is not necessarily the initial frame in the first image sequence. Further, the first frame 110 and the second frame 120 can be any frames from within the first image sequence and are not necessarily consecutive frames within the first image sequence. The first frame 110 and the second frame 120 may be separated by one or more other frames, not shown, over any period of time. The image shown in the first frame 110 includes a person 111 shown side-on and the second frame 120 includes a corresponding person 121 shown side-on. The image shown in each of the first frame 110 and the second frame 120 also includes a sun 102, and a building 105 with a roof 101. Mounted within the building 105 is a camera 103.

Figs 1C and 1D show a third frame 130 and a fourth frame 140, respectively, that have been recorded by a second video camera 103. The third frame 130 and the fourth frame 140 form part of a second image sequence. The third frame 130 and the fourth frame 140 can be any frames within the second image sequence and are not necessarily consecutive frames within the second image sequence. The image shown in each of the third frame 130 and the fourth frame 140 includes a person 131 shown front-on, the person 131 being framed by a doorway of the building 105. As indicated above, the second video camera 103 is visible in the field of view of the first camera and is illustrated in the first frame 110 and the second frame 120.

The first frame 110 recorded by the first video camera illustrates a person 111 lit by the sun 102. In the second frame 120 recorded by the first video camera, the person 121 has now moved into the shade of the roof 101. As a result, the shading of the person 121 has become darker. The third frame 130 recorded by the second camera 103 is recorded at approximately the same time as the first frame 110. The first camera and the second camera 103 are contemporaneously recording the same scene from different viewing angles, which can result in the different images captured by the first and second cameras having different content, in the form of different background and foreground objects, and different lighting characteristics. The third frame 130 illustrates the person 131 lit by the sun. The fourth frame 140 recorded by the second camera 103 is recorded at approximately the same time as the second frame 120. In the fourth frame 140, the person 141 is shaded by the roof 101 and, consequently, the person 141 in the fourth frame 140 appears darker than the person 131 in the third frame 130.

Signatures

An object can be associated with a visual representation, otherwise known as a signature, which is a summary of the appearance of the object. In one arrangement, the signature includes a histogram of colour components. In another arrangement, described in detail below, the signature includes a histogram of luminance values.

When attempting to track a single object over multiple frames, a first instance of the object in a first frame can be considered as a source object and a second instance of the object in a second frame can be considered as a target object. The first frame and second frame can be taken from a single image sequence or from multiple image sequences. Further, the single image sequence and multiple image sequences can be captured by a single camera or multiple cameras. In one example in which multiple cameras are capturing a single scene, the source object appears in a first frame taken from a first image sequence captured by a first video camera and the target object appears in a second frame taken from a second image sequence captured by a second video camera. The first frame can be any frame within the first image sequence and is not necessarily the initial frame of the first image sequence. Similarly, the second frame can be any frame within the second image sequence and is not necessarily the second frame that occurs within that second image sequence. Attributes associated with the source and target objects are compared to determine whether the source and target objects relate to the same object.

Figs 1A to 1F and Figs 2A to 2C provide an example of computing a histogram-based signature for an object. In Figs 1A, 1B, and 1E, a first view of a person 111 is illustrated, wherein the person 111 has a torso and head that are shaded lightly, arms that are shaded moderately, and legs that are shaded black. The first view of the person 111 in this example is considered to be a source object. Figs 1C, 1D, and 1F show a second view of the same person 131, as captured by the second camera. In the second view of the person 131, the torso and head of the person are shaded white, the arms are shaded moderately, and the legs are shaded black. The second view of the person 131 in this example is considered to be a target object.

Fig. 1A shows the first view of the person 111 enclosed in an associated first bounding box 112.
In one arrangement, shown in Fig. 1E, the first bounding box 112 is equally divided both vertically and horizontally. Each quarter of the bounding box is known as a grid region. This results in the first bounding box 112 having a top-left grid region 151, a top-right grid region 152, a bottom-left grid region 153, and a bottom-right grid region 154. In another arrangement, not shown, the bounding box 112 is equally divided horizontally into quarters and vertically into quarters, such that there are sixteen equally-sized grid regions. Many other grid arrangements may equally be practised, without departing from the spirit and scope of the present disclosure.

A luminance histogram is independently computed for each grid region of the first bounding box 112. The luminance histograms corresponding to the respective grid regions are concatenated to form a single luminance histogram for the entire object, which in this example is the first person 111. Fig. 2A shows a single luminance histogram 2100 that represents the signature for the first person 111, or source object. The luminance histogram 2100 includes a number of bins, wherein each bin represents a level of luminance.

Histogram bin 2111 of the histogram 2100 represents shaded black regions within the top-left grid region 151 of the first bounding box of Fig. 1E. Histogram bin 2111 is empty and thus represents that there are no black regions within the top-left grid region 151. Histogram bin 2112 represents shaded dark regions within the top-left grid region 151. Histogram bin 2112 is empty. Histogram bin 2113 represents moderately shaded regions within the top-left grid region 151. Histogram bin 2113 is small due to the arm of the person being visible in the top-left grid region 151. Histogram bin 2114 represents lightly shaded regions within the top-left grid region 151. Histogram bin 2114 is large due to the torso and head of the person being visible within the top-left grid region 151. Histogram bin 2115 represents shaded white regions within the top-left grid region 151. Histogram bin 2115 is empty.

Histogram bin 2121 represents shaded black regions within the top-right grid region 152 of the first bounding box 112 of Fig. 1E. Histogram bin 2121 is empty. Histogram bin 2122 represents shaded dark regions within the top-right grid region 152. Histogram bin 2122 is empty. Histogram bin 2123 represents moderately shaded regions within the top-right grid region 152. Histogram bin 2123 is small due to the arm of the person being visible in the top-right grid region 152. Histogram bin 2124 represents lightly shaded regions within the top-right grid region 152. Histogram bin 2124 is small due to some of the torso and head of the person being visible within the top-right grid region 152. Histogram bin 2125 represents shaded white regions within the top-right grid region 152. Histogram bin 2125 is empty.

Histogram bin 2131 represents shaded black regions within the bottom-left grid region 153 of the first bounding box 112 of Fig. 1E. Histogram bin 2131 is large due to the shaded black legs of the person being visible in the bottom-left grid region 153. Histogram bin 2132 represents shaded dark regions within the bottom-left grid region 153. Histogram bin 2132 is empty. Histogram bin 2133 represents moderately shaded regions within the bottom-left grid region 153. Histogram bin 2133 is very small due to some of the arm of the person being visible in the bottom-left grid region 153.
Histogram bin 2134 represents lightly shaded regions within the bottom-left grid region 153. Histogram bin 2134 is small due to the bottom of the torso of the person being visible within the bottom-left grid region 153. Histogram bin 2135, representing shaded white regions within the bottom-left grid region 153, is empty.

Histogram bin 2141 represents shaded black regions within the bottom-right grid region 154 of the first bounding box 112 of Fig. 1E. Histogram bin 2141 is large due to the shaded black legs of the person being visible in the bottom-right grid region 154. Histogram bin 2142 represents shaded dark regions within the bottom-right grid region 154. Histogram bin 2142 is empty. Histogram bin 2143 represents moderately shaded regions within the bottom-right grid region 154. Histogram bin 2143 is empty. Histogram bin 2144 represents lightly shaded regions within the bottom-right grid region 154. Histogram bin 2144 is small due to the bottom of the torso of the person being visible within the bottom-right grid region 154. Histogram bin 2145 represents shaded white regions within the bottom-right grid region 154. Histogram bin 2145 is empty.

Now, consider the second view of the person 131 of Fig. 1C, which is shown enclosed in a second bounding box 132. Fig. 1F shows the second bounding box 132 divided into quarters: a top-left grid region 161, a top-right grid region 162, a bottom-left grid region 163, and a bottom-right grid region 164. A luminance histogram is independently computed for each grid region of the second bounding box 132. The luminance histograms are concatenated to form a single luminance histogram for the entire object corresponding to the second person 131. Fig. 2B shows a single luminance histogram 2200 that represents a signature for the second person 131, or target object. The luminance histogram 2200 includes a number of bins, wherein each bin represents a level of luminance.

Histogram bin 2211 of the histogram 2200 represents shaded black regions within the top-left grid region 161 of the second bounding box 132 of Fig. 1F. Histogram bin 2211 is empty. Histogram bin 2212 represents shaded dark regions within the top-left grid region 161. Histogram bin 2212 is empty. Histogram bin 2213 represents moderately shaded regions within the top-left grid region 161. Histogram bin 2213 is empty. Histogram bin 2214 represents lightly shaded regions within the top-left grid region 161. Histogram bin 2214 is small due to the arm of the person being visible in the top-left grid region 161. Histogram bin 2215 represents shaded white regions within the top-left grid region 161. Histogram bin 2215 is large due to the torso and head of the person being visible within the top-left grid region 161.

Histogram bin 2221 represents shaded black regions within the top-right grid region 162 of the second bounding box 132 of Fig. 1F. Histogram bin 2221 is empty. Histogram bin 2222 represents shaded dark regions within the top-right grid region 162. Histogram bin 2222 is empty. Histogram bin 2223 represents moderately shaded regions within the top-right grid region 162. Histogram bin 2223 is empty. Histogram bin 2224 represents lightly shaded regions within the top-right grid region 162. Histogram bin 2224 is small due to the arm of the person being visible in the top-right grid region 162. Histogram bin 2225 represents shaded white regions within the top-right grid region 162.
Histogram bin 2225 is large due to the torso and head of the person being visible within the top-right grid region 162.

Histogram bin 2231 represents shaded black regions within the bottom-left grid region 163 of the second bounding box 132 of Fig. 1F. Histogram bin 2231 is large due to the shaded black legs of the person being visible in the bottom-left grid region 163. Histogram bin 2232 represents shaded dark regions within the bottom-left grid region 163. Histogram bin 2232 is empty. Histogram bin 2233 represents moderately shaded regions within the bottom-left grid region 163. Histogram bin 2233 is empty. Histogram bin 2234 represents lightly shaded regions within the bottom-left grid region 163. Histogram bin 2234 is small due to some of the arm of the person being visible in the bottom-left grid region 163. Histogram bin 2235 represents shaded white regions within the bottom-left grid region 163. Histogram bin 2235 is small due to the bottom of the torso of the person being visible within the bottom-left grid region 163.

Histogram bin 2241 represents shaded black regions within the bottom-right grid region 164 of the second bounding box 132 of Fig. 1F. Histogram bin 2241 is large due to the shaded black legs of the person being visible in the bottom-right grid region 164. Histogram bin 2242 represents shaded dark regions within the bottom-right grid region 164. Histogram bin 2242 is empty. Histogram bin 2243 represents moderately shaded regions within the bottom-right grid region 164. Histogram bin 2243 is empty. Histogram bin 2244 represents lightly shaded regions within the bottom-right grid region 164. Histogram bin 2244 is small due to some of the arm of the person being visible in the bottom-right grid region 164. Histogram bin 2245 represents shaded white regions within the bottom-right grid region 164. Histogram bin 2245 is small due to the bottom of the torso of the person being visible within the bottom-right grid region 164.

In this example, signatures for the first person 111 of Fig. 1A and the second person 131 of Fig. 1C were determined by utilising the first bounding box 112 associated with the first person 111 and the second bounding box 132 associated with the second person 131. The method for determining a signature for an object in a frame can utilise any portion of the frame, wherein the portion is less than the frame itself, and is not limited to utilising a bounding box associated with the object. For example, a signature for an object may be determined solely from foreground information of a spatial region associated with that object.
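The minimal sketch below illustrates this grid-based signature, assuming a 2x2 grid over the bounding box, an 8-bit luminance image, and five bins per grid region to mirror the black, dark, moderate, light and white levels of the example; the function name and the per-region normalisation are assumptions for the example only.

```python
# A minimal sketch, assuming a 2x2 grid over the bounding box and five luminance
# bins per grid region (mirroring the black/dark/moderate/light/white example levels).
import numpy as np

def grid_signature(luma, bbox, bins=5):
    """Concatenated per-region luminance histograms for bbox = (x, y, width, height)."""
    x, y, w, h = bbox
    patch = luma[y:y + h, x:x + w]
    half_h, half_w = h // 2, w // 2
    regions = [patch[:half_h, :half_w], patch[:half_h, half_w:],   # top-left, top-right
               patch[half_h:, :half_w], patch[half_h:, half_w:]]   # bottom-left, bottom-right
    histograms = []
    for region in regions:
        hist, _ = np.histogram(region, bins=bins, range=(0, 256))
        histograms.append(hist / max(hist.sum(), 1))               # normalise each grid region
    return np.concatenate(histograms)
```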
Location-based brightness transfer functions (LBBTFs)

In one arrangement, the brightness transfer function is a Location-Based Brightness Transfer Function. Fig. 2C shows a histogram 2300 that illustrates the result of applying a Location-Based Brightness Transfer Function (LBBTF) to the histogram 2200 of Fig. 2B. In one arrangement, the LBBTF operates on an input histogram and produces an output histogram, by assigning an output histogram bin to each input histogram bin. For example, an LBBTF is applied to the input histogram 2200 to produce the output histogram 2300. This represents a change in the luminance characteristics represented by the input histogram 2200 and the output histogram 2300. In this example, the histogram 2100 and histogram 2200 are compared to a set of stored histogram pairs. If the histogram 2100 and histogram 2200 match a stored histogram pair, within a predefined tolerance threshold, an LBBTF associated with the matched stored histogram pair is selected to be applied to either one or both of the histogram 2100 and the histogram 2200.

Input histogram bin 2211, representing shaded black regions in the top-left grid region 161 of Fig. 1F, is assigned output histogram bin 2311, which also represents shaded black regions. Input histogram bin 2212, representing shaded dark regions in the top-left grid region 161, is also assigned output histogram bin 2311. Input histogram bin 2213, representing moderately shaded regions in the top-left grid region 161, is assigned output histogram bin 2312, representing shaded dark regions in the top-left grid region 161. Input histogram bin 2214, representing lightly shaded regions in the top-left grid region 161, is assigned output histogram bin 2313, representing moderately shaded regions in the top-left grid region 161. Input histogram bin 2215, representing shaded white regions in the top-left grid region 161, is assigned output histogram bin 2314, representing lightly shaded regions in the top-left grid region 161. No bin is assigned output histogram bin 2315, representing shaded white regions in the top-left grid region 161 and, accordingly, output histogram bin 2315 is empty.

The LBBTF is applied similarly to histogram bins corresponding to other grid regions of the second bounding box 132 of Fig. 1F. In the top-right grid region 162, input histogram bins 2221 and 2222 are assigned output histogram bin 2321. Input histogram bin 2223 is assigned output histogram bin 2322. Input histogram bin 2224 is assigned output histogram bin 2323. Input histogram bin 2225 is assigned output histogram bin 2324. Output histogram bin 2325 is empty.

In the bottom-left grid region 163, input histogram bins 2231 and 2232 are assigned output histogram bin 2331. Input histogram bin 2233 is assigned output histogram bin 2332. Input histogram bin 2234 is assigned output histogram bin 2333. Input histogram bin 2235 is assigned output histogram bin 2334. Output histogram bin 2335 is empty.

In the bottom-right grid region 164, input histogram bins 2241 and 2242 are assigned output histogram bin 2341. Input histogram bin 2243 is assigned output histogram bin 2342. Input histogram bin 2244 is assigned output histogram bin 2343. Input histogram bin 2245 is assigned output histogram bin 2344. Output histogram bin 2345 is empty.

The LBBTF in this example produces a darkening effect, as if the output signature 2300 was computed from an image recorded by a camera with a darker setting than was actually the case. The resulting output signature 2300 is more similar to the signature 2100 of the person in the sunlight than the original signature 2200 recorded with different camera settings. Therefore, a system comparing the LBBTF-applied signature 2300 to the signature 2100 provides more robustness than comparing the original signature 2200 to the signature 2100.

In the above arrangement, the LBBTF is a mapping between histogram bins. In another arrangement, the LBBTF is a scalar s that indicates a scaling of brightness, for example, the brightness scale s = 0.75. Also, let the histogram bins for each grid region be indexed in terms of brightness. For example, histogram bin 2211 is index 0, histogram bin 2212 is index 1, histogram bin 2213 is index 2, histogram bin 2214 is index 3, and histogram bin 2215 is index 4.
Then, for a source bin of index i, the corresponding output bin index is equal to the input bin index multiplied by the brightness scale, o = i * s. Where the resulting value of o is an integer, for example, if i = 4 and s = 0.75 then o = 3, the contents of the input bin with index i are assigned to the output bin with index o. Where the resulting value of o is not an integer, for example, using i = 3 and s = 0.75 gives o = 2.25, the contents of the input bin are distributed amongst two output histogram bins. The first output histogram bin has an index equal to the floor of o, for example, floor(2.25) = 2. The second output histogram bin has an index equal to the ceiling of o, for example, ceil(2.25) = 3. The content of the input histogram bin assigned to the first output histogram bin is equal to (ceil(o) - o) of the content of the input histogram bin, which in this example is (3 - 2.25) = 0.75 of the content of the input histogram bin. The content of the input histogram bin assigned to the second output histogram bin is equal to (o - floor(o)) of the content of the input histogram bin, which in this example is (2.25 - 2) = 0.25 of the content of the input histogram bin. If the index of the output histogram bin is greater than the maximum index, all of the content of the input histogram bin is assigned to the output histogram bin with the maximum index. If the index of the output histogram bin is less than the minimum index, all of the content of the input histogram bin is assigned to the output histogram bin with the minimum index.

In the example described above with reference to Figs 2A to 2C, a single LBBTF was used to operate on the histogram bins for all grid regions. In another arrangement, a different LBBTF is used to operate on each grid region, or a plurality of grid regions, or combinations thereof. That is to say, in one embodiment a first LBBTF operates on the histogram bins 2211, 2212, 2213, 2214 and 2215 belonging to the top-left grid region 161, a second LBBTF operates on the histogram bins 2221, 2222, 2223, 2224 and 2225 belonging to the top-right grid region 162, a third LBBTF operates on the histogram bins 2231, 2232, 2233, 2234 and 2235 belonging to the bottom-left grid region 163, and a fourth LBBTF operates on the histogram bins 2241, 2242, 2243, 2244 and 2245 belonging to the bottom-right grid region 164. Applying different LBBTFs to each grid region can provide more accuracy when a lighting change affects only part of an object. Depending on the particular application, one or more of the first, second, third, and fourth LBBTFs in the preceding embodiment may be the same LBBTF.

In contrast to the previous embodiments, in which one or more LBBTFs were applied to the histograms, an alternative approach is to modify the source images to produce brightness-adjusted images and then compute brightness-adjusted signatures from those brightness-adjusted images. Accordingly, in one arrangement, the LBBTF is applied by directly modifying the luminance values in the original images. Then, the signatures of the objects are computed from the brightness-adjusted images. Although in this arrangement the BTF is applied globally, the selection of the BTF to apply is dependent on the location of the objects within the original images.
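The following minimal sketch applies the scalar brightness-scale arrangement described above to a single grid region's histogram. The clamping of out-of-range indices follows the text, while the function name is an illustrative assumption.

```python
# A minimal sketch of the scalar brightness-scale LBBTF described above, applied to
# one grid region's histogram. Names are illustrative assumptions.
import math
import numpy as np

def apply_scalar_lbbtf(histogram, s):
    """Map input bin i to output index o = i * s, splitting fractional contributions."""
    n = len(histogram)
    output = np.zeros(n)
    for i, content in enumerate(histogram):
        o = i * s
        lo = min(max(math.floor(o), 0), n - 1)   # clamp to the minimum/maximum bin index
        hi = min(max(math.ceil(o), 0), n - 1)
        if lo == hi:                             # o is an integer, or was clamped at an extreme
            output[lo] += content
        else:
            output[lo] += (hi - o) * content     # (ceil(o) - o) of the content to the lower bin
            output[hi] += (o - lo) * content     # (o - floor(o)) of the content to the upper bin
    return output
```

For example, with s = 0.75 and five bins, the content of bin 3 is split as 0.75 to bin 2 and 0.25 to bin 3, matching the worked example above.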
One method of determining whether a source object and a target object relate to a single object is to compute a visual difference between signatures associated with the source object and the target object. To compute the visual difference between two signatures, the histograms representing each signature are considered as vectors. The visual difference is then the Euclidean distance between the vectors. In other words, given two histograms each with n bins, if corresponding bins from each histogram are denoted p_i and q_i, the visual difference is:

    visual difference = sqrt( Σ_{i=1}^{n} (p_i - q_i)^2 )    ... (1)

In one arrangement, the vectors are firstly normalised to be unit vectors according to the Manhattan norm. As a result, the visual difference is then given by:

    visual difference = sqrt( Σ_{i=1}^{n} ( p_i / Σ_{j=1}^{n} p_j  -  q_i / Σ_{j=1}^{n} q_j )^2 )    ... (2)

In yet another arrangement, rather than applying the LBBTF to the signatures and then computing the visual difference, the LBBTF is applied within the step of computing the visual difference. In yet another arrangement, where the LBBTF is a scalar s, to determine the contribution of the bin j to the LBBTF-adjusted bin i, a weighting function w(i, j) is used:

    visual difference = sqrt( Σ_{i=1}^{n} ( p_i - Σ_{j=1}^{n} w(i, j) q_j )^2 )    ... (3)

where

    w(i, j) = 1 - (j - s*i),  if j > s*i and j - s*i < 1;
    w(i, j) = 1 - (s*i - j),  if s*i > j and s*i - j < 1;
    w(i, j) = 0,  otherwise.
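The sketch below implements the normalised difference of equation (2) and the LBBTF-adjusted difference of equation (3) directly, assuming the signatures are NumPy vectors; the small guard against empty histograms is an added assumption.

```python
# A minimal sketch of equations (2) and (3), assuming signatures stored as NumPy vectors.
import numpy as np

def visual_difference(p, q):
    """Equation (2): Euclidean distance between Manhattan-normalised histograms."""
    p_hat = p / max(p.sum(), 1e-12)   # guard against empty histograms (an added assumption)
    q_hat = q / max(q.sum(), 1e-12)
    return float(np.sqrt(np.sum((p_hat - q_hat) ** 2)))

def w(i, j, s):
    """The weighting function w(i, j) of equation (3) for a scalar LBBTF s."""
    if j > s * i and j - s * i < 1:
        return 1.0 - (j - s * i)
    if s * i > j and s * i - j < 1:
        return 1.0 - (s * i - j)
    return 0.0

def visual_difference_with_lbbtf(p, q, s):
    """Equation (3): the scalar LBBTF is applied within the difference computation."""
    n = len(p)
    adjusted = np.array([sum(w(i, j, s) * q[j] for j in range(n)) for i in range(n)])
    return float(np.sqrt(np.sum((p - adjusted) ** 2)))
```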
These associations are joined together temporally to form tracks. Then, for each track beginning from the third frame, an expectation is produced, for example, based on the velocity of the tracked object in the two previous frames. Each expectation is compared to the corresponding detection in the same frame of the training set to determine the difference of each component, e.g., the differences in horizontal location, vertical location, width and height. From these differences, statistical variances can be computed representing the error in each component. The statistical variance \sigma_x^2 is the variance of the horizontal difference between the centre of the detection and the centre of the expectation. In one arrangement, \sigma_x^2 is computed by first determining the difference between the horizontal location of the expectation and the horizontal location of the detection. This step is repeated for multiple associated detections and expectations. Then, each difference is squared, and the squares are summed. Finally, the sum of the squares is divided by the number of differences. The statistical variance \sigma_y^2 of the vertical difference is computed in a similar manner, using the differences in the vertical locations. The statistical variance \sigma_w^2 of the difference in the width is computed in a similar manner, using the differences in widths. The statistical variance \sigma_h^2 of the difference in the height is computed in a similar manner, using the differences in heights.

Then, given the predetermined variances, the spatial difference s may be computed using the Kalman gating function:

s = \dfrac{(x_{detection} - x_{expectation})^2}{\sigma_x^2} + \dfrac{(y_{detection} - y_{expectation})^2}{\sigma_y^2} + \dfrac{(w_{detection} - w_{expectation})^2}{\sigma_w^2} + \dfrac{(h_{detection} - h_{expectation})^2}{\sigma_h^2}    ... (4)

The spatial difference is small if the detection and the expectation are similar spatially, and large if the detection and the expectation are dissimilar spatially. The spatial difference has some important properties. Statistically, the difference between the expectation and the corresponding detection should be within approximately one standard deviation. Dividing the square of each component's difference by the corresponding variance scales the error such that each component contributes 1.0 unit to the spatial difference. The calculated spatial difference should therefore be less than the number of measured components if the detection corresponds to the expectation. In this arrangement, the number of measured components is 4.0, which serves as a predetermined spatial difference threshold.

The visual difference between the track's signature and the detection's signature is scaled such that the visual difference should be less than a predetermined visual difference threshold for a valid combination of the track's signature and the detection's signature. In one arrangement, the predetermined visual difference threshold is 1.0. The visual difference is computed after applying the LBBTF.

To compute a combined difference between a detection and the expectation of a track, the spatial difference is added to the visual difference. That is:

combined\_difference = spatial\_difference + visual\_difference    ... (5)

The combined difference should be less than a predetermined combined difference threshold, for example 5.0, for a valid combination of a track and a detection.
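As a hedged sketch of the differences described above, the following Python fragment combines the Manhattan-normalised visual difference of Equation (2), the Kalman gating function of Equation (4) and the combined difference of Equation (5). The histograms are assumed to have already been brightness-adjusted by the LBBTF, the variances are assumed to come from the training procedure described above, and the class, function and parameter names are illustrative only.

```python
import math
from dataclasses import dataclass

@dataclass
class Box:
    x: float       # centre x-coordinate
    y: float       # centre y-coordinate
    width: float
    height: float

def visual_difference(p, q):
    """Equation (2): Euclidean distance between Manhattan-normalised
    histograms p and q (lists with the same number of bins)."""
    p_sum, q_sum = sum(p), sum(q)
    return math.sqrt(sum((pi / p_sum - qi / q_sum) ** 2
                         for pi, qi in zip(p, q)))

def spatial_difference(detection, expectation, variances):
    """Equation (4): Kalman gating over centre position, width and height.

    `variances` holds the predetermined training-set variances
    (var_x, var_y, var_w, var_h)."""
    var_x, var_y, var_w, var_h = variances
    return ((detection.x - expectation.x) ** 2 / var_x +
            (detection.y - expectation.y) ** 2 / var_y +
            (detection.width - expectation.width) ** 2 / var_w +
            (detection.height - expectation.height) ** 2 / var_h)

def is_valid_association(detection, expectation, det_hist, track_hist,
                         variances, combined_threshold=5.0):
    """Equation (5): sum the two differences and gate on a threshold."""
    combined = (spatial_difference(detection, expectation, variances) +
                visual_difference(det_hist, track_hist))
    return combined < combined_threshold, combined
```

Gating on the combined difference in this way allows a visually similar but spatially distant detection, or a spatially close but visually dissimilar detection, to be rejected with a single threshold.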
2355577 LDOC IRN: 905836 905836_specification -23 Computer implementation Figs 3A and 3B collectively form a schematic block diagram of a general purpose computer system 300, upon which the various arrangements described can be practised. As seen in Fig. 3A, the computer system 300 is formed by a computer module 301, input devices such as a keyboard 302, a mouse pointer device 303, a scanner 326, a camera 327, and a microphone 380, and output devices including a printer 315, a display device 314 and loudspeakers 317. An external Modulator-Demodulator (Modem) transceiver device 316 may be used by the computer module 301 for communicating to and from a communications network 320 via a connection 321. The network 320 may be a wide-area I network (WAN), such as the Internet, or a private WAN. Where the connection 321 is a telephone line, the modem 316 may be a traditional "dial-up" modem. Alternatively, where the connection 321 is a high capacity (e.g., cable) connection, the modem 316 may be a broadband modem. A wireless modem may also be used for wireless connection to the network 320. The computer module 301 typically includes at least one processor unit 305, and a memory unit 306 for example formed from semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The module 301 also includes an number of input/output (1/0) interfaces including an audio-video interface 307 that couples to the video display 314, loudspeakers 317 and microphone 380, an 1/0 interface 313 for the o keyboard 302, mouse 303, scanner 326, camera 327 and optionally a joystick (not illustrated), and an interface 308 for the external modem 316 and printer 315. In some implementations, the modem 316 may be incorporated within the computer module 301, for example within the interface 308. The computer module 301 also has a local network interface 311 which, via a connection 323, permits coupling of the computer system 300 to a local computer network 25 322, known as a Local Area Network (LAN). As also illustrated, the local network 322 may also couple to the network 320 via a connection 324, which would typically include a so called "firewall" device or device of similar functionality. The interface 311 may be formed by an EthernetTM circuit card, a Bluetoothm wireless arrangement or an IEEE 802.11 wireless arrangement. 30 The interfaces 308 and 313 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage 2355577 LDOC IRN: 905836 905836_specification - 24 devices 309 are provided and typically include a hard disk drive (HDD) 310. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 312 is typically provided to act as a non-volatile source of data. Portable memory devices, such optical disks (e.g., CD-ROM, DVD), USB-RAM, and floppy disks, for example, may then be used as appropriate sources of data to the system 300. The components 305 to 313 of the computer module 301 typically communicate via an interconnected bus 304 and in a manner which results in a conventional mode of operation of the computer system 300 known to those in the relevant art. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple MacTM, or alike computer systems evolved therefrom. 
The method of determining a correspondence between a source object in a first video frame and a target object in a second video frame may be implemented using the computer system 300 wherein the processes of Figs 1, 2, and 4 to 7 may be implemented as one or more software application programs 333 executable within the computer system 300. In particular, s the steps of the method of determining a correspondence between a source object in a first video frame and a target object in a second video frame are effected by instructions 331 in the software 333 that are carried out within the computer system 300. The software instructions 331 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a o first part and the corresponding code modules perform the method for determining a correspondence between a source object in a first video frame and a target object in a second video frame, and a second part and the corresponding code modules manage a user interface between the first part and the user. The software 333 is generally loaded into the computer system 300 from a computer 25 readable medium, and is then typically stored in the HDD 310, as illustrated in Fig. 3A, or the memory 306, after which the software 333 can be executed by the computer system 300. In some instances, the application programs 333 may be supplied to the user encoded on one or more CD-ROMs 325 and read via the corresponding drive 312 prior to storage in the memory 310 or 306. Alternatively, the software 333 may be read by the computer system 300 30 from the networks 320 or 322 or loaded into the computer system 300 from other computer readable media. Computer readable storage media refers to any storage medium that participates in providing instructions and/or data to the computer system 300 for execution 2355577_LDOC IRN: 905836 905836_specification - 25 and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD ROM, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 301. Examples of computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 301 include radio or infra-red transmission channels, as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like. The second part of the application programs 333 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 314. Through manipulation of typically the keyboard 302 and the mouse 303, a user of the computer system 300 and the application may manipulate the interface in a functionally adaptable manner to provide 5 controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 317 and user voice commands input via the microphone 380. Fig. 3B is a detailed schematic block diagram of the processor 305 and a 0 "memory" 334. 
The memory 334 represents a logical aggregation of all the memory devices (including the HDD 310 and semiconductor memory 306) that can be accessed by the computer module 301 in Fig.3A. In one implementation, the processor 305 and memory 334 form part of a camera system for capturing a first video frame and a second video frame and then determining a correspondence between a source object in the first video frame and a 25 target object in the second video frame. When the computer module 301 is initially powered up, a power-on self-test (POST) program 350 executes. The POST program 350 is typically stored in a ROM 349 of the semiconductor memory 306. A program permanently stored in a hardware device such as the ROM 349 is sometimes referred to as firmware. The POST program 350 examines hardware 30 within the computer module 301 to ensure proper functioning, and typically checks the processor 305, the memory (309, 306), and a basic input-output systems software (BIOS) module 351, also typically stored in the ROM 349, for correct operation. Once the POST 2355577 LDOC IRN: 905836 905836_specification - 26 program 350 has run successfully, the BIOS 351 activates the hard disk drive 310. Activation of the hard disk drive 310 causes a bootstrap loader program 352 that is resident on the hard disk drive 310 to execute via the processor 305. This loads an operating system 353 into the RAM memory 306 upon which the operating system 353 commences operation. The operating system 353 is a system level application, executable by the processor 305, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface. The operating system 353 manages the memory (309, 306) in order to ensure that each process or application running on the computer module 301 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 300 must be used properly so that each process can run effectively. Accordingly, the aggregated memory 334 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 300 and how such is used. The processor 305 includes a number of functional modules including a control unit 339, an arithmetic logic unit (ALU) 340, and a local or internal memory 348, sometimes called a cache memory. The cache memory 348 typically includes a number of storage registers 344 - 346 in a register section. One or more internal buses 341 functionally interconnect these functional modules. The processor 305 typically also has one or more interfaces 342 for communicating with external devices via the system bus 304, using a connection 318. The application program 333 includes a sequence of instructions 331 that may include conditional branch and loop instructions. The program 333 may also include data 332 5 which is used in execution of the program 333. The instructions 331 and the data 332 are stored in memory locations 328-330 and 335-337 respectively. Depending upon the relative size of the instructions 331 and the memory locations 328-330, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 330. 
Alternately, an instruction may be segmented into a number of parts each of 3o which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 328-329. 2355577_LDOC IRN: 905836 905836_specification - 27 In general, the processor 305 is given a set of instructions which are executed therein. The processor 305 then waits for a subsequent input, to which it reacts to by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 302, 303, data received from an external source across one of the networks 320, 322, data retrieved from one of the storage devices 306, 309 or data retrieved from a storage medium 325 inserted into the corresponding reader 312. In one implementation, the input is provided by one or more camera sensors, each of which is associated with a corresponding lens system. In one embodiment, the lens system is external to the camera sensor. In an alternative embodiment, the lens system is integral to the camera system. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 334. The video object fragmentation detection arrangements disclosed herein use input variables 354, that are stored in the memory 334 in corresponding memory locations 355-358. The video object fragmentation detection arrangements produce output variables 361, that are stored in the memory 334 in corresponding memory locations 362-365. Intermediate variables may be stored in memory locations 359, 360, 366 and 367. The register section 344-346, the arithmetic logic unit (ALU) 340, and the control unit 339 of the processor 305 work together to perform sequences of micro-operations needed to perform "fetch, decode, and execute" cycles for every instruction in the instruction set making up the program 333. Each fetch, decode, and execute cycle comprises: (a) a fetch operation, which fetches or reads an instruction 331 from a memory location 328; (b) a decode operation in which the control unit 339 determines which instruction has been fetched; and 25 (c) an execute operation in which the control unit 339 and/or the ALU 340 execute the instruction. Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 339 stores or writes a value to a memory location 332. 30 Each step or sub-process in the processes of Figs 1, 2, and 4 to 7 is associated with one or more segments of the program 333, and is performed by the register section 344-347, the ALU 340, and the control unit 339 in the processor 305 working together to perform the 2355577_ .DOC IRN: 905836 905836_specification - 28 fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 333. 
The method of determining a correspondence between a source object in a first video frame and a target object in a second video frame may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub functions of determining a brightness transfer function, modifying at least one of a source signature and a target signature to produce a modified source signature and a modified target signature, computing a similarity measure between the modified source signature and the modified target signature, and determining a correspondence between the source object and the target object, based on the similarity measure. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories, or a camera incorporating one or more of these components. Tracking within a single camera 5 Fig. 4A is a flow diagram 400 that illustrates an object tracking process. The process 400 begins with start module 405. Control passes to detecting step 410, in which an object detection module receives a video frame and detects foreground objects within the video frame. In one arrangement, the object detection module stores and updates a background model, and foreground objects are detected using background subtraction. o Next, control passes to hypotheses forming step 420, in which an association hypotheses forming module compares the detections to existing tracks, and creates and outputs association hypotheses describing possible matches between detections and tracks. Each association hypothesis has a combined difference score, computed as described previously. Fig. 4B illustrates in detail the functionality of association hypotheses forming 25 module of hypotheses forming step 420 and will be described later. The association hypotheses formed in step 420 are received in data association step 430 by a data association module, which acts upon the hypotheses to perform data association. In one arrangement, the Global Nearest Neighbour (GNN) approach is used to reduce the set of association hypotheses. Global Nearest Neighbour is an iterative, greedy algorithm that, in 30 this application, selects the association hypothesis with the lowest combined difference from the input set and places the selected association hypothesis in an optimal set. All other association hypotheses that contain the same track or the same detection represented by the 2355577 [.DOC IRN: 905836 905836_specification -29 selected association hypothesis are then deleted from the input set of association hypotheses. This is because selecting at a later time those association hypotheses that contain the same track or the same detection represented by the selected association hypothesis would create contradictions by matching multiple tracks to a single detection, or multiple detections to a single track. An alternative approach is to evaluate every possible combination of association hypotheses to find procedurally an optimal non-contradictory subset (according to the combined difference). Evaluating every possible combination of association hypotheses can be computationally expensive. Data association is performed on a non-contradictory subset of association hypotheses that is a subset of the association hypotheses resulting from the association hypothesis forming module in hypotheses forming step 420. 
In the non contradictory subset of association hypotheses, each detection appears in at most one association hypothesis and each track appears in at most one association hypothesis. The data association module in data association step 430 then operates on each association hypothesis 5 in the non-contradictory subset in turn. For each association hypothesis, the detection stored within the association hypothesis is associated with the track stored within the association hypothesis. Data association module in data association step 430 also processes tracks and detections that were not stored in any association hypothesis. For each detection not stored in o an association hypothesis, a new track is formed that is associated with the detection. The new track is processed normally in the next frame. For each track not stored in an association hypothesis, the data association module in step 430 examines how many frames have passed since a detection was associated with the track. If a number of frames have passed since a detection was associated with the track, say 5 frames, the track is killed and is not processed 25 in further frames. If the expectation was outside of the frame boundaries and no detection was associated with the track, the track is killed and is not processed in further frames. Otherwise, a prediction is applied to the track and the track is eligible for processing in the next frame. Upon the termination of the data association module in step 430, the object tracking process for one frame is complete and the process 400 moves to an end state 499. 30 Fig. 4B is a flow diagram that illustrates in detail the functionality performed by the association hypotheses forming module in step 420 of Fig. 4. A track selection module in track selecting step 421 selects a track 422 not yet processed in this instance of executing 2355577 LDOC IRN: 905836 905836_specification -30 association hypotheses forming module 420. In detection selecting step 423, the process utilises a detection selection module to select a detection 424 that has not yet processed in combination with the track 422. A combined difference computing module in computing step 425 computes a combined difference between the track 422 and the detection 424. A combined difference thresholder in decision step 426 receives the combined difference and determines whether the combined difference is greater than the predetermined combined difference threshold. If the combined difference is greater than the predetermined combined difference threshold and there is no valid difference, No, the detection 424 cannot be matched to the track 422 and control passes to a further detection decision module in checking step 428. If the combined difference is less than or equal to the combined difference threshold and there is a valid difference, Yes, control passes from decision step 426 to a hypothesis adding module in adding step 427. The hypothesis adding module in step 427 forms a new hypothesis to be passed to a data association module in data association step 430. The new hypothesis s represents a potential association between the track 422 and the detection 424. The hypothesis adding module 427 passes control to the further detection decision module in step 428. The combined difference computing module in computing step 425 returns a combined difference, incorporating the spatial difference and the visual difference. In one D arrangement, the visual difference incorporates applying a LBBTF. 
In one arrangement, the LBBTF is derived directly from the spatial parameters of the detection 424 and the expectation of the track 422. For example, a predetermined lighting map may store expected brightness values for each location. Given the detection 424 and the expectation of the track 422, the relative change in brightness, and hence the LBBTF, may be determined from the 25 lighting map. Further detection decision module in step 428 determines whether there are any detections yet to be processed in combination with the track 422. If there are more detections to be processed in combination with the track 422, Yes, control returns to detection selection module 423. Otherwise, if there are no more detections to be processed in combination with 30 the track 422, No, control passes to a further track detection module in step 429. The further track decision module in step 429 determines whether there are any tracks yet to be processed by association hypotheses forming module 420. If there are more 2355577 LDOC IRN: 905836 905836_specification -31 tracks to be processed in this frame, Yes, control returns to the track selection module in step 421. Otherwise, if there are no more tracks to be processed in this frame, No, the processing by the association hypotheses forming module 420 is complete and control passes to data association module in step 430. Tracking with multiple cameras Tracking systems that operate over multiple cameras aim to recognise the same object as viewed by different cameras. In one arrangement, for each object a single track is maintained in a common coordinate system and inputs from all cameras are integrated to maintain this single track. In another arrangement, each camera maintains its own set of tracks and corresponding tracks from each camera are determined by a central track linker. Fig. 5 is a flow diagram 500 that illustrates a process used by a track linker for determining corresponding tracks from each camera in a system with multiple cameras. Start module 505 is triggered immediately after the object tracking process illustrated in Fig. 4A 5 has been completed for each camera of the tracking system. Within the track linker, a source track selection module in step 510 selects a source track 511. The source track 511 is a currently-active track that was updated in the current frame by the object tracking process. The source track 511 may be maintained by any camera. Next, the source track 511 is presented to a historical track selection module in step > 520, which selects an historical track 521. The historical track is a track maintained by any camera. The historical track 521 cannot be the same track as the selected source track. The historical track 521 may be a currently-active track or a track that was killed previously. In one arrangement, the historical track 521 must have been an active track within a maximum number of preceding dormant frames. In one arrangement, the maximum number of dormant 25 frames is a constant, for example, 500 frames. In another arrangement, the maximum number of dormant frames is dependent on the frame rate, for example, the number of frames recorded in 20 seconds. In one arrangement, the frame rate is constant. In another arrangement, the frame rate is variable. The source track 511 and the historical track 521 are then provided to a visual 30 similarity computing module in step 530. The visual similarity computing module 530 is illustrated in detail in Fig. 6A and is described later. 
The visual similarity computing module 2355577 LDOC IRN: 905836 905836_specification - 32 530 outputs a visual similarity score 531 based on the similarity of the visual appearance of the objects represented by the source track 511 and the historical track 521. The source track 511 and the historical track 521 are also provided from step 520 to a spatio-temporal similarity computing module in step 535. The spatio-temporal similarity computing module 535 is illustrated in detail in Fig. 6B and is described later. The spatio-temporal similarity computing module 535 outputs a spatio-temporal similarity score 536 based on the similarity of the location of the objects and the temporal difference between observations stored in the source track 511 and the historical track 521. A combined similarity decision module in step 540 receives the visual similarity score 531 and the spatio-temporal similarity score 536. In one arrangement, a combined similarity score is computed by summing the visual similarity score and the spatio-temporal similarity score. In another arrangement, a combined similarity score is computed by multiplying the visual similarity score and the spatio-temporal similarity score. If the combined similarity score is greater than a predetermined combined similarity threshold, Yes, 5 the source track 511 is considered to be a potential match to the historical track 521 and control passes to an evidence accumulation module in step 550. If the combined similarity score is less than or equal to the combined similarity threshold, No, control passes to a remaining historical track determination module in step 580. In one arrangement, the predetermined combined similarity threshold is the multiplicative inverse of the combined D difference threshold, for example, 1/5. Larger similarity scores indicate a high similarity between the source track 511 and the historical track 520. The evidence accumulation module in step 550 receives the combined similarity score. An accumulated similarity score is stored for each processed combination of a source track 511 and a historical track 521. If a combined similarity score has not previously been 25 computed for the source track 511 and the historical track 521, the received combined similarity score is used as the initial value of an accumulated similarity score. If the accumulated similarity score for the selected source track 511 and the selected historical track 521 was initialised in a previous frame, the received combined similarity score is added to the existing accumulated similarity score. 30 An accumulated evidence thresholder in step 560 receives the score from the evidence accumulation module in step 550. If the accumulated similarity score is not greater than a predetermined linking threshold, for example 5.0, No, insufficient evidence has been 2355577 1.DOC IRN: 905836 905836_specification - 33 observed to determine that the source track 511 corresponds to the historical track 521. The process then moves to remaining historical tracks decision in step 580. If the accumulated similarity score is greater than the linking threshold at step 560, Yes, sufficient evidence has been observed to determine that the source track 511 corresponds to the historical track 521, and control passes to a permanently linking tracks module in step 570. The permanently linking tracks module in step 570 marks the source track 511 and the historical track 521 as corresponding to the same real-world object. 
In future frames, the historical track selection module 520 will not select this historical track 521 as a potential match for source track 511, because it is already known to be a match. The permanently linking tracks module in step 570 then passes control to the remaining historical tracks decision in step 580. The remaining historical tracks decision 580 examines whether there are remaining historical tracks yet to be combined with the selected source track 511 in this frame. If there are remaining tracks, Yes, control returns to the historical track selection module in step 520. If there are no more remaining tracks, No, control passes to a remaining source tracks decision in step 590. The remaining source tracks decision 590 examines whether there are remaining source tracks yet to be processed in this frame. If there are remaining source tracks to be processed, Yes, control returns to the source track selection module in step 510. If there are no more remaining source tracks to be processed, No, the track linker terminates at End step 599. Fig. 6A is a flow diagram that illustrates the functionality performed by the visual similarity computing module of step 530 of Fig. 5. The signatures of a first object 601 derived from the source track 511, and a second object 602 derived from the historical track 521 are compared. The first object 601 is presented to each of a first spatial region 5 determining process 611 and a first signature determining process 631. The first spatial region determining process 611 determines a first spatial region 611 approximating the location and size of the first object 601. The first signature determining process 631 determines a first signature summarising the appearance of the first object 601. In one arrangement, the first spatial region 611 is a bounding box of the first object 601. In another 0 arrangement, the first spatial region 611 is a bounding box of another object viewed by the same camera in either the same or earlier frames. The first spatial region 611 exhibits significant spatial similarity to the bounding box of the first object 601 according to an area 2355577 LDOC IRN: 905836 905836_specification -34 ratio threshold, to be described in detail later. In one arrangement, the ratio of the area of the first spatial region 611 to the bounding box of the first object 601 must be greater than a predetermined first area threshold, for example 0.5. For example, in first frame 110 in Fig. 1 A, the spatial region 113 is similar to the bounding box 112 of the person 111. Similarly, in third frame 130 of Fig. IC, the spatial region 133 is similar to the bounding box 132 of the person 131. The second object 602 is presented to each of a second spatial region determining process in step 612 and a second signature determining process in step 632. The second spatial region determining process 612 determines a second spatial region approximating the location and size of the second object 602. The second signature determining process 632 determines a second signature 632 summarising the appearance of the second object. The derivation of the second spatial region 612 from the second object 602 is performed in the same manner described above for deriving the first spatial region 611 from the first object 601. Each of the first spatial region and the second spatial region, determined in steps 611 and 612, respectively, is presented to a region mapping lookup module in step 620. 
The region mapping lookup module examines whether the first and second spatial regions have previously been received in combination, and outputs an LBBTF 621. In one arrangement, if the regions have previously been received in combination, the LBBTF 621 is the LBBTF corresponding to those regions. If the regions have not previously been received in combination, the LBBTF 621 is a default LBBTF. In one arrangement, the default LBBTF is a scalar indicating a linear scaling of brightness values. In one arrangement, the scalar is 1.0. In another arrangement, the scalar is derived from the ratio of the mean brightness values of the first signature 631 and the second signature 632. In another arrangement, the default 25 LBBTF modifies the brightness values non-linearly. In one arrangement, the LBBTF incorporates a temporal aspect. For example, the lighting model of an outdoor camera may incorporate the position of the sun in the sky. Hence, in addition to considering the location and size of the first and second spatial regions 611 and 612, the LBBTF must also account for changes in the LBBTF due to the time passed since the LBBTF relating the first and second 30 spatial regions 611 and 612 was last updated. In one arrangement, an LBBTF incorporating a temporal aspect is an array of scalars. In one arrangement, each scalar in the array contains one LBBTF to be used for a fixed period 2355577 _DOC IRN: 905836 905836_specification - 35 of time each, for example, for the same one-hour timeslot each day. Thus, modelling of a fixed lighting timing pattern in an office is achieved. Alternatively, in an outdoor environment, modelling of the movement of the sun throughout the day is achieved. In another arrangement, the time of day represented by each scalar in the array is dependent on 5 the day of the year. Thus, modelling of the movement of the sun in different seasons of the year is achieved. The first signature 631, the second signature 632, and the LBBTF 621 are received by an LBBTF applicator in step 640. In one arrangement, the LBBTF applicator 640 applies the LBBTF 621 to the first signature 631 only. In another arrangement, the LBBTF applicator o 640 applies the LBBTF 621 to the second signature 632 only. In yet another arrangement, the LBBTF applicator 640 applies the LBBTF to the first signature 631 and to the second signature 632, where the net effect of applying the LBBTF to both the first signature 631 and the second signature 632 is not greater than applying the LBBTF to one signature. The selection of which arrangement to use is made based on the values stored within the signature. 5 Selecting an appropriate method of applying the LBBTF is desired to avoid merging too many bins into a single bin which causes loss of data. The LBBTF applicator 640 outputs a modified first signature 641 and a modified second signature 642. Note that due to the selection of how the LBBTF applicator 640 applied the LBBTF 621, the modified first signature 641 may be identical to the first signature 631. Similarly, the modified second !o signature 642 may be identical to the second signature 632. The modified first signature 641 and the modified second signature 642 are presented to a classification module in step 650. Classification module 650 computes a visual difference between the modified first signature 641 and the modified second signature 642. 
Computing the visual difference between the modified first signature 641 and the modified second signature 642, rather than between the first signature 631 and the second signature 632, produces a visual difference that is more robust to illumination changes. If the visual difference is greater than the predetermined visual difference threshold, the first object 601 is not a match for the second object 602. If the visual difference is less than or equal to the predetermined visual difference threshold, the first object 601 is classified as a match for the second object 602. The predetermined visual difference threshold will vary, depending on the particular application. In one arrangement, a similarity measure is used in place of a visual difference. Larger values of the similarity measure represent a greater likelihood of a match. In one arrangement, the similarity measure is the multiplicative inverse of the visual difference given in Equation (1), Equation (2), or Equation (3). A predetermined similarity threshold is used in determining a match. In one arrangement, the predetermined similarity threshold is the multiplicative inverse of the visual difference threshold, for example, 1.0.

If classified as a match, the LBBTF 621 relating the first spatial region 611 and the second spatial region 612 is updated. First, the residual LBBTF between the modified first signature 641 and the modified second signature 642 is determined. In one arrangement, the residual LBBTF is the ratio of the mean brightness of the modified first signature 641 to the mean brightness of the modified second signature 642. That is, the residual LBBTF encapsulates the change in brightness remaining after applying the LBBTF applicator 640. Next, the residual LBBTF is used to update the LBBTF 621. In one arrangement, the residual LBBTF is added to the LBBTF 621. In another arrangement, the residual LBBTF is added to the history of LBBTFs, and the LBBTF 621 is recomputed as the average of all historical LBBTFs. In yet another arrangement, the residual LBBTF is added to a fixed-size array of historical LBBTFs and the LBBTF 621 is recomputed as the average of all LBBTFs in the fixed-size array. In one arrangement, there are 10 LBBTFs in the fixed-size array.

After the classification module 650 is complete, if the first object 601 was classified as a match for the second object 602, the processing of the visual similarity computation module 530 is complete. The visual similarity returned by the visual similarity computation module 530 is given by:

visual\_similarity = \begin{cases} 1 - visual\_difference, & visual\_difference < \text{visual difference threshold} \\ 0, & \text{otherwise} \end{cases}    ... (6)

If the first object 601 was not classified as a match for the second object 602, a different second object 602 is selected from a different frame of the historical track 521 and the visual similarity computing module 530 is executed again. If all frames of the historical track 521 have been processed and no matching object was found for the first object 601, a visual similarity of 0 is returned by the visual similarity computation module 530.

Fig. 6B illustrates in detail the functionality of the spatio-temporal similarity computing module 535 from Fig. 5. The spatial and temporal characteristics of a first object 661 derived from the source track 511, and a second object 662 derived from the historical track 521, are compared. From the first object 661, a first spatial region 663 approximating the location and size of the first object is derived.
In one arrangement, the first spatial region 663 has at least a 2355577 LDOC IRN: 905836 905836_specification -37 centre position (x,y), a width and a height. The derivation of the first spatial region 663 from the first object 661 is performed in the same manner described above for deriving the first spatial region 611 from the first object 601 in the visual similarity computing module 530. In one arrangement, the first spatial region 663 is the same region as the first spatial region 611. In another arrangement, the first spatial region 663 is different from the first spatial region 611. From the second object 662, a second spatial region 664 approximating the location and size of the second object is derived. In one arrangement, the second spatial region 664 has at least a centre position (x,y), a width and a height. The derivation of the second spatial region 664 from the second object 662 is performed in the same manner described above for deriving the first spatial region 663 from the first object 661. In one arrangement, the second spatial region 664 is the same region as the second spatial region 612. In another arrangement, the second spatial region 664 is different to the second spatial region 612. A region mapping 670 is created from the first spatial region 663 and the second spatial region 664. The region mapping 670 encapsulates the positions and sizes of the first spatial region 663 and the second spatial region 664, and the time elapsed between the first object 661 and the second object 662 being observed. In one arrangement, the region mapping 670 is the same as the region mapping returned by region mapping lookup module 620 within the visual similarity computing module 530. In another arrangement, the region mapping 670 is independent of the region mapping returned by region mapping lookup module 620. Historical mapping comparator 680 compares the region mapping 670 to all historical region mappings 671, in turn. Each historical region mapping contains an accumulated historical spatial similarity for a historical first spatial region and a historical 25 second spatial region. The accumulated historical spatial similarity encapsulates the number of matches and quality of matches to which the selected historical region mapping has contributed. As further matches are found and the accumulated historical spatial similarity increases, the reliability of the contributions from the selected historical region mapping increases. Each historical region mapping also maintains a set of transit times. The transit 30 time is the time elapsed since the first object 661 was observed and the second object 662 was observed. In one arrangement, the transit time is measured in frames. In another arrangement, where the frame rate varies over time, the transit time is measured in seconds. 2355577 _.DOC IRN: 905836 905836_specification -38 In one arrangement, the set of transit times contains all observed transit times for the selected historical region mapping. In another arrangement, the set of transit times is a limited number of observed transit times for the selected historical region mapping, for example 10. Historical mapping comparator 680 maintains an accumulated similarity score which is initialised to zero. For each historical region mapping 671, an individual similarity score is computed. The individual similarity score is then added to the similarity score maintained by the historical mapping comparator 680. 
In one arrangement, the individual similarity score is computed as the product of a first spatial similarity score, a second spatial similarity score and a temporal similarity score. In one arrangement, the first spatial similarity score is computed by:

first\_spatial\_similarity = \exp\!\left(-\left(\dfrac{(\Delta x_1)^2}{w_1^2} + \dfrac{(\Delta y_1)^2}{h_1^2}\right) / 2\right)    ... (7)

where \Delta x_1 is the difference in the x-coordinate of the centres of the first spatial region 663 and the historical first spatial region, \Delta y_1 is the difference in the y-coordinate of the centres of the first spatial region 663 and the historical first spatial region, w_1 is the width of the historical first spatial region and h_1 is the height of the historical first spatial region. The first spatial similarity score measures the overlap between the first spatial region 663 and the historical first spatial region using a Gaussian drop-off.

In one arrangement, the second spatial similarity score is computed by:

second\_spatial\_similarity = \exp\!\left(-\left(\dfrac{(\Delta x_2)^2}{w_2^2} + \dfrac{(\Delta y_2)^2}{h_2^2}\right) / 2\right)    ... (8)

where \Delta x_2 is the difference in the x-coordinate of the centres of the second spatial region 664 and the historical second spatial region, and similarly \Delta y_2 is the difference in the y-coordinate; w_2 is the width of the historical second spatial region and h_2 is the height of the historical second spatial region.

In one arrangement, the temporal similarity score is computed by:

temporal\_similarity = \begin{cases} 1, & \sigma_t = 0 \\ \exp\!\left(-\dfrac{(t - \mu_t)^2}{\sigma_t^2}\right), & \text{otherwise} \end{cases}    ... (9)

where t is the transit time between the first object 661 and the second object 662, and \mu_t and \sigma_t are the mean and standard deviation, respectively, of the set of transit times for the selected historical region mapping. The temporal similarity measures how consistently the observed transit time between the first object 661 and the second object 662 matches the historically observed transit times.

The historical mapping comparator 680 then adds the individual similarity score to the accumulated similarity score. After the process is performed for each pairing of the region mapping 670 with each historical region mapping in turn, the spatio-temporal similarity computation module 535 is complete. The returned spatio-temporal similarity 536 is the accumulated spatio-temporal similarity and is passed to the combined similarity decision module 540 of Fig. 5.

This process is repeated for various instances of the second object 662, where each instance of the second object 662 is selected from a different frame of the historical track 521. For each instance, the accumulated spatio-temporal similarity is reinitialised. The spatio-temporal similarity returned by the spatio-temporal similarity computation module 535 is the greatest accumulated similarity value observed for an instance of the second object 662.

The final step in the spatio-temporal similarity computing module 535 is to update the region mapping 670 used in computing the greatest accumulated similarity value observed for an instance of the second object 662. In one arrangement, that region mapping 670 is added to the historical region mappings 671. In another arrangement, if that region mapping 670 already exists in the historical region mappings 671, the existing region mapping is updated. In one aspect of updating, the transit time between the first object 661 and the second object 662 is added to the set of transit times maintained by the existing region mapping. In one arrangement, the set of transit times maintained by the existing region mapping is of a limited size and the oldest transit time is replaced by the transit time between the first object 661 and the second object 662. In another aspect of updating, a similarity score is added to the accumulated historical spatial similarity of the existing region mapping. In one arrangement, the similarity score to be added is the greatest accumulated similarity value observed for an instance of the second object 662.
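The following Python sketch illustrates how the individual and accumulated similarity scores of Equations (7) to (9) might be evaluated over a set of historical region mappings. The data structures and names are illustrative assumptions rather than requirements of the arrangements described, and Equation (9) is implemented in the form reconstructed above.

```python
import math
from dataclasses import dataclass, field
from typing import List

@dataclass
class Region:
    x: float       # centre x-coordinate
    y: float       # centre y-coordinate
    width: float
    height: float

@dataclass
class HistoricalRegionMapping:
    first: Region                       # historical first spatial region
    second: Region                      # historical second spatial region
    transit_times: List[float] = field(default_factory=list)

    def transit_stats(self):
        """Mean and standard deviation of the stored transit times."""
        if not self.transit_times:
            return 0.0, 0.0             # no history yet
        n = len(self.transit_times)
        mean = sum(self.transit_times) / n
        var = sum((t - mean) ** 2 for t in self.transit_times) / n
        return mean, math.sqrt(var)

def gaussian_overlap(region, historical):
    """Equations (7)/(8): Gaussian drop-off on the centre offset,
    scaled by the historical region's width and height."""
    dx = region.x - historical.x
    dy = region.y - historical.y
    return math.exp(-((dx / historical.width) ** 2 +
                      (dy / historical.height) ** 2) / 2.0)

def temporal_similarity(t, mean_t, std_t):
    """Equation (9): consistency of the observed transit time."""
    if std_t == 0:
        return 1.0
    return math.exp(-((t - mean_t) ** 2) / (std_t ** 2))

def accumulated_spatio_temporal_similarity(first_region, second_region,
                                           transit_time, history):
    """Sum of individual similarity scores over all historical region
    mappings, as maintained by the historical mapping comparator."""
    accumulated = 0.0
    for mapping in history:
        mean_t, std_t = mapping.transit_stats()
        individual = (gaussian_overlap(first_region, mapping.first) *
                      gaussian_overlap(second_region, mapping.second) *
                      temporal_similarity(transit_time, mean_t, std_t))
        accumulated += individual
    return accumulated
```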
System implementation

Fig. 7 shows an electronic system 705 for effecting the disclosed Location-Based Brightness Transfer Function method. Sensors 700 and 701 are used to obtain the images of the image sequence. Each sensor may represent a stand-alone sensor device (e.g., a detector or a security camera) or be part of an imaging device, such as a camera or a mobile phone. In one implementation, the electronic system 705 is a camera system and each sensor 700 and 701 includes a lens system and an associated camera module coupled to the lens system, wherein the camera module stores images captured by the lens system. In one arrangement, the pan and tilt angles and the zoom of each sensor are controlled by a pan-tilt-zoom controller 703.

The remaining electronic elements 710 to 768 may also be part of the imaging device comprising sensors 700 and 701, as indicated by dotted line 799. The electronic elements 710 to 768 may also be part of a computer system that is located either locally or remotely with respect to sensors 700 and 701. In the case indicated by dotted line 798, the electronic elements form part of a personal computer 780.

The transmission of the images from the sensors 700 and 701 to the processing electronics 720 to 768 is facilitated by an input/output interface 710, which could be a serial bus compliant with Universal Serial Bus (USB) standards and having corresponding USB connectors. Alternatively, the image sequence may be retrieved from the camera sensors 700 and 701 via Local Area Network 790 or Wide Area Network 795. The image sequence may also be downloaded from a local storage device (e.g., 770), which can include a SIM card, SD card, USB memory card, etc.

The sensors 700 and 701 are able to communicate directly with each other via sensor communication link 702. One example of sensor 700 communicating directly with sensor 701 via sensor communication link 702 is when sensor 700 maintains its own database of spatial regions and corresponding brightness values; sensor 700 can then communicate this information directly to sensor 701, or vice versa.

The images are obtained by the input/output interface 710 and sent to the memory 750, or to another of the processing elements 720 to 768, via a system bus 730. The processor 720 is arranged to retrieve the sequence of images from sensors 700 and 701 or from memory 750. The processor 720 is also arranged to fetch, decode and execute all steps of the disclosed method. The processor 720 then records the results from the respective operations to memory 750, again using the system bus 730. Apart from memory 750, the output could also be stored more permanently on a storage device 770, via an input/output interface 760. The same output may also be sent, via network interface 764, either to a remote server which may be part of the network 790 or 795, or to the personal computer 780, using the input/output interface 710.
As already mentioned in the above description, such an imaging device for capturing a sequence of images and tracking objects through the captured images will comprise: one of or both sensors 700 and 701, memory 750, a processor 720, an input/output interface 710 and a system bus 730. The sensors 700 and 701 are arranged for capturing the sequence of images in which objects will be tracked. The memory 750 is used for storing the sequence of images, 5 the objects detected within the images, the track data of the tracked objects and the signatures of the tracks. The processor 720 is arranged for receiving, from the sensors 700 and 701 or from the memory 750, the sequence of images, the objects detected within the images, the track data of the tracked objects and the signatures of the tracks. The processor 720 also detects the objects within the images of the image sequences and associates the detected 0 objects with tracks. The input/output interface 710 facilitates the transmitting of the image sequences from the sensors 700 and 701 to the memory 750 and to the processor 720. The input/output interface 710 also facilitates the transmitting of pan-tilt-zoom commands from the PTZ controller 703 to the sensors 700 and 701. The system bus 730 transmits data between the 25 input/output interface 710 and the processor 720. In one implementation, a single sensor 700 is utilised and the source and target frames are selected from a sequence of video frames captured by the single sensor 700. Fig. 8 is a schematic block diagram representation of an alternative imaging system 800 on which an embodiment of the present disclosure may be practised. The imaging system 30 800 includes a memory 850 coupled to a processor 820 by a system bus 830. The imaging system receives video frames on the system bus 830 from one or more lens systems 801, 802 via an object detection module 810. Each lens system 801, 802 is adapted to capture a 2355577 _DOC IRN: 905836 905836_specification - 42 sequence of video frames and to transmit the captured video frames to the imaging system 800. In one implementation, each lens system 801, 802 is a video camera, a network camera, or a camera on a mobile telephone handset. In another implementation, each lens system 801, 802 includes a lens arrangement and a camera module coupled to the lens arrangement to store video frames captured by the lens arrangement. In one implementation, indicated by the dotted line 899, the imaging system 800, at least one lens system 801, 802, and the object detection module 810 form an integrated camera system 899. In another implementation, the imaging system 800 is remotely located from each lens system 801, 802 and the object detection module. D Each lens system 801, 802 captures a sequence of video frames and passes the frames to the object detection module 810. The object detection module 810 processes each frame to detect any objects within the frame. For each detected object, the object detection module determines an associated signature. In one embodiment, each signature includes luminance characteristics for the corresponding detected object. Depending on the application, the object 5 detection module 810 may also determine tracking data for each detected object. The object detection module 810 passes the frames and information relating to the detected objects and their associated signatures to the imaging system 800. The memory 850 can be used to store the received video frames and for storing one or more brightness transfer functions. 
Further, the memory 850 stores a computer program for o processing received video frames in accordance with the present disclosure. The computer program is executable on the processor 820 to process received image frames to determine a correspondence between a source object in a first frame and a target object in a second frame, wherein the source object is associated with a source signature and the target object is associated with a target signature. In one embodiment, the program includes code for 25 determining a brightness transfer function between a first spatial region of the first frame corresponding to the source object and a second spatial region of the second frame corresponding to the target object. The program also includes code for modifying at least one of the source signature and the target signature in accordance with the brightness transfer function to produce a modified source signature and a modified target signature. The program 30 further includes code for computing a similarity measure between the modified source signature and the modified target signature, and code for determining a correspondence between the source object and the target object, based on the similarity measure. 2355577 _DOC IRN: 905836 905836_specification -43 Conclusion LBBTFs can be applied to the scenario in which a person moves between a sunlit area and a shaded area, such as the example presented in Fig. 1. However, the applicability of LBBTFs is not limited to such a scenario. LBBTFs can also be applied exclusively in indoor scenes. For example, lighting provided in an office environment consisting of a grid of overhead lights can result in the appearance of an object directly beneath a light source being different from the appearance of an object located between light sources. Similarly, LBBTFs can be applied exclusively to outdoor scenes. Global Compensation Method (GC Method) brightness transfer functions apply an expected uniform change in illumination caused by moving from the field of view of one camera to the field of view of another camera. Uniformly applying a change in illumination is not always ideal, because lighting within a scene is often localised. For example, indoor scenes feature well-spaced overhead lights so that a person is well-lit when standing directly 5 underneath a light, but the same person will appear darker when standing between the lights. A single uniform change in illumination cannot model these changes. An advantage provided by using LBBTFs in accordance with the present disclosure is that the lighting of the spatial region associated with each detection is considered independently. This results in more correct changes in lighting being modelled and applied. o Subspace Method approaches determine a subspace of all possible BTFs from a training set. The subspace encapsulates all observed changes in signatures due to the change in apparent pose, lighting and camera settings caused by the viewing in a different camera. Then, to determine if an object viewed in a first camera and an object viewed in a second camera correspond to the same real-world object, the BTF between the observed views is 25 computed. The probability that the two viewed objects correspond to the same real-world object can then be derived from the probability that the computed BTF between the objects lies in the subspace of all BTFs learned in the training phase. 
Because the location of the object is not considered when computing the BTF, the Subspace Method may treat a BTF that is valid in one part of an image as also being valid in another part of the image. This assumption may not always be correct. The LBBTF approach considers whether the BTF is valid for the specific spatial regions being compared, and does not maintain a global subspace of BTFs or even consider the BTFs of nearby regions, which may or may not be related. Thus, the LBBTF method provides more accurate results when applying BTFs than Subspace Method approaches do.

Exposure Compensation Method approaches adjust the exposure time of a frame, thus globally modifying the brightness of the frame. When multiple objects are visible in the same frame and are viewed under different lighting conditions, it is not possible to adjust the exposure time of the frame to be optimal for both objects. Because LBBTFs provide the relative change in brightness between a pair of locations, an optimal absolute exposure for each object is not necessary; only the relative difference in exposure is required. As a result, localised changes in lighting can be handled, and multiple objects can be tracked throughout a scene.

INDUSTRIAL APPLICABILITY

The arrangements described are applicable to the computer and data processing industries, and particularly to the imaging and security industries.

The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.

In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.
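As a concluding, non-limiting illustration of the arrangements described above, the following sketch outlines one way in which a location-based brightness transfer function could be applied when deciding whether a source object and a target object correspond. The representation of the LBBTF as a per-bin index mapping, the sum-of-absolute-differences measure and the threshold value are assumptions made only for the purpose of this illustration and are not the only forms contemplated.

```python
import numpy as np

def correspondence_with_lbbtf(source_sig, target_sig, lbbtf, threshold=0.25):
    # source_sig, target_sig: normalised luminance histograms (signatures)
    # for the source and target spatial regions.
    # lbbtf: assumed per-bin mapping from source histogram bins to target
    # histogram bins, learned for this particular pair of locations.
    # threshold: assumed dissimilarity value below which a match is declared.

    # Modify the source signature so that it is expressed under the lighting
    # conditions of the target spatial region.
    modified_source = np.zeros_like(target_sig, dtype=float)
    for src_bin, weight in enumerate(source_sig):
        modified_source[lbbtf[src_bin]] += weight

    # A smaller value of this measure indicates a closer match.
    dissimilarity = np.abs(modified_source - target_sig).sum()
    return dissimilarity < threshold
```

Because the mapping is specific to the pair of spatial regions being compared, a person detected in a sunlit region and the same person detected in a shaded region can still be matched in such a sketch, even though their unmodified histograms differ markedly.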

Claims (20)

1. A method of determining a correspondence between a source object in a first video frame and a target object in a second video frame, said source object being associated with a source signature incorporating at least luminance characteristics of the source object and the target object being associated with a target signature incorporating at least luminance characteristics of the target object, the method comprising the steps of:
(a) determining a brightness transfer function between a first spatial region of said first frame corresponding to the source object, said first spatial region being less than said first frame, and a second spatial region of said second frame corresponding to the target object, said second spatial region being less than said second frame;
(b) modifying at least one of the source signature and the target signature in accordance with the brightness transfer function to produce a modified source signature and a modified target signature;
(c) computing a similarity measure between the modified source signature and the modified target signature; and
(d) determining a correspondence between the source object and the target object, based on said similarity measure.
2. The method according to claim 1, wherein said first video frame is taken from a first image sequence recorded by a first camera and said second video frame is taken from a second image sequence recorded by a second camera.
3. The method according to claim 1, wherein said first video frame and said second video frame belong to a single video frame sequence recorded by a single camera.
4. The method according to claim 1, wherein the first spatial region corresponding to the source object is a bounding box of the source object and the second spatial region corresponding to the target object is a bounding box of the target object.
5. The method according to claim 1, wherein said source signature includes a histogram of luminance characteristics of the source object, and further wherein said target signature includes a histogram of luminance characteristics of the target object.
6. The method according to claim 1, wherein the modifying is performed within the step of computing a similarity measure.
7. The method according to claim 1, wherein the brightness transfer function applies an independent change in brightness to each portion of the source signature and each portion of the target signature.
8. The method according to claim 1, wherein said brightness transfer function is determined using a set of stored histogram pairs.
9. The method according to claim 1, wherein the brightness transfer function applies an independent change in brightness to each portion of the source signature and each portion of the target signature, and
further wherein each independent change in brightness is dependent on a portion of the source signature and the target signature to which the brightness transfer function is being applied.
10. The method according to claim 1, wherein the brightness transfer function incorporates a temporal component.
11. The method according to claim 1, wherein said correspondence between said source object and said target object indicates a match if said similarity measure is less than a predefined threshold.
12. A camera system for determining a correspondence between a source object in a first video frame and a target object in a second video frame, said source object being associated with a source signature incorporating at least luminance characteristics of the source object and the target object being associated with a target signature incorporating at least luminance characteristics of the target object, said camera system comprising:
a first lens system;
a first camera module coupled to said first lens system to store said first video frame;
a second lens system;
a second camera module coupled to said second lens system to store said second video frame;
a storage device for storing a computer program; and
a processor for executing the program, said program comprising:
code for determining a brightness transfer function between a first spatial region of said first frame corresponding to the source object, said first spatial region being less than said first frame, and a second spatial region of said second frame corresponding to the target object, said second spatial region being less than said second frame;
code for modifying at least one of the source signature and the target signature in accordance with the brightness transfer function to produce a modified source signature and a modified target signature;
code for computing a similarity measure between the modified source signature and the modified target signature; and
code for determining a correspondence between the source object and the target object, based on said similarity measure.
13. A method of determining a brightness transfer function relating a source spatial region in a first video frame associated with a source signature incorporating at least luminance characteristics of a source object, and a target spatial region in a second video frame associated with a target signature incorporating at least luminance characteristics of a target object, the method comprising the steps of:
(a) in a first instance of relating the source spatial region and the target spatial region:
(i) computing a similarity measure between the source signature and the target signature and, if said similarity measure is above a similarity threshold:
determining the brightness transfer function relating the source spatial region and the target spatial region as a residual brightness transfer function that minimises a difference between the source signature and the target signature;
(b) in later instances of relating the source spatial region and the target spatial region:
(i) retrieving a previously determined brightness transfer function appropriate for relating the source spatial region and the target spatial region,
(ii) applying the previously determined brightness transfer function to at least one of the source signature and the target signature to produce a modified source signature and a modified target signature, and
(iii) if a similarity measure computed from the modified source signature and the modified target signature is above a similarity threshold:
determining a residual brightness transfer function that minimises the difference between the modified source signature and the modified target signature, and
updating the previously determined brightness transfer function to incorporate the residual brightness transfer function.
14. A method of determining a correspondence between a source object in a first video frame and a target object in a second video frame, said source object being associated with a source signature incorporating at least luminance characteristics of the source object and the target object being associated with a target signature incorporating at least luminance characteristics of the target object, comprising the steps of:
(a) determining a brightness transfer function between a first spatial region of said first frame corresponding to the source object, said first spatial region being less than or equal to said first frame, and a second spatial region of said second frame corresponding to the target object, said second spatial region being less than or equal to said second frame;
(b) modifying at least one of the first spatial region and the second spatial region in accordance with the brightness transfer function to produce a first modified spatial region and a second modified spatial region;
(c) deriving a modified source signature and a modified target signature from said first modified spatial region and said second modified spatial region;
(d) computing a similarity measure between the modified source signature and the modified target signature; and
(e) determining a correspondence between the source object and the target object, based on said similarity measure, wherein said correspondence indicates a match between said source object and said target object when said correspondence is less than a predefined threshold.
15. An imaging system for determining a correspondence between a source object in a first video frame and a target object in a second video frame, said source object being associated with a source signature incorporating at least luminance characteristics of the source object and the target object being associated with a target signature incorporating at least luminance characteristics of the target object, said imaging system comprising:
a storage device for storing a computer program; and
a processor for executing the program, said program comprising:
code for determining a brightness transfer function between a first spatial region of said first frame corresponding to the source object, said first spatial region being less than said first frame, and a second spatial region of said second frame corresponding to the target object, said second spatial region being less than said second frame;
code for modifying at least one of the source signature and the target signature in accordance with the brightness transfer function to produce a modified source signature and a modified target signature;
code for computing a similarity measure between the modified source signature and the modified target signature; and
code for determining a correspondence between the source object and the target object, based on said similarity measure.
16. The imaging system according to claim 15, wherein said first video frame and said second video frame are derived from at least one lens system.
17. A method of determining a correspondence between a source object in a first video frame and a target object in a second video frame, said first object being associated with a source signature incorporating at least luminance characteristics of the source object and the target object being associated with a target signature incorporating at least luminance characteristics of the target object, the method being substantially as described herein with reference to the accompanying drawings.
18. A camera system for determining a correspondence between a source object in a first video frame and a target object in a second video frame, said first object being associated with a source signature incorporating at least luminance characteristics of the source object and the target object being associated with a target signature incorporating at least luminance characteristics of the target object, said camera system being substantially as described herein with reference to the accompanying drawings.
19. An imaging system for determining a correspondence between a source object in a first video frame and a target object in a second video frame, said source object being associated with a source signature incorporating at least luminance characteristics of the source object and the target object being associated with a target signature incorporating at least luminance characteristics of the target object, said imaging system being substantially as described herein with reference to the accompanying drawings.
20. A method of determining a brightness transfer function relating a source spatial region in a first video frame being associated with a source signature incorporating at least luminance characteristics of the source object, and a target spatial region in a second video frame being associated with a target signature incorporating at least luminance characteristics of the target object, the method being substantially as described herein with reference to the accompanying drawings.

DATED this Twenty-seventh Day of October, 2009
Canon Kabushiki Kaisha
Patent Attorneys for the Applicant
SPRUSON & FERGUSON
AU2009230796A 2009-10-28 2009-10-28 Location-based brightness transfer function Abandoned AU2009230796A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2009230796A AU2009230796A1 (en) 2009-10-28 2009-10-28 Location-based brightness transfer function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2009230796A AU2009230796A1 (en) 2009-10-28 2009-10-28 Location-based brightness transfer function

Publications (1)

Publication Number Publication Date
AU2009230796A1 true AU2009230796A1 (en) 2011-05-12

Family

ID=43971592

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2009230796A Abandoned AU2009230796A1 (en) 2009-10-28 2009-10-28 Location-based brightness transfer function

Country Status (1)

Country Link
AU (1) AU2009230796A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015031113A1 (en) * 2013-08-27 2015-03-05 Qualcomm Incorporated Systems, devices and methods for tracking objects on a display
CN105493147A (en) * 2013-08-27 2016-04-13 高通股份有限公司 Systems, devices and methods for tracking objects on a display
US9454827B2 (en) 2013-08-27 2016-09-27 Qualcomm Incorporated Systems, devices and methods for tracking objects on a display
EP2975576A1 (en) * 2014-07-15 2016-01-20 Thomson Licensing Method of determination of stable zones within an image stream, and portable device for implementing the method
WO2016008759A1 (en) * 2014-07-15 2016-01-21 Thomson Licensing Method of determination of stable zones within an image stream, and portable device for implementing the method.
CN109964278A (en) * 2017-03-30 2019-07-02 艾腾怀斯股份有限公司 System and method for correcting errors in a first classifier by evaluating classifier outputs in parallel
CN109964278B (en) * 2017-03-30 2023-06-27 艾腾怀斯股份有限公司 Correct errors in the first classifier by evaluating classifier outputs in parallel

Similar Documents

Publication Publication Date Title
AU2009243528B2 (en) Location-based signature selection for multi-camera object tracking
AU2008264232B2 (en) Multi-modal object signature
US10282617B2 (en) Methods and systems for performing sleeping object detection and tracking in video analytics
US8089515B2 (en) Method and device for controlling auto focusing of a video camera by tracking a region-of-interest
AU2010241260B2 (en) Foreground background separation in a scene with unstable textures
CN109035304B (en) Target tracking method, medium, computing device and apparatus
AU2013242830B2 (en) A method for improving tracking in crowded situations using rival compensation
AU2009251048B2 (en) Background image and mask estimation for accurate shift-estimation for video object detection in presence of misalignment
AU2010238543B2 (en) Method for video object detection
US10096117B2 (en) Video segmentation method
US20130063556A1 (en) Extracting depth information from video from a single camera
CN110111364B (en) Motion detection method and device, electronic equipment and storage medium
WO2009105812A1 (en) Spatio-activity based mode matching field of the invention
JP2020149641A (en) Object tracking device and object tracking method
US9609233B2 (en) Method and system for luminance adjustment of images in an image sequence
AU2009230796A1 (en) Location-based brightness transfer function
JP2002312795A (en) Image processing apparatus, image processing method, recording medium, and program
JP2002312787A (en) Image processing apparatus, image processing method, recording medium, and program
AU2008261195B2 (en) Video object fragmentation detection and management
JP2002312792A (en) Image processing apparatus, image processing method, recording medium, and program
Guthier et al. Histogram-based image registration for real-time high dynamic range videos
AU2017265110A1 (en) Method for segmenting video using background model learned with pixelwise adaptive learning rate
CN116311351A (en) Method and system for indoor human target recognition and monocular distance measurement
AU2008261196A1 (en) Backdating object splitting
JP2002312786A (en) Image processing apparatus, image processing method, recording medium, and program

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application