WO2014038924A2 - A method for producing a background model - Google Patents

A method for producing a background model

Info

Publication number
WO2014038924A2
WO2014038924A2 (PCT/MY2013/000154)
Authority
WO
WIPO (PCT)
Prior art keywords
background
model
pixel
scene
intensity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/MY2013/000154
Other languages
French (fr)
Other versions
WO2014038924A3 (en)
Inventor
Binti Kadim Zulaikha
Binti Samudin Norshuhada
Hon Hock Woon Dr.
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mimos Bhd
Original Assignee
Mimos Bhd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mimos Bhd filed Critical Mimos Bhd
Publication of WO2014038924A2 publication Critical patent/WO2014038924A2/en
Publication of WO2014038924A3 publication Critical patent/WO2014038924A3/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Description

A METHOD FOR PRODUCING A BACKGROUND MODEL
FIELD OF THE INVENTION
The present invention relates to a method for producing a background model based on the images acquired from a non-static camera.
BACKGROUND OF THE INVENTION
In object-based video compression, as well as in other types of object-oriented video processing, the input video is separated into two streams: one carrying the moving foreground objects and one carrying the background.
Conventional video analytics is used to detect suspicious events using a static camera. However, such detection requires the background of the scene to be in a static condition, as illustrated in Figure 1. This is because the background pixels are stationary (the variation of the intensity distribution for each background pixel is small), so any significant change in intensity corresponds to a foreground object. A common way of representing the background scene is to use a single scalar value for each pixel, where the scalar value represents a moving average of the pixel intensity over time. Despite the simplicity of this model, it cannot correctly represent a dynamic background scene. A dynamic background scene refers to a background that has high variability of intensity over time, for example waving trees and ocean waves. Using a single scalar value for each pixel as the model, this variability of the background scene cannot be captured. Thus, a dynamic background will be falsely identified as a foreground object.
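For illustration, a minimal sketch of the conventional pixel-wise scalar model described above, assuming grayscale frames held as NumPy float arrays; the learning rate and threshold are illustrative values, not taken from this document:

```python
import numpy as np

def update_scalar_background(background, frame, alpha=0.05):
    """Per-pixel moving average of intensity over time (the single
    scalar value per pixel described above). `alpha` is an assumed
    learning rate controlling how fast the average adapts."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=30.0):
    """Flag pixels whose intensity deviates strongly from the scalar
    model. On dynamic backgrounds (waving trees, ocean waves) this
    test fires falsely, which is the weakness noted above."""
    return np.abs(frame.astype(float) - background) > threshold
```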
There are several solutions for modeling a dynamic background scene. Among others, early initiatives model the background using a normal distribution, multiple models, a mixture of Gaussians, or kernel density estimation. These models can robustly represent a dynamic background scene. Unfortunately, they are complex and are used to represent every pixel in the image, which leads to a high computational cost to maintain and update the model.
There is, thus, a need for a method for producing a background model from images containing a dynamic background. The present invention presents a method for modeling the dynamic background that improves the performance of the modeling approach using an adaptive region-based background model. Apart from the dynamic background introduced by high variability of the intensity distribution in certain background regions, this invention also provides a solution for handling a dynamic background caused by changes of the camera view when a non-static camera is used. The present invention provides a considerable reduction of computational resources, with greater efficiency and economy during operation.
SUMMARY OF THE INVENTION
The present invention provides a method for producing a background model from images acquired from a camera comprising acquiring one or more images from the camera; providing a model for each current background scene according to the images acquired; storing the model of each background scene as a layer in a background model; determining a neighboring layer for each background scene based on the images acquired during the camera movement; storing linked indexes of each neighboring layer for each background scene; and forming a final background model based on the linked indexes of neighboring layers.
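As a non-limiting illustration of the claimed steps, the following skeleton sketches the flow in Python; the camera interface (`camera.views()`, `camera.adjacent(i, j)`) and the per-scene modeling function are hypothetical stand-ins, not the patent's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class SceneLayer:
    """One background scene stored as a layer, together with the
    linked indexes of its neighboring layers."""
    model: object
    neighbors: list = field(default_factory=list)

def build_background_model(camera, model_current_scene):
    """Skeleton of the claimed method: model each current background
    scene, store it as a layer, then link layers that the camera
    movement shows to be adjacent; the layers plus their linked
    indexes form the final background model."""
    layers = [SceneLayer(model=model_current_scene(view.frames))
              for view in camera.views()]          # acquire and model each scene
    for i, layer in enumerate(layers):             # determine neighboring layers
        layer.neighbors = [j for j in range(len(layers))
                           if j != i and camera.adjacent(i, j)]
    return layers                                  # final background model
```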
In one embodiment of the present invention, the step of providing a model for each current background scene according to the images acquired further comprises acquiring a time series intensity value for each pixel; updating a frequency for each intensity value for each pixel; computing an average intensity and an intensity variance for each pixel; determining whether the intensity variance for each pixel is more than a predetermined threshold value; if the intensity variance of a pixel is more than the predetermined threshold value, grouping all connected dynamic pixels as a same region, merging a density distribution for each pixel to a same group, extracting a background feature from the model for a current scene, and storing the background feature corresponding to the current scene as a model; and if the intensity variance of a pixel is less than the predetermined threshold value, storing an average intensity as a model for the pixel, extracting a background feature from the model for a current scene, and storing the background feature corresponding to the current scene as a model.
In yet another embodiment of the present invention, a statistical model representation is used to determine whether an average intensity belongs to the background or the foreground. In yet another embodiment of the present invention, a non-dynamic region consists of image pixels which correspond to static background pixels, and a dynamic region corresponds to dynamic background pixels.
In another embodiment of the present invention, adjacent dynamic background pixels having similar background models are grouped together.
A system for producing a background model for images acquired from a camera comprises a PTZ camera set to monitor a wide area and capture images; an image processing unit to analyze the images and extract information from them; a display unit to display the captured images and the analyzed output images; and a post-event detection unit to trigger an alarm for post action. In one embodiment of the present invention, the camera view of the PTZ camera is a combination of multiple camera views.
In yet another embodiment of the present invention, the camera view of the PTZ camera at its current position is the current camera view, and the surrounding scene is the overall view.
In one embodiment of the present invention, the camera is a non-static camera.
In yet another embodiment of the present invention, the background model is a dynamic background model.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
Figure 1 illustrates the background of a scene in a static condition for a static camera and a moving camera in conventional video analytics. Figure 2 illustrates a method for effectively modeling the dynamic scene within a camera view (the adaptive region-based model) and a method for modeling the dynamic scene due to camera movement (modeling multiple non-overlapping background scenes) in accordance with an embodiment of the present invention.
Figure 3 illustrates a flow chart for a method for producing a dynamic background model based on the images acquired from a non-static camera in accordance with an embodiment of the present invention. Figure 4 illustrates a flow chart for the step of modeling the current background scene in accordance with an embodiment of the present invention.
Figure 5 illustrates modeling a background scene using the adaptive region-based model in accordance with an embodiment of the present invention.
Figure 6 illustrates a flow chart for the step of modeling multiple non-overlapping background scenes in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
Figure 2 illustrates a method for effectively modeling the dynamic scene within a camera view (the adaptive region-based model) and a method for modeling the dynamic scene due to camera movement (modeling multiple non-overlapping background scenes) of the present invention.
The present invention relates to a method for background modeling that takes into consideration that the camera is moving. The background model is the best representation of the background scenes, capturing the changes that happen in the background over time. The model is then used to extract moving objects in the scene by comparing the image captured by the camera at any time with the corresponding background model. The method of the present invention can be used in a surveillance system as part of the processing unit to represent the background scenes. The surveillance system consists of a pan-tilt-zoom (PTZ) camera set to monitor a wide area and capture images; an image processing unit to analyze the images and extract information from them, such as the presence of moving objects or any events (e.g. intrusion, loitering, etc.) happening in the scene; a display unit to display the captured images and the analyzed output images; and a post-event detection unit to trigger an alarm for post action when an event is detected by the image processing unit. The alarm can take any form, such as a sound alarm or a message sent to the user's mobile phone or email.
On most occasions, the PTZ camera is set to monitor a wide area, which requires a combination of multiple single camera views. For example, it may have to cover areas with a pan angle varying from 10 to 100 degrees. Thus, in the present invention, the area needs to be covered by multiple individual camera views. The camera view of the PTZ camera at its current position is denoted as the current camera view, and the surrounding scene as the overall view that is set to be covered by the camera. These notations are illustrated in Figure 2.
In some cases, the areas to be covered by the PTZ camera might not overlap with each other. For example, the PTZ camera may be used to monitor two different pre-set areas which do not overlap. Thus, in the present invention, a method to model the background for this scenario is provided.
At any current camera view, there is a possibility that the background scene is dynamic. Generally, the background has to be static, with only the foreground moving in the image. However, there are background areas that change dynamically, such as areas corresponding to a waterfall, moving leaves, etc. A dynamic scene refers to a background scene that has high variability of intensity over time. In this case, this part of the background area has to be modeled effectively to capture this variability, so that it is not mistakenly assumed to be a moving object.
Thus, the present invention provides a method to represent dynamically changing background scenes using an adaptive region-based model, and a method to model multiple scenes without the scenes having to overlap with each other. The present invention relates to a method for modeling a dynamic scene using region-based adaptive statistical learning to model the dynamic background within one camera view, and scene-based modeling to model multiple non-overlapping regions of the background image scene, as illustrated in Figure 3. To form a model for the dynamic scene, all the images in sequence are acquired from a non-static camera. These images then undergo a method for modeling the current background scene. In the event that some of these images are not covered by the step of modeling the current background scene, the current view of the camera (known as the non-overlapping view) is changed until all the images are covered by the camera. A method for modeling multiple non-overlapping background scenes is performed until an overall background model is formed.
Figure 4 illustrates a flow chart for the step of modeling the current background scene in accordance with an embodiment of the present invention. A time series of intensity values for each pixel is acquired from the images captured by the non-static camera. During this period, the frequency of each intensity value for each pixel of each image is updated. An average intensity and an intensity variance for each pixel of each image are computed. The intensity variance of each pixel is checked to determine whether this value is more than a predetermined threshold value or not. If the variance for any pixel is less than the threshold value, it is identified as a static pixel. This pixel is modeled as a single scalar value, which is the average intensity of the pixel over a period of time.
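A minimal sketch of this static/dynamic split, assuming a stack of grayscale frames held as a NumPy array; the variance threshold is an assumed value standing in for the "predetermined threshold" above:

```python
import numpy as np

def classify_pixels(frames, var_threshold=100.0):
    """frames: float array of shape (T, H, W), a time series of
    intensity values per pixel. Returns the per-pixel average
    intensity (the scalar model for static pixels) and a boolean
    mask that is True where a pixel is dynamic."""
    mean = frames.mean(axis=0)        # average intensity per pixel
    var = frames.var(axis=0)          # intensity variance per pixel
    dynamic = var > var_threshold     # dynamic where variance exceeds threshold
    return mean, dynamic
```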
However, if the intensity variance of a pixel is more than the predetermined threshold value, the pixel is a dynamic pixel. For each dynamic pixel, the neighbouring pixels are examined for similar dynamic properties. All adjacent dynamic pixels that have similar dynamic properties are grouped together, and the region is represented using a single complex model. To determine whether two pixels have similar dynamic properties, their density distributions are compared. If the density distributions overlap under certain conditions, then the two pixels belong to the same dynamic group, and the density distributions of the pixels in a group are merged. After the model has been constructed, the significant feature points are extracted from the background image. The feature points are stored as part of the background model.
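One possible reading of this grouping step is sketched below: connected dynamic pixels are grouped with `scipy.ndimage.label`, and the per-pixel intensity histograms (density distributions) of each group are merged into a single region model. The patent does not specify its overlap criterion, so `histogram_overlap` is only one plausible similarity measure, and plain connectivity stands in for the full similar-properties test:

```python
import numpy as np
from scipy import ndimage

def histogram_overlap(hist_a, hist_b):
    """Intersection of two normalized intensity histograms; an
    assumed stand-in for the unspecified overlap condition."""
    return np.minimum(hist_a, hist_b).sum()

def group_dynamic_pixels(frames, dynamic_mask):
    """Label connected dynamic pixels as regions, then merge the
    density distribution of every pixel in a region into one model."""
    labels, n_regions = ndimage.label(dynamic_mask)
    region_models = {}
    for r in range(1, n_regions + 1):
        ys, xs = np.nonzero(labels == r)
        merged = np.zeros(256)
        for y, x in zip(ys, xs):
            hist, _ = np.histogram(frames[:, y, x], bins=256, range=(0, 256))
            merged += hist
        region_models[r] = merged / merged.sum()   # merged density per region
    return labels, region_models
```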
Figure 5 illustrates modeling a background scene using the adaptive region-based model in accordance with an embodiment of the present invention. Figure 5 illustrates the output of the process flow shown in Figure 4. The output is the image pixels segmented into two different kinds of regions: dynamic regions and non-dynamic regions. The non-dynamic regions consist of image pixels which correspond to static background pixels. In this case, each of the pixels in such a region is represented using a scalar value indicating the average intensity of the pixel over a period of time. The dynamic regions correspond to the dynamic background pixels. Adjacent dynamic background pixels having similar background models are grouped together. Since pixels in dynamic background regions exhibit high variability in intensity, each region is represented using a statistical model, for instance a Gaussian mixture model distribution, as illustrated in Figure 5a. The graph shows two components of intensity distributions, with average intensities μ1 and μ2 respectively. This means that if, at any time, a pixel in the current frame has a value close to either μ1 or μ2, then the pixel belongs to the background. On the contrary, if the value of the pixel in the current frame lies outside these two values, then the pixel is concluded to be a foreground pixel.
Figure 6 illustrates a flow chart for the step of modeling multiple non-overlapping background scenes in accordance with an embodiment of the present invention. First, each of the non-overlapping background scenes is modeled according to the method described in Figure 4. Each background scene is stored as a layer. Based on the movement of the camera, the links between one layer and the adjacent layers are constructed. Each link defines and determines which background scene is adjacent to another background scene. The linked indexes of each neighboring layer for each background scene are stored, and a final background model based on the linked indexes of neighboring layers is formed.
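A minimal sketch of the two-component test suggested by Figure 5a; the component means, spreads, and the match rule (within k standard deviations of a component mean) are illustrative assumptions, since the text only states that a pixel near μ1 or μ2 is background:

```python
# Illustrative two-component mixture for one dynamic region; the
# numbers are made up for the example, not taken from the patent.
components = [
    {"mu": 90.0,  "sigma": 8.0},    # first intensity mode (mu1)
    {"mu": 170.0, "sigma": 12.0},   # second intensity mode (mu2)
]

def is_background(intensity, components, k=2.5):
    """A pixel is background if its intensity lies within k standard
    deviations of either component mean; otherwise it is foreground.
    The value of k is an assumption."""
    return any(abs(intensity - c["mu"]) <= k * c["sigma"] for c in components)

print(is_background(95.0, components))   # True: close to mu1
print(is_background(130.0, components))  # False: between the modes -> foreground
```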
One of the advantages of the method for modeling a dynamic scene using region-based adaptive statistical learning to model the dynamic background within one camera view, and scene-based modeling to model multiple non-overlapping regions of the background image scene, is that it provides a better background representation compared to background modeling using pixel-wise scalar values. Another advantage of the method of the present invention is its lower computational cost compared to background modeling using pixel-wise statistical or kernel density models. Furthermore, the method of the present invention provides more sensitive foreground detection compared to the privacy mask concept (masking out dynamic regions from being modeled).
The foregoing embodiment and advantages are merely exemplary and are not to be construed as limiting the present invention. The description of the embodiments of the present invention is intended to be illustrative and not to limit the scope of the claims and many alternatives, modifications and variations will be apparent to those skilled in the art.

Claims

1. A method for producing a background model from images acquired from a camera, comprising: acquiring one or more images from the camera;
providing a model for each current background scene according to the images acquired;
storing the model of each background scene as a layer in a background model; determining a neighboring layer for each background scene based on the images acquired during the camera movement;
storing linked indexes of each neighboring layer for each background scene; and forming a final background model based on the linked indexes of neighboring layers.
2. The method as claimed in Claim 1 wherein providing a model for each current background scene according to the images acquired further comprising acquiring a time series intensity value for each pixel;
updating a frequency for each intensity value for each pixel;
computing an average intensity and an intensity variance for each pixel; determining whether the intensity variance for each pixel is more than a predetermined threshold value; if the intensity variance of each pixel is more than a predetermined threshold value;
grouping all connected dynamic pixels as a same region;
merging a density distribution for each pixel to a same group;
extracting background feature from the model for a current scene; storing background feature corresponding to the current scene as a model;
wherein a statistical model representation is used to determine whether average intensity belongs to background or foreground.
3. The method as claimed in Claim 1 wherein providing a model for each current background scene according to the images acquired further comprising acquiring a time series intensity value for each pixel;
updating a frequency for each intensity value for each pixel;
computing an average intensity and an intensity variance for each pixel;
determining whether the intensity variance for each pixel is more than a predetermined threshold value;
if the intensity variance of each pixel is less than a predetermined threshold value;
storing an average intensity as a model for the pixel;
extracting background feature from the model for a current scene; and storing background feature corresponding to the current scene as a model;
wherein a statistical model representation is used to determine whether average intensity belongs to background or foreground.
4. The method as claimed in Claim 2 wherein adjacent dynamic pixels having similar background models are grouped together.
5. A system for producing a background model for images acquired from a camera comprising a pan-tilt-zoom (PTZ) camera set to monitor a wide area and capture images;
an image processing unit to analyze the images and extract information from the images;
a display unit to display the images captured and the analyzed output images; and a post event detection unit to trigger an alarm for post action.
6. The system as claimed in Claim 5 wherein the camera view of the pan-tilt-zoom (PTZ) camera is a combination of multiple single camera views.
7. The method as claimed in Claim 1 wherein the background model is a dynamic background model.
PCT/MY2013/000154 2012-09-06 2013-09-05 A method for producing a background model Ceased WO2014038924A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI2012003979 2012-09-06
MYPI2012003979 2012-09-06

Publications (2)

Publication Number Publication Date
WO2014038924A2 true WO2014038924A2 (en) 2014-03-13
WO2014038924A3 WO2014038924A3 (en) 2014-06-26

Family

ID=50231471

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2013/000154 Ceased WO2014038924A2 (en) 2012-09-06 2013-09-05 A method for producing a background model

Country Status (1)

Country Link
WO (1) WO2014038924A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150296170A1 (en) * 2014-04-11 2015-10-15 International Business Machines Corporation System and method for fine-grained control of privacy from image and video recording devices
CN113508395A (en) * 2019-04-24 2021-10-15 赛峰电子与防务公司 Method for detecting an object

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894530A (en) * 2014-12-11 2016-08-24 深圳市阿图姆科技有限公司 Detection and tracking solution for moving targets in video

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008008045A1 (en) * 2006-07-11 2008-01-17 Agency For Science, Technology And Research Method and system for context-controlled background updating

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150296170A1 (en) * 2014-04-11 2015-10-15 International Business Machines Corporation System and method for fine-grained control of privacy from image and video recording devices
US9571785B2 (en) * 2014-04-11 2017-02-14 International Business Machines Corporation System and method for fine-grained control of privacy from image and video recording devices
CN113508395A (en) * 2019-04-24 2021-10-15 赛峰电子与防务公司 Method for detecting an object
CN113508395B (en) * 2019-04-24 2022-06-21 赛峰电子与防务公司 Method and device for detecting objects in an image composed of pixels

Also Published As

Publication number Publication date
WO2014038924A3 (en) 2014-06-26

Similar Documents

Publication Publication Date Title
US9396400B1 (en) Computer-vision based security system using a depth camera
US10713798B2 (en) Low-complexity motion detection based on image edges
KR101223424B1 (en) Video motion detection
JP2023526207A (en) Maintaining a constant size of the target object in the frame
US10269123B2 (en) Methods and apparatus for video background subtraction
US20180048894A1 (en) Methods and systems of performing lighting condition change compensation in video analytics
US10657783B2 (en) Video surveillance method based on object detection and system thereof
US20200184227A1 (en) Improved generation of alert events based on a detection of objects from camera images
US20180144476A1 (en) Cascaded-time-scale background modeling
US10223590B2 (en) Methods and systems of performing adaptive morphology operations in video analytics
US10140718B2 (en) Methods and systems of maintaining object trackers in video analytics
US10360456B2 (en) Methods and systems of maintaining lost object trackers in video analytics
WO2019089441A1 (en) Exclusion zone in video analytics
WO2018031096A1 (en) Methods and systems of performing blob filtering in video analytics
WO2018032270A1 (en) Low complexity tamper detection in video analytics
CN115909215B (en) An edge intrusion early warning method and system based on target detection
CN109564686A (en) Method and system for updating motion models for object trackers in video analytics
CN113920585A (en) Behavior recognition method and device, equipment and storage medium
KR102159954B1 (en) Method for establishing region of interest in intelligent video analytics and video analysis apparatus using the same
Khan et al. Violence detection from industrial surveillance videos using deep learning
Desurmont et al. Image analysis architectures and techniques for intelligent surveillance systems
WO2014038924A2 (en) A method for producing a background model
WO2017204897A1 (en) Methods and systems of determining costs for object tracking in video analytics
CN111225178A (en) Video monitoring method and system based on object detection
Sikandar et al. A review on human motion detection techniques for ATM-CCTV surveillance system

Legal Events

Date Code Title Description
122 Ep: pct application non-entry in european phase

Ref document number: 13834375

Country of ref document: EP

Kind code of ref document: A2