US20220189266A1 - System and method for real-time multi-person threat tracking and re-identification - Google Patents
- Publication number
- US20220189266A1 (U.S. application Ser. No. 17/548,494)
- Authority
- US
- United States
- Prior art keywords
- person
- interest
- image
- security
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19608—Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19613—Recognition of a predetermined image pattern or behaviour pattern indicating theft or intrusion
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19678—User interface
- G08B13/19682—Graphic User Interface [GUI] presenting system data to the user, e.g. information on a screen helping a user interacting with an alarm system
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19678—User interface
- G08B13/19684—Portable terminal, e.g. mobile phone, used for viewing video remotely
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19678—User interface
- G08B13/19691—Signalling events for better perception by user, e.g. indicating alarms by making display brighter, adding text, creating a sound
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- H04N5/23219—
-
- H04N5/23299—
Definitions
- the camera detection system further comprises CCTV cameras, and the person of interest may be identified manually by a user or automatically by computer software or software algorithms.
- the software algorithm is executed only if there is a notification event for which the person of interest alert is triggered.
- the notification event is selected from a list consisting of detection of a weapon, pulling out a weapon, high velocity movements associated with fighting or escaping, abandonment of parcels, participation in unusual crowd activity such as threatening or fighting, throwing objects, proximity to sensitive areas such as restricted access doors, entering restricted areas, and similar.
- the notification module includes sending email, text message (SMS), instant message, voice call, security center user interface and mobile application.
- a computer-implemented method for using CCTV cameras simultaneously to find a person of interest in real time comprises the steps of receiving a video dataset from a camera detection system, analyzing image frames of the video dataset by a computer processor, identifying a person of interest in the video dataset image frames, sending a notification to a security team, receiving a confirmation from the security team to track the person of interest in video feed scenes, and enabling the system to continuously track the person of interest in the video feed scenes.
- the step of identifying a person of interest is conducted manually by a user or automatically through supplemental computer software.
- Implementations disclosed herein provide systems, methods and apparatus for generating or augmenting training data sets for machine learning training.
- the functions described herein may be stored as one or more instructions on a processor-readable or computer-readable medium.
- the term “computer-readable medium” refers to any available medium that can be accessed by a computer or processor.
- a medium may comprise RAM, ROM, EEPROM, flash memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer.
- a computer-readable medium may be tangible and non-transitory.
- the term “code” may refer to software, instructions, code or data that is/are executable by a computing device or processor.
- a “module” can be considered as a processor executing computer-readable code.
- a processor as described herein can be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
- a general purpose processor can be a microprocessor, but in the alternative, the processor can be a controller, or microcontroller, combinations of the same, or the like.
- a processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- a processor may also include primarily analog components.
- any of the signal processing algorithms described herein may be implemented in analog circuitry.
- a processor can be a graphics processing unit (GPU).
- the parallel processing capabilities of GPUs can reduce the amount of time for training and using neural networks (and other machine learning models) compared to central processing units (CPUs).
- a processor can be an ASIC including dedicated machine learning circuitry custom-built for one or both of model training and model inference.
- the disclosed or illustrated tasks can be distributed across multiple processors or computing devices of a computer system, including computing devices that are geographically distributed.
- the methods disclosed herein comprise one or more steps or actions for achieving the described method.
- the method steps and/or actions may be interchanged with one another without departing from the scope of the claims.
- the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
- the term “plurality” denotes two or more. For example, a plurality of components indicates two or more components.
- the term “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and the like.
Description
- The application claims priority to and the benefit of U.S. Provisional Patent Application Ser. No. 63/124,108, entitled “SYSTEM AND METHOD FOR REAL-TIME MULTI-PERSON THREAT TRACKING AND RE-IDENTIFICATION”, filed on Dec. 11, 2020, the disclosure of which is incorporated herein by reference in its entirety.
- The embodiments described herein relate to security and surveillance, in particular, technologies related to video recognition threat detection.
- After one or many perpetrators commit an offense, how can security find the person(s) of interest after they run away? As an example, if a perpetrator brandishes a weapon or assaults another person and the perpetrator disappears into a crowd, how can a security officer find them?
- The current solution is for security or the security team to comb an area on foot and/or manually view various closed-circuit television (CCTV) cameras in order to locate the perpetrator. This is a time-consuming and possibly ineffective method when time is of the essence. In addition, human identification of a person of interest across changes in lighting, viewpoint, and appearance, such as removal of a hat, mask, or coat, is error-prone.
- A system and method are disclosed for using all CCTV cameras simultaneously to find any person of interest in real time and alert security to their location. The person of interest may be selected manually by the user or automatically by computer software and algorithms.
- FIG. 1 is a diagram illustrating an embodiment of an exemplary threat detection system.
- FIG. 2 is a diagram illustrating a further embodiment of an exemplary threat detection system.
- FIG. 3 is a diagram illustrating a threat detection system using a screening feature.
- FIG. 4 is a diagram illustrating a tracking management interface of the threat detection system.
- FIG. 5A and FIG. 5B are screenshots illustrating video feeds of screen tracking.
- FIG. 6 is a block diagram illustrating an exemplary process or method for real-time multi-person threat tracking and re-identification.
- In a preferred embodiment, a multi-sensor covert threat detection system is disclosed. This covert threat detection system utilizes software, artificial intelligence, and integrated layers of diverse sensor technologies (i.e., cameras, etc.) to deter, detect, and defend against active threats to health and human safety (i.e., detection of guns, knives, or fights, or potential health and safety non-compliance) before these events occur.
- A software platform for threat detection solutions is envisioned. This software platform may use cameras and/or closed-circuit televisions (CCTVs), or other technologies, to detect perpetrators and concealed weapons such as guns and knives and alert security officers to these perpetrators.
- In a preferred embodiment, security officers or threat detection system users (i.e., the security team) confirm that they want to track a perpetrator or people in a video feed scene. The user selects these person(s) of interest, whereby the system is triggered to begin tracking the person(s) of interest. The system then presents the feeds for the location in which the person(s) of interest are located, in order to allow the security team to track and catch them.
- FIG. 1 is a diagram illustrating an embodiment of an exemplary threat detection system. According to FIG. 1, the threat detection system enables the following:
- Enable security personnel to quickly monitor situations as they unfold
- Provide full frame rate video with sensor outputs (i.e., CCTV) overlaid for context
- Escalate to full incident at the click of a button
- FIG. 2 is a diagram illustrating a further embodiment of an exemplary threat detection system. According to FIG. 2, the threat detection system allows for a multiple-sensor view (i.e., multiple CCTVs) where all cameras of interest can be tracked on a single dashboard screen view. A timeline of threats is also tracked chronologically.
- According to FIG. 2, the threat detection system further enables the following:
- Notify security personnel of emerging threats within their environment
- Augment situational awareness by adding additional sensors to be monitored
- Support identification and re-identification of a threat and tracking through the environment
- FIG. 3 is a diagram illustrating a threat detection system using a screening feature. FIG. 3 shows a user using a screening feature of a threat detection system. The screening feature can be used to detect objects in real time that may not generate alerts, such as missing face masks. Whenever a person is detected, a dashed box (or another shape) is drawn around them.
- The identification box indicates to the user that a suspected person (i.e., a perpetrator) has been identified and that the system is now able to track them. This satisfies a use case of tracking a person of interest through a facility, not necessarily coupled with an associated alert, which is the initial entry point into the tracking feature. In both cases, the system receives an input to start tracking: either a generated alert or a user selection of a person of interest.
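As an illustrative sketch of the two tracking entry points just described (a generated alert, or a user's selection of a person of interest), both can funnel into a single tracking-start input. The names below (`TriggerSource`, `TrackingRequest`, `start_tracking`) are hypothetical and not part of the disclosure:

```python
from dataclasses import dataclass
from enum import Enum, auto

class TriggerSource(Enum):
    """How tracking of a person of interest begins (the two cases above)."""
    ALERT = auto()           # system-generated alert (e.g., weapon detected)
    USER_SELECTION = auto()  # operator clicks the identification box

@dataclass
class TrackingRequest:
    person_id: str   # label assigned to the boxed person
    camera_id: str   # feed in which the person was identified
    source: TriggerSource

def start_tracking(request: TrackingRequest, active_tracks: dict) -> None:
    """Register the person so subsequent frames are matched against them."""
    active_tracks[request.person_id] = request

# Both entry points funnel into the same call:
tracks: dict = {}
start_tracking(TrackingRequest("person-1", "cam-07", TriggerSource.ALERT), tracks)
start_tracking(TrackingRequest("person-2", "cam-03", TriggerSource.USER_SELECTION), tracks)
```

Either way, the downstream tracking logic sees the same request shape, which matches the observation above that both cases are simply an input to start tracking.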
- FIG. 4 is a diagram illustrating a tracking management interface of the threat detection system. Due to resource limitations, security officers and/or users of the threat detection system may not be able to track everyone in a video feed or scene.
- According to FIG. 4, a management interface for a threat detection system can be used to disable tracking of a person (i.e., the person is no longer of interest or has been apprehended, for instance). The management interface can also show a history of alerts for that person along a timeline. The user can click on a detection in the interface to show any evidence collected at that moment (i.e., a weapon detected).
- FIG. 5A and FIG. 5B are screenshots illustrating video feeds of screen tracking. According to FIG. 5A, a person is tagged leaving the scene (i.e., the boxed person on the right) of the security video feed. In FIG. 5B, the same person (i.e., the boxed person on the right) returns to the scene at a later time. The threat detection system tags this boxed person with the same label despite other people being in the video feed and entering the frame before them.
- FIG. 6 is a block diagram illustrating an exemplary process or method for real-time multi-person threat tracking and re-identification. According to FIG. 6, system 600 starts with cameras or CCTV cameras 602 and 604. Cameras 602 and 604 enable image acquisition at 606. A person detection module or algorithm 608 identifies images of people in an image.
- According to FIG. 6, once a person is detected at 608, it is sent to a module for person identification at 610 and/or person re-identification at 612. Person identification 610 will also check with a database store for a person hash store at 614. The information is then sent to API 616 for processing and output. API 616 is used as an endpoint for one or more user interfaces (UI) 618 for display or notification. User interface 618 may include a computer display, a mobile phone, an email, a text message (e.g., SMS), or a voice message.
- According to further disclosure, re-identification will be extended across multiple cameras in a fashion similar to what is shown in assist tracking. This feature can be extended to pull up video feeds as a weapon is shown in multiple cameras and to re-identify people or weapons across multiple camera feeds.
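The FIG. 6 flow (image acquisition at 606, person detection at 608, identification/re-identification at 610/612 against the hash store at 614, then API 616 and UI 618) can be sketched as one loop over the people detected in a frame. This is an illustration only: the detector, signature function, and notifier are stubbed, and all function names are hypothetical:

```python
from typing import Callable, List

def run_pipeline(frame,
                 detect_people: Callable,  # 608: returns person crops in the frame
                 hash_person: Callable,    # signature function used by 610/612
                 hash_store: dict,         # 614: person hash store
                 notify: Callable) -> List[str]:
    """One iteration of the FIG. 6 loop for a single camera frame."""
    labels = []
    for crop in detect_people(frame):          # 608: person detection
        signature = hash_person(crop)          # 610: person identification
        person_id = None
        for known_id, known_sig in hash_store.items():
            if signature == known_sig:         # 612: re-identification
                person_id = known_id
                break
        if person_id is None:                  # first sighting: enroll in 614
            person_id = f"person-{len(hash_store) + 1}"
            hash_store[person_id] = signature
        labels.append(person_id)
        notify(person_id)                      # 616/618: API endpoint -> UI
    return labels

# Stubbed example: the same crop seen twice keeps the same label,
# matching the FIG. 5A/5B behavior described earlier.
store, events = {}, []
labels = run_pipeline("frame",
                      detect_people=lambda f: ["cropA", "cropB", "cropA"],
                      hash_person=lambda c: c,
                      hash_store=store,
                      notify=events.append)
```

In a real deployment the exact-equality match at 612 would be replaced by a similarity threshold over signatures, since the same person rarely produces an identical crop across cameras.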
- A key feature of this disclosure is the ability for the security team to leverage all cameras at one time automatically. The location of person(s) of interest can be tracked across a location without violating the privacy of the person(s) of interest.
- This is traditionally known as person tracking/person re-identification. After persons are found in a frame, a signature representing their clothes, body type, skin tone, etc., is created. When a person becomes a perpetrator, their signature is saved. The signature can be generated through known mechanisms such as perceptual hashing, and through more advanced algorithms that provide unique identification of individual attributes by hashing subsections of the frame representing attribute markers, for example clothes color. To further enhance the ability to track persons moving through space, movement probability algorithms can also be employed, noting that a person in a frame is probably close to the place where that person was last identified. As other people are seen in other cameras, their signatures are compared. If a signature is found that is close to the perpetrator's, then security is notified.
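The signature-and-proximity idea above can be made concrete with a minimal sketch. The average hash below is a toy stand-in for the perceptual hashing the disclosure mentions, and the 0.7/0.3 weighting of appearance similarity against the movement prior is purely illustrative; a deployed system would tune or learn such weights.

```python
from typing import List

def average_hash(gray: List[List[int]]) -> int:
    """Tiny perceptual hash: one bit per pixel, set if above the mean."""
    flat = [p for row in gray for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for i, p in enumerate(flat):
        if p > mean:
            bits |= 1 << i
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two signatures."""
    return bin(a ^ b).count("1")

def match_score(sig_a: int, sig_b: int, dist_px: float,
                max_bits: int = 16, max_dist: float = 500.0) -> float:
    """Combine appearance similarity with a movement prior: a candidate
    near the person's last known position scores higher."""
    appearance = 1.0 - hamming(sig_a, sig_b) / max_bits
    proximity = max(0.0, 1.0 - dist_px / max_dist)
    return 0.7 * appearance + 0.3 * proximity  # weights are illustrative
```

If the best `match_score` across another camera's detections exceeds a chosen threshold, the perpetrator is treated as re-identified and security is notified.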
- According to embodiments of this disclosure, a system for using CCTV cameras simultaneously to find a person of interest in real-time comprises a camera detection system to capture videos, a computer processor to process the video images, a software module to analyze frames of the video images, a means to identify a person of interest, and a notification module to send a notification. Note that in practice, the video image may also be an optical video image, infrared image, LIDAR image, Doppler image, an image based on RF scanning, a magnetic signature image, a thermal image, or a composite image formed from combinations of these imaging technologies.
- According to the disclosure, the notification module sends the notification to a security team to provide confirmation of tracking the person of interest in a video feed scene. Furthermore, upon confirmation by the security team, the system is enabled to continuously track the person of interest in the video feed scene.
- The camera detection system further comprises CCTV cameras, and the means to identify a person of interest operates either manually via a user or automatically via computer software or software algorithms. The software algorithm is executed only if there is a notification event for which the person of interest alert is triggered. The notification event is selected from a list consisting of detection of a weapon, pulling out a weapon, high-velocity movements associated with fighting or escaping, abandonment of parcels, participation in unusual crowd activity such as threatening or fighting, throwing objects, proximity to sensitive areas such as restricted-access doors, entering restricted areas, and the like. The notification module includes sending an email, text message (SMS), instant message, or voice call, and notifying a security center user interface or mobile application.
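Gating the algorithm on a notification event, as described above, amounts to a simple dispatch check. The event names below are paraphrases of the listed triggers, not identifiers from the disclosure.

```python
# Illustrative paraphrases of the disclosed trigger events.
TRIGGER_EVENTS = {
    "weapon_detected", "weapon_drawn", "high_velocity_movement",
    "parcel_abandoned", "crowd_disturbance", "object_thrown",
    "near_restricted_area", "entered_restricted_area",
}

def maybe_run_reid(event: str, run_reid) -> bool:
    """Execute the re-identification algorithm only when a notification
    event for a person-of-interest alert has been triggered."""
    if event in TRIGGER_EVENTS:
        run_reid()
        return True
    return False
```

Non-triggering observations (e.g., ordinary foot traffic) pass through without invoking the algorithm, which also limits unnecessary processing of people who are not of interest.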
- According to further embodiments, a computer-implemented method for using CCTV cameras simultaneously to find a person of interest in real-time comprises the steps of receiving a video dataset from a camera detection system, analyzing image frames of the video dataset by a computer processor, identifying a person of interest in the video dataset image frames, sending a notification to a security team, receiving a confirmation from the security team to track the person of interest in video feed scenes, and enabling the system to continuously track the person of interest in the video feed scenes. According to the method, the step of identifying a person of interest is conducted manually by a user or automatically through supplemental computer software.
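The claimed sequence of steps — identify, notify, await confirmation, then track continuously — can be sketched as a loop. `identify` and `request_confirmation` are hypothetical callables standing in for the detection software and the security team's response channel; neither name comes from the disclosure.

```python
def track_person_of_interest(frames, identify, request_confirmation, on_position):
    """Sketch of the claimed method: identify a person of interest, notify
    the security team, and begin continuous tracking only after the team
    confirms the person should be tracked."""
    confirmed = False
    for frame in frames:
        pos = identify(frame)      # None if the person is not in this frame
        if pos is None:
            continue
        if not confirmed:
            # Send the notification and wait for the team's decision.
            confirmed = request_confirmation(pos)
            if not confirmed:
                continue
        on_position(pos)           # continuous tracking once confirmed
```

Once the security team confirms, every subsequent sighting is forwarded, matching the "enabling the system to continuously track" step.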
- Implementations disclosed herein provide systems, methods and apparatus for generating or augmenting training data sets for machine learning training. The functions described herein may be stored as one or more instructions on a processor-readable or computer-readable medium. The term “computer-readable medium” refers to any available medium that can be accessed by a computer or processor. By way of example, and not limitation, such a medium may comprise RAM, ROM, EEPROM, flash memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. It should be noted that a computer-readable medium may be tangible and non-transitory. As used herein, the term “code” may refer to software, instructions, code or data that is/are executable by a computing device or processor. A “module” can be considered as a processor executing computer-readable code.
- A processor as described herein can be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, combinations of the same, or the like. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. For example, any of the signal processing algorithms described herein may be implemented in analog circuitry. In some embodiments, a processor can be a graphics processing unit (GPU). The parallel processing capabilities of GPUs can reduce the amount of time for training and using neural networks (and other machine learning models) compared to central processing units (CPUs). In some embodiments, a processor can be an ASIC including dedicated machine learning circuitry custom-built for one or both of model training and model inference.
- The disclosed or illustrated tasks can be distributed across multiple processors or computing devices of a computer system, including computing devices that are geographically distributed. The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is required for proper operation of the method that is being described, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
- As used herein, the term “plurality” denotes two or more. For example, a plurality of components indicates two or more components. The term “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and the like.
- The phrase “based on” does not mean “based only on,” unless expressly specified otherwise. In other words, the phrase “based on” describes both “based only on” and “based at least on.” While the foregoing written description of the system enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The system should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope and spirit of the system. Thus, the present disclosure is not intended to be limited to the implementations shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (16)
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/548,494 US20220189266A1 (en) | 2020-12-11 | 2021-12-11 | System and method for real-time multi-person threat tracking and re-identification |
| US17/652,943 US11881024B2 (en) | 2020-12-11 | 2022-03-01 | System and method for utilizing heat maps for traffic and compliance reporting |
| CA3150803A CA3150803A1 (en) | 2020-12-11 | 2022-03-01 | System and method for utilizing heat maps for traffic and compliance reporting |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202063124108P | 2020-12-11 | 2020-12-11 | |
| US17/548,494 US20220189266A1 (en) | 2020-12-11 | 2021-12-11 | System and method for real-time multi-person threat tracking and re-identification |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/330,971 Continuation US12051241B2 (en) | 2020-05-26 | 2021-05-26 | Sensor systems and methods for facility operation management |
Related Child Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/330,971 Continuation US12051241B2 (en) | 2020-05-26 | 2021-05-26 | Sensor systems and methods for facility operation management |
| US17/652,943 Continuation US11881024B2 (en) | 2020-12-11 | 2022-03-01 | System and method for utilizing heat maps for traffic and compliance reporting |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20220189266A1 true US20220189266A1 (en) | 2022-06-16 |
Family
ID=81927031
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/548,494 Abandoned US20220189266A1 (en) | 2020-12-11 | 2021-12-11 | System and method for real-time multi-person threat tracking and re-identification |
| US17/652,943 Active US11881024B2 (en) | 2020-12-11 | 2022-03-01 | System and method for utilizing heat maps for traffic and compliance reporting |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/652,943 Active US11881024B2 (en) | 2020-12-11 | 2022-03-01 | System and method for utilizing heat maps for traffic and compliance reporting |
Country Status (2)
| Country | Link |
|---|---|
| US (2) | US20220189266A1 (en) |
| CA (2) | CA3141974A1 (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20240143645A1 (en) * | 2022-11-01 | 2024-05-02 | Getac Technology Corporation | Item analysis and linking across multiple multimedia files |
| US20250124712A1 (en) * | 2023-10-16 | 2025-04-17 | Motorola Solutions, Inc. | System and method for reconfiguring a second camera based on a first camera |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115565322A (en) * | 2022-09-16 | 2023-01-03 | 泰安万川电器设备有限公司 | Inclined roadway transportation system and method for preventing people from entering into inclined roadway transportation system by mistake |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111263114A (en) * | 2020-02-14 | 2020-06-09 | 北京百度网讯科技有限公司 | Abnormal event alarm method and device |
| US11082668B2 (en) * | 2008-05-05 | 2021-08-03 | Iomniscient Pty Ltd | System and method for electronic surveillance |
Family Cites Families (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4865557B2 (en) * | 2003-08-15 | 2012-02-01 | スカーペ テクノロジーズ アクティーゼルスカブ | Computer vision system for classification and spatial localization of bounded 3D objects |
| US20140132767A1 (en) * | 2010-07-31 | 2014-05-15 | Eric Sonnabend | Parking Information Collection System and Method |
| EP2632322A1 (en) * | 2010-10-29 | 2013-09-04 | IT-universitetet I København | Method of determining reflections of light |
| US10027952B2 (en) * | 2011-08-04 | 2018-07-17 | Trx Systems, Inc. | Mapping and tracking system with features in three-dimensional space |
| AU2015220463B2 (en) * | 2014-02-10 | 2017-06-01 | Geenee Gmbh | Systems and methods for image-Feature-Based Recognition |
| US10157372B2 (en) * | 2015-06-26 | 2018-12-18 | Amazon Technologies, Inc. | Detection and interpretation of visual indicators |
| US10311409B1 (en) * | 2015-06-26 | 2019-06-04 | Amazon Technologies, Inc. | Detection and interpretation of visual indicators |
| WO2019136404A1 (en) * | 2018-01-05 | 2019-07-11 | Todd Petersen | Transportation systems and related methods |
| US20210158430A1 (en) * | 2018-07-16 | 2021-05-27 | Accel Robotics Corporation | System that performs selective manual review of shopping carts in an automated store |
| US10997858B2 (en) * | 2019-01-08 | 2021-05-04 | Continental Automotive Systems, Inc. | System and method for determining parking occupancy detection using a heat map |
| US11288842B2 (en) * | 2019-02-15 | 2022-03-29 | Interaptix Inc. | Method and system for re-projecting and combining sensor data for visualization |
| US10796567B1 (en) * | 2019-04-17 | 2020-10-06 | Capital One Services, Llc | Vehicle identification based on machine-readable optical marker |
| US11783658B2 (en) * | 2020-06-15 | 2023-10-10 | Honeywell International Inc. | Methods and systems for maintaining a healthy building |
| US20220034542A1 (en) * | 2020-08-03 | 2022-02-03 | Trane International Inc. | Systems and methods for indoor air quality based on dynamic people modeling to simulate or monitor airflow impact on pathogen spread in an indoor space and to model an indoor space with pathogen killing technology, and systems and methods to control administration of a pathogen killing technology |
-
2021
- 2021-12-11 CA CA3141974A patent/CA3141974A1/en active Pending
- 2021-12-11 US US17/548,494 patent/US20220189266A1/en not_active Abandoned
-
2022
- 2022-03-01 CA CA3150803A patent/CA3150803A1/en active Pending
- 2022-03-01 US US17/652,943 patent/US11881024B2/en active Active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11082668B2 (en) * | 2008-05-05 | 2021-08-03 | Iomniscient Pty Ltd | System and method for electronic surveillance |
| CN111263114A (en) * | 2020-02-14 | 2020-06-09 | 北京百度网讯科技有限公司 | Abnormal event alarm method and device |
Non-Patent Citations (3)
Also Published As
| Publication number | Publication date |
|---|---|
| US20220189169A1 (en) | 2022-06-16 |
| US11881024B2 (en) | 2024-01-23 |
| CA3150803A1 (en) | 2022-09-01 |
| CA3141974A1 (en) | 2022-06-11 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110933955B (en) | Improved generation of alarm events based on detection of objects from camera images | |
| Mishra et al. | An intelligent motion detection using OpenCV | |
| US11935303B2 (en) | System and method for mitigating crowd panic detection | |
| US20180314897A1 (en) | Surveillance System with Human Behavior Prediction by Human Action Recognition | |
| US20220189266A1 (en) | System and method for real-time multi-person threat tracking and re-identification | |
| US20160019427A1 (en) | Video surveillence system for detecting firearms | |
| US12253605B2 (en) | Individual identification and tracking via combined video and LiDAR systems | |
| Satybaldina et al. | Development of an algorithm for abnormal human behavior detection in intelligent video surveillance system | |
| Bushra et al. | Predicting anomalous and consigning apprise during heists | |
| US20210366072A1 (en) | System and method for situational awareness assist view | |
| Frejlichowski et al. | SmartMonitor: An approach to simple, intelligent and affordable visual surveillance system | |
| Narayanan et al. | Real-time video surveillance system for detecting malicious actions and weapons in public spaces | |
| Saketh et al. | Intelligent surveillance support system | |
| Hanief Wani et al. | Deep learning-based video surveillance system for suspicious activity detection | |
| Aishwarya et al. | A Deep Learning Strategy for Abnormal Object Detection by YOLOv8 Model | |
| Kapoor et al. | Real-Time Casualty Detection System Using CCTV Surveillance: A Deep Learning Approach | |
| GB2602790A (en) | A method and apparatus for controlling actions of a monitoring system | |
| Hunain et al. | A framework using artificial intelligence for vision-based automated firearm detection and reporting in smart cities | |
| Mallikharjuna Rao et al. | Computer Vision-Based Self-inflicted Violence Detection in High-Rise Environments Using Deep Learning | |
| Shukla et al. | Human Suspicious Activity Recognition | |
| Dijk et al. | Intelligent sensor networks for surveillance | |
| Saleem et al. | AI-Empowered Intruder Alert System for Enhanced Security Surveillance | |
| Kotal et al. | Nayantara: Crime Analysis from CCTV Footage Using MobileNet-V2 and Transfer Learning | |
| Dev et al. | Holographic Video Surveillance Systems: A Comprehensive Analysis | |
| Rajah et al. | CCTV armed robbery detection with YOLOv8 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| AS | Assignment |
Owner name: PATRIOT ONE TECHNOLOGIES INC., CANADA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MILLAR, JONATHAN;CAMERON, JAMES ALLAN DOUGLAS;MUNZ, PHILLIP KONRAD;AND OTHERS;REEL/FRAME:059265/0447 Effective date: 20211221 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
| STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |