GB2631286A - A system for monitoring vehicular traffic - Google Patents
- Publication number
- GB2631286A GB2309541.7A GB202309541A
- Authority
- GB
- United Kingdom
- Prior art keywords
- vehicle
- determined
- camera
- bounding box
- controller
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/625—License plates
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0108—Measuring and analyzing of parameters relative to traffic conditions based on the source of data
- G08G1/0116—Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/04—Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/69—Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
Method for monitoring traffic V1-4, comprising: monitoring images from a pan-tilt-zoom (PTZ) camera 110 in a wide-angle (broad) field of view mode and detecting vehicles in the images (Fig.7A); determining when a vehicle has performed a predetermined action; and on detection of the action narrowing the field of view (zooming in) of the PTZ camera on the vehicle (Fig.7B). Predetermined actions may be entering or stopping in a predetermined or restricted area 10. Bounding boxes (11, Fig.4) may be placed around vehicles and movement of the bounding boxes tracked. Determining a vehicle has performed an illegal action may comprise detecting an overlap (12, Fig.4A) between bounding box and pre-determined area and either: defining an outline of the vehicle (13, Fig.4B) and detecting an overlap (O, Fig.4C) between the outline and pre-determined area; or determining the position of the wheels (W1-4, Fig.6B) and determining whether a lowest point of the wheels (P1-4, Fig.6C) is within the area. The system may locate and identify a vehicle number plate. A second embodiment defines a method for monitoring objects, comprising: defining a bounding box around a detected object; tracking movement of the bounding box; determining the bounding box is within a pre-determined area; defining an object outline; and determining whether the outline overlaps the pre-determined area.
Description
A System for Monitoring Vehicular Traffic
Technical Field
The present disclosure relates to a system for monitoring vehicular traffic, and a computer-implemented method for monitoring an object in a field of view of a camera relative to a predetermined area within the field of view.
Background
Cameras for monitoring vehicular traffic, also known as traffic enforcement cameras, are cameras which are mounted beside or over a road to monitor the vehicular traffic on the road and detect traffic violations committed by a particular vehicle. These traffic enforcement cameras can monitor a wide range of traffic situations, including, for example, special lane enforcement, red light enforcement, speed limit enforcement or box junction monitoring.
In some cases, the traffic enforcement cameras include automatic number plate recognition (ANPR) systems which can automatically identify and read number plates in images captured by a traffic enforcement camera. However, in order for an ANPR system to accurately and effectively read a number plate, the quality of the images must be sufficiently high to allow the individual characters of the number plate to be recognised. This usually requires a high-resolution camera which is more expensive and will have greater bandwidth and memory requirements for processing the images, making the system more expensive to purchase, maintain and operate.
Multi-camera systems are also known which have multiple fixed cameras including at least a wide-angle camera for providing an overview and to capture video of a predetermined traffic environment and a narrow-focus camera directed to capture still images zoomed in on a predetermined point where a traffic violation is anticipated to occur. However, such systems typically require multiple narrow-focus cameras to capture potential violations from different possible angles and are expensive to purchase and maintain. Such systems are normally manually calibrated and have to be adjusted on-site by trained and authorised technicians if the camera becomes dislodged or loses focus for any reason. Such systems also require the multiple images taken from the multiple cameras to be manually searched and combined in order to present evidence of a potential traffic violation. Such searching can be time consuming and prone to images being lost or irretrievable and prone to human error.
In view of this, there is a need in the art for a cost-effective solution for a system for monitoring vehicular traffic which can accurately and effectively identify vehicles with a lower-resolution camera having associated lower operational bandwidth and memory requirements and reduced capital and maintenance costs.
It is also desirable to provide a system that can be readily adapted to many different use scenarios and that can use a single camera for monitoring multiple types of traffic violations, and/or for monitoring traffic violations in multiple variable positions not limited to a single fixed field of view.
Summary
In a first aspect of the present disclosure, there is provided a system for monitoring vehicular traffic. The system comprises a pan-tilt-zoom (PTZ) camera having a wide-angle configuration and a zoomed-in configuration; and a controller for operating the PTZ camera. The controller is configured to: monitor images from the PTZ camera in the wide-angle configuration and detect a vehicle in the images using a vehicle detection algorithm; determine when the vehicle has performed a pre-determined action; and when it is determined that the vehicle has performed the pre-determined action, focus the PTZ camera on the vehicle in the zoomed-in configuration.
In some embodiments, this may result in a system for monitoring vehicular traffic which can accurately and effectively identify vehicles with a lower-resolution camera requiring lower operational bandwidth and memory and having reduced capital and maintenance costs.
Throughout this disclosure, the term pan-tilt-zoom (PTZ) camera is used to describe a camera which has the capability to pan horizontally, tilt vertically and optically zoom in and out by adjusting the focal length of the camera.
Embodiments of the system may also be readily adapted to many different use scenarios. A system with a single PTZ camera can be set for monitoring multiple types of traffic violations simultaneously or selectively. A system with a single PTZ camera can also, or alternatively, be used for monitoring traffic violations in multiple variable positions within its wide-angle field of view and is not limited to a single fixed field of view for the zoomed-in configuration.
The controller may determine that the vehicle has performed a pre-determined action when the vehicle has entered a predetermined area or when the vehicle has stopped within a predetermined area.
In some embodiments, this may allow the system to accurately and effectively monitor vehicles in a specific pre-determined area.
The controller may be further configured to track movement of the vehicle.
In some embodiments, this may allow the system to more effectively determine different pre-determined actions performed by the vehicle.
Tracking movement of the vehicle may comprise defining a bounding box around the detected vehicle in the monitored images and tracking movement of the bounding box.
In some embodiments, this may provide an effective way to track movement of the vehicle.
Throughout this disclosure, the term bounding box is used to refer to the smallest rectangle with vertical and horizontal sides that completely surrounds a detected object in images of the camera.
Determining when the vehicle has performed a pre-determined action may comprise determining when the bounding box is at least partially within the pre-determined area.
In some embodiments, this may allow for a quick and efficient determination of whether the vehicle is positioned within the pre-determined area.
Determining when the vehicle has performed a pre-determined action may comprise determining when the bounding box is at least partially within the pre-determined area and is stationary.
In some embodiments, this may allow for a quick and efficient determination of whether the vehicle is positioned and stationary within the pre-determined area.
Determining when the vehicle has performed a pre-determined action may further comprise defining an outline of the detected vehicle in the captured image when it is determined that the bounding box is at least partially within the predetermined area; and determining whether a threshold amount of the area of the outlined vehicle overlaps with the predetermined area.
In some embodiments, this may allow for an accurate determination of whether the vehicle is at least partially positioned within the pre-determined area.
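The outline-overlap test could be sketched as follows. The patent does not prescribe an algorithm for this step; the sketch below is my assumption, clipping the vehicle outline against a convex zone polygon with Sutherland-Hodgman clipping and comparing the overlapping fraction of the outline's area to the threshold. All function names are illustrative.

```python
def polygon_area(poly):
    """Area of a simple polygon via the shoelace formula."""
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1] - poly[(i + 1) % n][0] * poly[i][1]
            for i in range(n))
    return abs(s) / 2

def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip `subject` against a convex, counter-clockwise `clip` polygon."""
    def inside(p, a, b):  # p on or to the left of directed edge a->b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0
    def cross_point(s, p, a, b):  # intersection of segment s-p with the line through a-b
        d1 = (p[0] - s[0], p[1] - s[1])
        d2 = (b[0] - a[0], b[1] - a[1])
        t = ((a[0] - s[0]) * d2[1] - (a[1] - s[1]) * d2[0]) / (d1[0] * d2[1] - d1[1] * d2[0])
        return (s[0] + t * d1[0], s[1] + t * d1[1])
    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        if not output:
            break
        input_pts, output = output, []
        s = input_pts[-1]
        for p in input_pts:
            if inside(p, a, b):
                if not inside(s, a, b):
                    output.append(cross_point(s, p, a, b))
                output.append(p)
            elif inside(s, a, b):
                output.append(cross_point(s, p, a, b))
            s = p
    return output

def overlap_fraction(outline, zone):
    """Fraction of the vehicle outline's area lying inside the zone."""
    return polygon_area(clip_polygon(outline, zone)) / polygon_area(outline)
```

A vehicle would then be flagged when the returned fraction meets or exceeds the configured threshold amount.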
Determining when the vehicle has performed a pre-determined action may further comprise determining a position of one or more of the wheels of the detected vehicle in the captured image when it is determined that the bounding box is at least partially within the pre-determined area; and determining whether a lowest point of any of the one or more wheels of the detected vehicle is positioned within the pre-determined area.
In some embodiments, this may allow for an accurate determination of whether the vehicle is at least partially positioned within the pre-determined area.
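The wheel-based check could be sketched as follows. Approximating each wheel's lowest point as the bottom-centre of its bounding box is my assumption, as is the use of a standard ray-casting point-in-polygon test; the patent does not specify either.

```python
def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray through pt
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def any_wheel_in_zone(wheel_boxes, zone):
    """True if the lowest point of any wheel box (x1, y1, x2, y2) lies in the zone.

    Image coordinates: y grows downwards, so y2 is the wheel's lowest point.
    """
    return any(point_in_polygon(((x1 + x2) / 2, y2), zone)
               for x1, y1, x2, y2 in wheel_boxes)
```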
The controller may be further configured to identify the position of a number plate within the bounding box.
The controller may be further configured to identify a number plate of the vehicle with an automatic number plate recognition algorithm in the zoomed-in configuration.
In some embodiments, this may allow for an accurate determination of the vehicle number plate which reduces bandwidth requirements.
The controller may be further configured to verify that the number plate is stationary.
In some embodiments, this may result in a more accurate determination of whether the vehicle is stationary.
The camera may be calibrated at a plurality of different zoom levels.
In some embodiments, this may result in improved image quality in the zoomed-in configuration and improved identification of a vehicle.
The PTZ camera may have a set of pan, tilt and zoom values for the wide-angle configuration. Focusing the PTZ camera on the vehicle in the zoomed-in configuration may comprise calculating an adjusted set of pan, tilt and zoom values for the PTZ camera and adjusting the camera to the adjusted set of pan, tilt and zoom values.
In some embodiments, this may facilitate identifying a traffic violation across a continuum of potential vehicle positions and not limited to a particular anticipated vehicle position within a fixed narrow field of view.
Calculating the adjusted pan and tilt values may comprise determining an error between the centre of the image of the wide-angle configuration and a centre of a bounding box placed around the detected vehicle.
In some embodiments, this may allow for quick and accurate zooming in on the vehicle.
Calculating the adjusted zoom value may comprise determining the smallest rectangle that fully contains a bounding box placed around the detected vehicle whilst maintaining the same aspect ratio of the image of the wide-angle configuration.
In some embodiments, this may allow for quick and accurate zooming in on the vehicle.
The controller may be further configured to determine a speed and/or acceleration of the detected vehicle.
The vehicle detection algorithm may include or access an Artificial Neural Network. The vehicle detection algorithm may include or access a Convolutional Neural Network.
In some embodiments, this may allow for an accurate and effective detection of the vehicle.
The controller may be configured to store a still image from the camera in the wide-angle configuration after detection of a vehicle using the vehicle detection algorithm.
In some embodiments, this may allow an image of the vehicle to be stored for later access and identification.
The controller may be configured to store video footage from the camera after detection of the vehicle using the vehicle detection algorithm.
In some embodiments, this may allow a video of the vehicle performing a pre-determined action to be stored for later access and identification.
The controller may be configured not to perform any number plate recognition when the camera is in the wide-angle configuration.
In some embodiments, this may reduce the memory and processor requirements of the system.
The controller may be configured to detect a plurality of vehicles using the vehicle detection algorithm when monitoring images from the PTZ camera in the wide-angle configuration.
In some embodiments, this may allow the system to monitor multiple vehicles simultaneously.
In a second aspect of the present disclosure, there is provided a computer-implemented method for monitoring an object in a field of view of a camera relative to a predetermined area within the field of view. The method comprises: detecting an object in images captured by the camera; defining a bounding box around the detected object in the captured images; tracking movement of the bounding box across the captured images; determining when the bounding box is at least partially within the pre-determined area; defining an outline of the detected object in the captured image when it is determined that the bounding box is at least partially within the pre-determined area; and determining whether the outline of the object is at least partially within the pre-determined area.
In some embodiments, this may provide a method for efficiently and accurately determining whether an object is positioned within a pre-determined area.
The object may be a vehicle.
In some embodiments, this may allow a system to efficiently and accurately monitor vehicular traffic.
The method may further comprise determining that the bounding box is stationary prior to defining an outline of the detected object in the captured image.
In some embodiments, this may provide a method for efficiently and accurately determining whether an object is positioned and stationary within a pre-determined area.
The method may further comprise calculating a set of pan, tilt and zoom values for a pan-tilt-zoom camera to move from a wide-angle configuration to a zoomed-in configuration focused on the object after it has been determined that the outline of the object is at least partially within the pre-determined area.
In some embodiments, this may allow for accurate and effective identification of vehicles with lower bandwidth and memory requirements.
The method may further comprise controlling the camera to move from a wide-angle configuration to a zoomed in configuration focused on the object after it has been determined that the outline of the object is at least partially within the pre-determined area.
In some embodiments, this may allow for accurate and effective identification of vehicles with lower bandwidth and memory requirements.
Brief Description of the Drawings
To enable better understanding of the present disclosure, and to show how the same may be carried into effect, reference will now be made, by way of example only, to the accompanying drawings, in which:
FIG. 1 shows a system for monitoring vehicular traffic in accordance with one or more embodiments described herein.
FIG. 2A shows a flow diagram for a method of monitoring vehicular traffic.
FIG. 2B shows a flow diagram for a method of monitoring vehicular traffic.
FIG. 3 shows a flow diagram for a method of determining when a vehicle has performed a pre-determined action in the method of monitoring vehicular traffic of FIG. 2.
FIGS. 4A to 4C illustrate a method of determining when a vehicle has performed a pre-determined action.
FIG. 5 shows a flow diagram for a method of determining when a vehicle has performed a pre-determined action in the method of monitoring vehicular traffic of FIG. 2.
FIGS. 6A to 6C illustrate a method of determining when a vehicle has performed a pre-determined action.
FIGS. 7A and 7B illustrate the step of focusing a PTZ camera on a vehicle in a zoomed in configuration in the method of monitoring vehicular traffic of FIG. 2.
FIGS. 8A to 8D illustrate a method of determining the location of a number plate on a vehicle.
FIG. 9 shows a flow diagram for a method of monitoring an object in a field of view of a camera relative to a predetermined area within the field of view.
Detailed Description
FIG. 1 shows a system 100 for monitoring vehicular traffic. The system 100 comprises a pan-tilt-zoom (PTZ) camera 110 and a controller 120 for operating the PTZ camera 110. The PTZ camera 110 and the controller 120 may be integrated into a single package, as shown in FIG. 1, and installed on a pole 130.
The PTZ camera 110 may be configured to monitor the vehicular traffic in a pre-determined area 10, such as a box junction B shown in FIG. 1. In the embodiment of FIG. 1, the predetermined area 10 corresponds to the box junction B; however, the pre-determined area 10 may in some cases be larger or smaller than the box junction B and does not necessarily correspond exactly to the box junction B. The PTZ camera 110 may be installed such that it can observe the entire pre-determined area 10, for example, by mounting the PTZ camera 110 on a pole 130 overlooking the box junction B. The PTZ camera 110 is then able to pan horizontally, tilt vertically and zoom in and out to monitor the entire predetermined area 10. The PTZ camera 110 can operate in a wide-angle configuration where the PTZ camera 110 is zoomed out such that the entire pre-determined area 10 is visible, and a zoomed-in configuration where the PTZ camera 110 is zoomed in on a specific vehicle. This allows the PTZ camera 110 to observe and monitor when any vehicles enter the predetermined area 10 as well as determine when a vehicle stops in the pre-determined area 10. The system 100 may be configured to monitor a wide range of vehicles such as cars V1 and V2, buses V3 and motorcycles V4.
The controller 120 is in communication with the PTZ camera 110 and configured to monitor images captured by the PTZ camera 110 and control operations of the PTZ camera 110. The controller 120 may be in communication with the PTZ camera 110 through a wired connection or a wireless connection such as Wireless LAN. The controller 120 may include a processor 121, for example, a mobile CPU with an optional hardware accelerator such as a GPU, and memory 122. The memory 122 may have a vehicle detection algorithm stored thereon which can be accessed and executed by the processor 121 in order to detect a vehicle in the images captured by the PTZ camera 110. The vehicle detection algorithm may include or access an Artificial Neural Network such as a Convolutional Neural Network. Examples of vehicle detection algorithms which may be used include Region-based Convolutional Neural Networks (R-CNN), Fast R-CNN, Region-based Fully Convolutional Networks (R-FCN) and You Only Look Once (YOLO). The controller 120 may also include or have access to an automatic number plate recognition (ANPR) algorithm. For example, the ANPR algorithm may be stored within the memory 122 and accessed by the processor 121 to be applied to images from the PTZ camera 110. This may allow the ANPR to operate in as close to real-time as possible. Alternatively, the ANPR algorithm may be stored outside of the controller 120, for example on an external memory or server, and accessed by the processor 121 to be applied to images from the PTZ camera 110. The controller 120 may also have a power supply 123 for providing power to the controller 120 and the PTZ camera 110. The power supply 123 may include a battery, a connection to an external source of power such as a power grid, or both.
The operation of the system 100 in order to monitor vehicular traffic is explained with reference to FIG. 2A which shows a flow chart for a method 200A of monitoring vehicular traffic.
In step 220, the PTZ camera 110 is operated in a wide-angle mode. In the wide-angle mode, the PTZ camera 110 is zoomed out such that it can capture images of a pre-determined area 10, such as box junction B. While the PTZ camera 110 is operated in the wide-angle mode, it captures images which are sent to the controller 120 and monitored in real-time by the controller 120.
In step 230, the controller 120 detects a vehicle in the wide-angle images of the PTZ camera 110 using a vehicle detection algorithm. As noted above, the vehicle detection algorithm may be stored within the memory 122 of controller 120 or may be stored externally and accessed by the controller 120. The vehicle detection algorithm may include or access an Artificial Neural Network such as a Convolutional Neural Network. Once a vehicle has been detected, a bounding box may be placed around the vehicle in the images to identify the vehicle and allow movement of the vehicle to be tracked. The vehicle detection algorithm may be trained to identify a number of different vehicles such as cars, buses, trucks, or motorcycles.
The vehicle detection algorithm may operate in the entire field of view of the PTZ camera 110 and is not limited to only the pre-determined area 10. As soon as a new vehicle enters the field of view of the PTZ camera 110, it is detected by the vehicle detection algorithm. Multiple vehicles may be detected by the vehicle detection algorithm in the same image and the movement of the vehicles may be tracked simultaneously across different images.
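Tracking several detected vehicles simultaneously across frames can be done by associating each existing track with the new detection whose bounding box overlaps it most (intersection over union). The patent does not specify a tracking method; this greedy IoU matcher is an illustrative assumption, with hypothetical function names.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def update_tracks(tracks, detections, threshold=0.3):
    """Greedily match tracked boxes to the current frame's detections.

    `tracks` maps a track id to its last known bounding box; detections
    left unmatched start new tracks. Illustrative sketch only.
    """
    unmatched = list(detections)
    for tid, box in list(tracks.items()):
        best = max(unmatched, key=lambda d: iou(box, d), default=None)
        if best is not None and iou(box, best) >= threshold:
            tracks[tid] = best       # track continues with the new box
            unmatched.remove(best)
    next_id = max(tracks, default=-1) + 1
    for det in unmatched:            # a newly arrived vehicle
        tracks[next_id] = det
        next_id += 1
    return tracks
```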
As soon as a new vehicle is detected, the controller 120 may start to record video footage from the PTZ camera 110 and store this in memory 122. The controller 120 may also store a pre-recording of video footage of 10 to 20 seconds prior to detection of the new vehicle and store this in memory 122. The controller 120 will continue to record and store the video footage until the vehicle has performed the predetermined action. If the vehicle does not perform the predetermined action, then the video footage is deleted.
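The 10 to 20 second pre-recording described above can be held in a fixed-length ring buffer that always contains the most recent frames; on detection, the buffered frames become the start of the stored clip. A minimal sketch, with class and parameter names of my own choosing:

```python
from collections import deque

class PreRecordBuffer:
    """Ring buffer holding the last `seconds` of frames at `fps`.

    Illustrative sketch of the pre-recording step; a frame here could be
    encoded JPEG bytes, a numpy array, or similar.
    """
    def __init__(self, fps=25, seconds=15):
        self._frames = deque(maxlen=fps * seconds)

    def push(self, frame):
        self._frames.append(frame)  # the oldest frame drops out automatically

    def start_clip(self):
        """Called on vehicle detection: returns the pre-roll to prepend to the recording."""
        return list(self._frames)
```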
As soon as a new vehicle is detected by the vehicle detection algorithm in the images captured by the PTZ camera 110, the controller 120 also stores the still image in which that vehicle was first detected in memory 122. The controller 120 may also store a frame counter of the PTZ camera 110 or a timestamp together with the still image. The still image and the frame counter or time stamp may be used later to provide an image of the first observation of the vehicle and may help to extract the relevant segment of the video footage.
While the PTZ camera 110 is operating in the wide-angle configuration, no automatic number plate recognition (ANPR) need be performed on the images captured by the PTZ camera 110. This allows the resolution of the PTZ camera to be lower as the number plates do not need to be legible and recognizable in the wide-angle configuration. This reduces the memory and bandwidth requirements for the system 100 and also reduces the cost of the PTZ camera 110.
In step 240, the controller 120 determines when a vehicle has performed a pre-determined action. The system 100 may be applied in a number of different settings and circumstances to monitor vehicular traffic. The pre-determined action may be a traffic violation committed by the vehicle such as, for example, driving faster than the speed limit, failing to stop at a red light, driving in a prohibited lane or stopping in a pre-determined area 10 such as box junction B. The controller 120 may be configured to determine a speed and/or an acceleration of the detected vehicle to determine if the predetermined action has been performed, for example, if the pre-determined action includes driving faster than the speed limit or failing to stop at a red light. For the case where the pre-determined action includes determining whether a vehicle has stopped in a pre-determined area, a more detailed description of this method step is provided with reference to FIG. 3 below.
Once it has been determined that the vehicle has performed the pre-determined action, video footage of the vehicle performing the pre-determined action may be stored by the controller in memory 122 for later use.
In step 250, once it has been determined that a vehicle has performed a pre-determined action, the controller 120 controls the PTZ camera 110 to focus the PTZ camera on the vehicle in a zoomed-in configuration. In order to move the PTZ camera 110 from the wide-angle configuration to the zoomed-in configuration, the controller 120 calculates an adjusted set of pan, tilt and zoom values.
The calculation of the adjusted pan and tilt values allows the PTZ camera 110 to centre the new field of view on the vehicle. Firstly, the coordinates of the centre of the bounding box (C_x, C_y) of the vehicle are calculated. The current values of pan, tilt and zoom are retrieved from the PTZ camera 110 and the pan and tilt values are converted into angles in degrees. The current zoom value may be used to determine the current field of view of the PTZ camera 110. This may be done using the results of a calibration previously performed on the PTZ camera 110. The error in pixels (err_x, err_y) between the centre of the bounding box (C_x, C_y) and the centre of the image (O_x, O_y) is then calculated using the following equations:

(1) err_x = C_x - O_x

(2) err_y = C_y - O_y

From the current field of view of the PTZ camera 110, which may be obtained from the results of the calibration, it is then possible to convert the error in pixels into an error in degrees. The adjusted pan and tilt values are then calculated by subtracting the error in degrees from the current pan and tilt values obtained from the PTZ camera 110.
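The pan/tilt adjustment just described can be sketched in Python. This is an illustrative sketch rather than code from the patent: the function name is mine, the linear mapping from pixel error to angular error via the current field of view is an assumption (a calibrated camera model would be more exact), and the sign convention simply follows the subtraction described in the text.

```python
def adjusted_pan_tilt(bbox, image_size, pan_deg, tilt_deg, hfov_deg, vfov_deg):
    """Recentre the camera on a bounding box (x1, y1, x2, y2).

    Assumes pixel error maps linearly onto the current horizontal and
    vertical fields of view; real sign conventions vary by camera.
    """
    img_w, img_h = image_size
    # centre of the bounding box (C_x, C_y)
    c_x = (bbox[0] + bbox[2]) / 2
    c_y = (bbox[1] + bbox[3]) / 2
    # centre of the image (O_x, O_y)
    o_x, o_y = img_w / 2, img_h / 2
    # error in pixels, equations (1) and (2)
    err_x = c_x - o_x
    err_y = c_y - o_y
    # convert pixel error to degrees using the current field of view
    err_pan = err_x * hfov_deg / img_w
    err_tilt = err_y * vfov_deg / img_h
    return pan_deg - err_pan, tilt_deg - err_tilt
```

For example, a bounding box centred 192 px right of the image centre in a 1920 px wide frame with a 60-degree horizontal field of view yields a 6-degree pan correction.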
The calculation of the adjusted zoom value corresponds to the calculation of the level of magnification. The coordinates of the new field of view in pixels are calculated by finding the smallest rectangle that fully contains the bounding box of the vehicle whilst maintaining the same aspect ratio as the image in the wide-angle configuration. The adjusted zoom value may then be calculated from the ratio of the new field of view (i.e. the zoomed-in configuration) and the current field of view (i.e. the wide-angle configuration) and the parameters of the calibration. This results in a determination of the largest zoom value resulting in an image which still fully contains the bounding box of the vehicle. The adjusted zoom value is then applied to the PTZ camera 110 by the controller 120 to move the PTZ camera from the wide-angle configuration to the zoomed-in configuration. The calculation of the pan, tilt and zoom values is illustrated in further detail with respect to FIGS. 7A and 7B below.
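The zoom computation can be sketched similarly: grow the bounding box to the smallest rectangle with the image's aspect ratio, then take the ratio of widths as the magnification. Function and variable names are illustrative; a real implementation would also clamp the window to the image bounds and map the magnification onto the camera's zoom scale via the calibration.

```python
def zoom_window(bbox, image_size):
    """Smallest rectangle containing `bbox` with the image's aspect ratio.

    Returns the new field-of-view rectangle (x1, y1, x2, y2) and the
    magnification relative to the wide-angle view.
    """
    img_w, img_h = image_size
    aspect = img_w / img_h
    x1, y1, x2, y2 = bbox
    box_w, box_h = x2 - x1, y2 - y1
    # grow the narrower dimension until the aspect ratios match
    if box_w / box_h >= aspect:
        new_w, new_h = box_w, box_w / aspect
    else:
        new_w, new_h = box_h * aspect, box_h
    c_x, c_y = (x1 + x2) / 2, (y1 + y2) / 2
    window = (c_x - new_w / 2, c_y - new_h / 2,
              c_x + new_w / 2, c_y + new_h / 2)
    magnification = img_w / new_w  # largest zoom that still contains the box
    return window, magnification
```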
FIG. 2B shows another method 200B of monitoring vehicular traffic. The method steps 220, 230, 240 and 250 are identical to that of method 200A of FIG. 2A. Method 200B further comprises optional method step 210, where the PTZ camera 110 is calibrated, and optional method step 260 where a number plate of the vehicle is identified.
In step 210, the PTZ camera 110 may be calibrated. The PTZ camera 110 may be calibrated at a plurality of different zoom levels through a series of individual calibrations. At each zoom level, a calibration pattern is shown to the PTZ camera 110 at different angles and distances. The position of the calibration pattern on the image is detected using pattern recognition software and the image is saved as a file for post-processing, for example, in memory 122. A standard checkerboard calibration pattern may be used, for example. During the post-processing, the intrinsic parameters of the PTZ camera 110, such as the focal length and optical centres, are calculated and saved. The calibration process produces a look-up table that maps the zoom level to the intrinsic parameters of the PTZ camera 110. The calibration improves the image of the PTZ camera 110 at different zoom levels and minimises radial and tangential distortion of the image. The PTZ camera may be calibrated at 5 to 20 different zoom levels, preferably 10 to 15 different zoom levels.
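The calibration's output, a look-up table from zoom level to intrinsic parameters, might be represented and queried as below. All numeric values are hypothetical placeholders, and the linear interpolation between calibrated zoom levels is my assumption, not something the text specifies.

```python
import bisect

# Hypothetical look-up table: zoom level -> (fx, fy, cx, cy) in pixels,
# as would be produced by the per-zoom-level calibration described above.
CALIBRATION = {
    1.0: (1000.0, 1000.0, 960.0, 540.0),
    2.0: (2050.0, 2050.0, 958.0, 542.0),
    4.0: (4150.0, 4150.0, 955.0, 545.0),
}

def intrinsics_at(zoom):
    """Intrinsics at an arbitrary zoom, interpolating between calibrated levels."""
    levels = sorted(CALIBRATION)
    if zoom <= levels[0]:
        return CALIBRATION[levels[0]]
    if zoom >= levels[-1]:
        return CALIBRATION[levels[-1]]
    hi = bisect.bisect_left(levels, zoom)
    z0, z1 = levels[hi - 1], levels[hi]
    t = (zoom - z0) / (z1 - z0)  # interpolation weight between the two levels
    return tuple(p0 + t * (p1 - p0)
                 for p0, p1 in zip(CALIBRATION[z0], CALIBRATION[z1]))
```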
In step 260, after the PTZ camera 110 has focused on the vehicle in the zoomed-in configuration (after step 250), the controller 120 identifies a number plate of the vehicle in the zoomed-in configuration. In the zoomed-in configuration the number plate on the vehicle becomes large enough to be read reliably by the automatic number plate recognition (ANPR) algorithm. The controller 120 applies the ANPR algorithm to the images captured by PTZ camera 110, and specifically to the portion of the image occupied by the bounding box, to read the number plate of the vehicle. An image of the number plate may be captured and stored, for example in memory 122, for later use.
If the determination of whether the vehicle has performed a pre-determined action includes the determination of whether the vehicle has stopped within a pre-determined area, the controller 120 may further verify that the number plate is stationary when applying the ANPR algorithm to the images captured by the PTZ camera 110. This minimises the chances of capturing a vehicle which is moving slowly rather than stopped.
After the number plate of the vehicle has been identified and an image of the number plate has been stored, the PTZ camera 110 may return from the zoomed-in configuration to the wide-angle configuration. A configurable time interval may be set for this to occur after the identification and capturing of the number plate. A further image may be captured in the wide-angle configuration including the vehicle and the surroundings and stored in memory 122. The method may then return to step 220 and continue to operate the PTZ camera 110 in the wide-angle configuration.
In the case where the determination of whether the vehicle has performed a pre-determined action includes the determination of whether the vehicle has stopped within a pre-determined area, the controller 120 may continue to track the vehicle in the zoomed in configuration as the vehicle starts to move and until it leaves the pre-determined area 10, before returning the PTZ camera 110 to the wide-angle configuration and returning to method step 220. The PTZ camera 110 may continue to record and store video footage of the vehicle throughout this time period, including any portion of the time period from when the vehicle first entered the camera's field of view, its transition into and through the predetermined area, its exit from the predetermined area, and beyond.
FIG. 3 illustrates a flow diagram of a method 240A for the method step 240 of method 200A or 200B in FIGS. 2A and 2B in more detail for the specific case where determining when the vehicle has performed a pre-determined action includes determining whether the vehicle has stopped within a predetermined area 10, such as the box junction B, for example.
In step 241, the controller 120 tracks movement of the vehicle identified in step 230 by tracking movement of the bounding box placed around the identified vehicle. This tracking of the movement of the bounding box occurs when the PTZ camera 110 is in the wide-angle configuration. Whilst tracking the movement of the bounding box, the controller 120 may also make a determination as to whether the vehicle is travelling towards the PTZ camera 110 or away from the PTZ camera 110.
In step 242, the controller 120 determines when the vehicle is stationary by determining that the bounding box is stationary.
In particular, the controller 120 may take the centre of the bounding box and determine that it is stationary based on a minimum time threshold t_min in seconds and a maximum distance threshold d_max in pixels. In other words, it is determined that the vehicle is stationary if the centre of the bounding box moves less than the maximum distance threshold d_max within the minimum time threshold t_min. Both the maximum distance threshold d_max and the minimum time threshold t_min may be configurable.
Given that the minimum time threshold t_min is in seconds, the controller 120 may determine the corresponding number of frames K using the following equation:

(3) K = ⌈t_min × framerate⌉

where ⌈·⌉ is the ceiling operator.
The controller 120 may then make a determination that the vehicle is stationary if the following condition is satisfied:

(4) max √((x_(N−1−k) − x_(N−1))² + (y_(N−1−k) − y_(N−1))²) < d_max for 0 ≤ k < K

where: x, y are the coordinates of the centre of the bounding box on the image in pixels, and N is the total number of observations of the bounding box.
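Equations (3) and (4) may be sketched together as follows. This is an illustrative sketch only; the function name and the assumption of exactly one centre observation per frame are introduced here for clarity:

```python
import math

def is_stationary(centres, framerate, t_min, d_max):
    """Decide whether a tracked bounding-box centre is stationary.

    `centres` is a list of (x, y) centre coordinates in pixels, one per
    frame, with the most recent observation last (index N-1).
    """
    # Equation (3): number of frames covering the minimum time threshold.
    k_frames = math.ceil(t_min * framerate)
    if len(centres) < k_frames:
        return False  # not enough observations yet
    xn, yn = centres[-1]
    # Equation (4): every centre within the last K frames must lie within
    # d_max pixels of the most recent centre.
    return max(math.hypot(x - xn, y - yn) for x, y in centres[-k_frames:]) < d_max
```

For example, at 25 frames per second with t_min = 1 s, the last K = 25 centres are checked against d_max.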
In step 243, once it has been determined that the vehicle is stationary, the controller 120 determines if the bounding box is at least partially positioned within the pre-determined area. This may be achieved, for example, by determining if there is an overlap between the pre-determined area 10 and the bounding box in the images captured by the PTZ camera 110. This determination may be made, for example, by determining if the bounding box and the pre-determined area 10 share any of the same pixels.
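The pixel-sharing check of step 243 may be sketched as follows for the simple case where both the bounding box and the pre-determined area are axis-aligned rectangles (an assumption of the sketch; the pre-determined area may in general be an arbitrary polygon):

```python
def rects_overlap(box, area):
    """Return True if an axis-aligned bounding box shares any pixels with
    an axis-aligned pre-determined area; both are (x1, y1, x2, y2)."""
    bx1, by1, bx2, by2 = box
    ax1, ay1, ax2, ay2 = area
    # Overlap exists iff the intervals overlap on both axes.
    return bx1 < ax2 and ax1 < bx2 and by1 < ay2 and ay1 < by2
```

This cheap test acts as a gate before the more expensive outline tracing of step 244.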
In step 244, the controller 120 defines an outline of the detected vehicle in the images captured by the PTZ camera 110. This may be done, for example, by using an algorithm to trace the outline of the vehicle. This process is done to give a more accurate estimate of the size and/or position of the vehicle relative to the pre-determined area in the field of view of the PTZ camera 110.
In step 245, the controller 120 then determines whether the vehicle is at least partially positioned within the predetermined area based on the overlap between the vehicle outline and the pre-determined area 10. The controller 120 may determine whether a threshold value of the outline of the vehicle overlaps with the pre-determined area 10 in order to determine whether the vehicle is at least partially positioned within the pre-determined area 10. The threshold value may be different depending on whether the vehicle is travelling towards the PTZ camera 110 or travelling away from the PTZ camera 110, as determined in step 241.
When the vehicle is travelling towards the PTZ camera 110, the threshold value may be set low, for example, to 0%-10%, such that a small overlap of just over 0% to 10% between the outline of the vehicle and the pre-determined area 10 results in a positive determination of the vehicle being positioned at least partially within the pre-determined area 10.
When the vehicle is travelling away from the PTZ camera 110, the threshold value may be set higher, for example, between 60%-70%. In that case, the controller 120 will only make a positive determination as to the vehicle being positioned at least partially within the pre-determined area 10 if more than 60%-70% of the outline of the vehicle overlaps with the pre-determined area 10.
This provides for a more accurate estimation of whether the vehicle is positioned and stopped within the pre-determined area 10.
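The direction-dependent threshold test of step 245 may be sketched as follows. The representation of the outline and the area as sets of pixel coordinates, and the concrete thresholds of 5% and 65% (picked from the 0%-10% and 60%-70% ranges above), are assumptions of the sketch:

```python
def overlaps_enough(outline_pixels, area_pixels, travelling_towards_camera):
    """Apply the direction-dependent overlap threshold.

    `outline_pixels` and `area_pixels` are sets of (x, y) pixel coordinates
    for the filled vehicle outline and the pre-determined area.
    """
    # Low threshold approaching the camera, high threshold moving away.
    threshold = 0.05 if travelling_towards_camera else 0.65
    if not outline_pixels:
        return False
    overlap = len(outline_pixels & area_pixels) / len(outline_pixels)
    return overlap > threshold
```

The asymmetry reflects the fact that an approaching vehicle's visible front overhangs the area boundary much earlier than a receding vehicle's rear does.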
The computational cost for assessing the outline of the vehicle in each image is significantly higher than for tracking the bounding box. By restricting the use of this process to the images captured when the vehicle is determined to have stopped and the bounding box is at least partially within the predetermined area, the computational cost for making the more accurate determination using the outline of the vehicle is minimised, reducing the overall computational bandwidth, memory and power requirements for the system.
If it is determined that the vehicle is at least partially positioned and stopped within the pre-determined area and that therefore the pre-determined action has been performed by the vehicle, the method moves on to step 250 as described above with reference to FIG. 2.
FIGS. 4A-C illustrate the steps of the method of FIG. 3 for determining whether the vehicle has stopped within a predetermined area. FIG. 4A illustrates method step 243 where it is determined if a bounding box 11 of a vehicle V1 is at least partially positioned within the pre-determined area 10, which may be a box junction B. The controller 120 determines that the bounding box 11 is positioned within the box junction 10 by determining that there is an overlap 12 between the bounding box 11 and the box junction 10.
FIG. 4B illustrates method step 244 where an outline 13 of the detected vehicle V1 is defined. The controller 120 may use an algorithm to trace the outline 13 of the vehicle V1 and thereby give a more accurate estimation as to the size of the vehicle V1.
FIG. 4C illustrates method step 245 where it is determined whether the vehicle V1 is at least partially positioned within the pre-determined area 10 based on the overlap O between the vehicle outline 13 and the pre-determined area 10. The controller 120 determines whether a threshold value of the vehicle outline 13 overlaps with the pre-determined area 10 in order to determine whether the vehicle is at least partially positioned within the pre-determined area 10.
In the case of FIG. 4C, the vehicle V1 is travelling away from the PTZ camera. The threshold value may therefore be set to a value between 60% and 70%. If the area of the overlap O is greater than 60%-70% of the area of the vehicle outline 13, then a positive determination is made that the vehicle V1 is at least partially positioned within the pre-determined area 10.
FIG. 5 illustrates a flow diagram of an alternative method 240B for the method step 240 of method 200A or 200B in FIGS. 2A and 2B in more detail for the specific case where determining when the vehicle has performed a pre-determined action includes determining whether the vehicle has stopped within a pre-determined area 10, such as the box junction B, for example.
Method steps 241 to 243 are identical to those in method 240A of FIG. 3 described above.
Method 240B differs from method 240A in that, after method step 243 when it has been determined that the bounding box is at least partially within the pre-determined area 10, the method moves on to method step 246.
In step 246, the controller 120 determines the position of one or more of the wheels of the vehicle. The wheels of the vehicle may be identified, and their position may be determined, using the vehicle detection algorithm stored on memory 122 or using a separate object detection algorithm which may also be stored on memory 122. The vehicle or object detection algorithm may also extrapolate the position of the wheels of the vehicle which are not visible based on the position of the wheels which are visible, for example, through the symmetry of the position of the wheels of the vehicle.
In step 247, the controller 120 then determines the lowest points of the detected wheels and determines whether these points are positioned within the pre-determined area 10. The lowest points of the detected wheels correspond to the part of the wheel which is in contact with the road. If that lowest point of any of the wheels is positioned within the pre-determined area, the controller 120 makes a positive determination that the vehicle is at least partly positioned within the pre-determined area 10.
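Steps 246 and 247 may be sketched as follows, assuming each wheel is represented by an ellipse (cx, cy, rx, ry) with image y-coordinates increasing downwards, and using a standard ray-casting point-in-polygon test (both are assumptions of the sketch):

```python
def lowest_point(ellipse):
    """Lowest point of a wheel ellipse (cx, cy, rx, ry); image y grows downwards,
    so the lowest point — where the wheel meets the road — is cy + ry."""
    cx, cy, rx, ry = ellipse
    return (cx, cy + ry)

def point_in_polygon(pt, polygon):
    """Classic ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def vehicle_in_area(wheel_ellipses, area_polygon):
    """Positive determination if the lowest point of any wheel lies in the area."""
    return any(point_in_polygon(lowest_point(w), area_polygon) for w in wheel_ellipses)
```

This mirrors FIG. 6C: a single wheel contact point inside the pre-determined area suffices for a positive determination.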
In some embodiments, the method steps 246 and 247 may be performed in addition to method steps 244 and 245 to obtain a more accurate determination as to whether the vehicle is at least partly positioned within the pre-determined area 10.
FIGS. 6A to 6C illustrate the steps of the method of FIG. 5 for determining whether the vehicle has stopped within a predetermined area. FIG. 6A illustrates method step 243 where it is determined if a bounding box 11 of a vehicle V1 is at least partially positioned within the pre-determined area 10, which may be a box junction B. The controller 120 determines that the bounding box 11 is positioned within the box junction 10 by determining that there is an overlap 12 between the bounding box 11 and the box junction 10.
FIG. 6B illustrates method step 246, where the positions of the wheels of the vehicle V1 are determined using a vehicle or object detection algorithm. The vehicle or object detection algorithm can detect and determine the position of the visible wheels, e.g. wheels W2, W3 and W4, and may place an ellipse onto the identified wheels to identify their position. Based on the position of the identified wheels W2, W3 and W4, the vehicle or object detection algorithm can also extrapolate the position of the non-visible wheels, e.g. wheel W1. An ellipse may also be placed onto this non-visible detected wheel W1 to identify its position.
FIG. 6C illustrates method step 247 where the controller 120 determines the lowest points P1, P2, P3, P4 of the detected wheels W1, W2, W3, W4. The controller 120 then determines whether any of these points P1, P2, P3, P4 are positioned within the pre-determined area 10. In FIG. 6C, the points P1 and P2 are positioned within the pre-determined area 10 such that the controller 120 will then make a positive determination that the vehicle V1 is at least partially positioned within the pre-determined area 10.
FIGS. 7A and 7B illustrate the method step 250 of method 200 above. FIG. 7A shows a first field of view F1 of the PTZ camera 110 in the wide-angle configuration. In this field of view, the vehicle V1 is visible and positioned within the pre-determined area 10. A bounding box 11 is positioned around the vehicle V1. The controller 120 calculates a centre of the image O having coordinates Ox, Oy and a centre C of the bounding box 11 having coordinates Cx, Cy. The error in pixels err_x, err_y between the centre of the bounding box Cx, Cy and the centre of the image Ox, Oy is then calculated using the equations (1) and (2) above. The adjusted pan value is then determined by converting err_x into degrees and subtracting it from the current pan value of the PTZ camera 110. Similarly, the adjusted tilt value is determined by converting err_y into degrees and subtracting it from the current tilt value of the PTZ camera 110.
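The pan/tilt adjustment may be sketched as follows. The sketch assumes an approximately linear pixel-to-angle conversion across the field of view and particular sign conventions, neither of which is specified above:

```python
def adjusted_pan_tilt(image_size, bbox, current_pan, current_tilt, fov_deg):
    """Compute adjusted pan/tilt values that centre the bounding box.

    `bbox` is (x1, y1, x2, y2) in pixels; `fov_deg` is the (horizontal,
    vertical) field of view of the current configuration in degrees.
    """
    w, h = image_size
    ox, oy = w / 2, h / 2                # centre of the image O
    cx = (bbox[0] + bbox[2]) / 2         # centre of the bounding box C
    cy = (bbox[1] + bbox[3]) / 2
    err_x, err_y = cx - ox, cy - oy      # equations (1) and (2)
    # Convert the pixel errors to degrees and subtract from the current values.
    pan = current_pan - err_x * fov_deg[0] / w
    tilt = current_tilt - err_y * fov_deg[1] / h
    return pan, tilt
```

With the bounding box already centred, the pan and tilt values are unchanged; an off-centre box shifts them proportionally.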
The adjusted zoom value is calculated by determining the coordinates of the new field of view F2 which corresponds to the smallest rectangle that fully contains the bounding box 11 of the vehicle V1 whilst maintaining the same aspect ratio as the image in the wide-angle configuration Fl. The zoom value may then be calculated from the ratio of the new field of view F2 (i.e. the zoomed in configuration) and the current field of view Fl (i.e. the wide-angle configuration) and the parameters of the calibration.
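The new field of view F2 and the resulting zoom ratio may be sketched as follows. The function name and return convention are assumptions of the sketch, and the calibration-dependent scaling of the raw ratio is omitted:

```python
def zoom_field_of_view(image_size, bbox):
    """Smallest rectangle fully containing the bounding box while keeping
    the aspect ratio of the wide-angle image.

    Returns the new field-of-view rectangle (x1, y1, x2, y2) and the raw
    zoom ratio (wide-angle width / new width).
    """
    w, h = image_size
    aspect = w / h
    bw = bbox[2] - bbox[0]
    bh = bbox[3] - bbox[1]
    # Grow the narrower dimension so the rectangle matches the image aspect.
    if bw / bh < aspect:
        new_w, new_h = bh * aspect, bh
    else:
        new_w, new_h = bw, bw / aspect
    cx = (bbox[0] + bbox[2]) / 2
    cy = (bbox[1] + bbox[3]) / 2
    rect = (cx - new_w / 2, cy - new_h / 2, cx + new_w / 2, cy + new_h / 2)
    return rect, w / new_w
```

Because the rectangle is the smallest one containing the bounding box, the ratio is the largest zoom value whose image still fully contains the vehicle.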
The adjusted pan, tilt and zoom values are then applied to the PTZ camera 110. This results in the PTZ camera moving from the wide-angle configuration, shown in FIG. 7A, to the zoomed-in configuration focused on the vehicle V1, as shown in FIG. 7B.
FIGS. 8A to 8D illustrate a method of determining the location of a number plate within the bounding box. This method may, for example, be applied in step 260 of method 200 if the ANPR algorithm was unable to find the number plate in the full image taken by the PTZ camera 110 in the zoomed-in configuration.
FIG. 8A shows an image of a vehicle V1 positioned within a pre-determined area 10 such as box junction B captured in a zoomed-in configuration of the camera 110. A bounding box 11 is placed around the vehicle V1. If the ANPR algorithm cannot find the location of the number plate in the full captured image, then a further algorithm can be applied to the image in order to determine the location of the number plate.
As shown in FIG. 8B, the image is rotated, and the vehicle detection algorithm is applied to several rotated versions of the image to place a bounding box 11a, 11b on the vehicle V1 identified in each rotated version of the image. FIG. 8B shows two rotated versions of the image; however, a larger number of rotated versions of the image may be used.
The bounding boxes 11a, 11b of the rotated images are rotated back to the original orientation and combined with the original bounding box 11, as shown in FIG. 8C.
As shown in FIG. 8D, a refined outline 15 of the vehicle V1 is calculated as the intersection of all the rotated bounding boxes 11a, 11b and the original bounding box 11. The ANPR algorithm is then applied to the refined outline 15 of the vehicle V1 to determine the location and content of the number plate 16. The position of the number plate 16 is then tested against the new refined outline 15 of the vehicle V1 to confirm that the number plate 16 is indeed positioned within the refined outline 15 of the vehicle V1 and therefore belongs to the vehicle V1. This may be achieved, for example, by applying a classic point-inside-polygon test to the corners of the number plate region.
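The final confirmation step may be sketched as follows, using a classic ray-casting point-inside-polygon test applied to the four corners of the number plate region (the function names are introduced here for illustration):

```python
def point_in_polygon(pt, poly):
    """Classic ray-casting point-inside-polygon test."""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def plate_belongs_to_vehicle(plate_corners, refined_outline):
    """The number plate belongs to the vehicle if all four corners of the
    plate region lie inside the refined outline polygon."""
    return all(point_in_polygon(c, refined_outline) for c in plate_corners)
```

A plate from an adjacent vehicle, partly visible inside the original bounding box 11, fails this test because at least one of its corners falls outside the refined outline 15.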
FIG. 9 shows a flow diagram for a computer-implemented method 300 for determining whether an object is positioned within a pre-determined area. The method 240 described with reference to FIG. 3 above can also find application outside of traffic monitoring and may be broadly applied to monitor an object in a field of view of a camera relative to a pre-determined area within the field of view.
In step 310, an object is detected in images captured by the camera. This may be done, for example, using an object detection algorithm which includes or accesses an Artificial Neural Network such as a Convolutional Neural Network. The object may be a vehicle in certain traffic monitoring applications as described above, but it may also be another object such as a person, an animal or an inanimate object.
In step 320, a bounding box is defined around the detected object.
In step 330, movement of the bounding box can then be tracked across the captured images.
In step 340, it is then determined when the bounding box is at least partially within a pre-determined area. For example, this may be determined by checking whether there is an overlap between the bounding box and the pre-determined area.
In step 350, an outline of the detected object in the captured images is defined when it is determined that the bounding box is at least partially within the pre-determined area. This may be done by tracing the outline of the object using an algorithm.
In step 360, it is then determined whether the outline of the object is at least partially within the pre-determined area. Again, this may be determined by checking whether there is an overlap between the outline of the object and the pre-determined area.
This method 300 therefore provides an accurate and efficient way to determine whether an object in images captured by a camera is positioned within a pre-determined area. The tracing of the outline of the object results in a more accurate determination of whether the object is positioned within the pre-determined area. However, the tracing of the outline of the object is only done once it is determined that the bounding box is at least partially positioned within the pre-determined area thereby resulting in a more efficient use of computing resources.
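The coarse-to-fine structure of method 300, in which the cheap bounding-box test gates the expensive outline tracing, may be sketched as follows (the set-of-pixels representation and the tracing callback are assumptions of the sketch):

```python
def object_in_area(bbox_pixels, area_pixels, trace_outline):
    """Steps 340-360: only trace the outline if the bounding box overlaps.

    `bbox_pixels` and `area_pixels` are sets of (x, y) pixels; `trace_outline`
    is a callable returning the filled object outline as a set of pixels.
    """
    if not (bbox_pixels & area_pixels):       # step 340: cheap gate, early exit
        return False
    outline_pixels = trace_outline()          # step 350: expensive tracing
    return bool(outline_pixels & area_pixels)  # step 360: accurate test
```

The early exit is the efficiency claim made above: the costly tracing is never invoked for objects whose bounding boxes do not touch the pre-determined area.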
Various modifications will be apparent to those skilled in the art.
The controller 120 may be disposed separate from the PTZ camera 110. For example, the controller 120 may be disposed in a separate package and positioned at a different physical location. The controller 120 may be connected to the PTZ camera with a wired connection or a wireless connection.
The method 200 is not limited to monitoring vehicular traffic in a box junction but may be applied to other vehicular traffic monitoring scenarios. For example, the method 200 may be used for special lane enforcement, red light enforcement, speed limit enforcement, or STOP sign enforcement. In the case of special lane enforcement, determining when the vehicle has performed the pre-determined action may comprise determining when the vehicle enters a special lane area. In the case of red light enforcement, determining when the vehicle has performed the pre-determined action may comprise determining when the vehicle has passed through a red traffic light. In the case of speed limit enforcement, determining when the vehicle has performed the pre-determined action may comprise determining when the vehicle has exceeded a certain speed limit. In the case of STOP sign enforcement, determining when the vehicle has performed the pre-determined action may comprise determining when the vehicle has not stopped and remained stationary at a STOP sign for a minimum threshold time before moving.
The predetermined area is not limited to a box junction but may correspond to any area into which the flow of traffic is to be monitored. For example, the pre-determined area may be a special lane, a no-stopping zone or a no-parking zone.
All of the above are fully within the scope of the present disclosure and are considered to form the basis for alternative embodiments in which one or more combinations of the above-described features are applied, without limitation to the specific combination disclosed above.
In light of this, there will be many alternatives which implement the teaching of the present disclosure. It is expected that one skilled in the art will be able to modify and adapt the above disclosure to suit its own circumstances and requirements within the scope of the present disclosure, while retaining some or all technical effects of the same, either disclosed or derivable from the above, in light of his common general knowledge in this art. All such equivalents, modifications or adaptations fall within the scope of the present disclosure.
Claims (27)
- Claims: 1. A system for monitoring vehicular traffic, the system comprising: a pan-tilt-zoom (PTZ) camera having a wide-angle configuration and a zoomed-in configuration; a controller for operating the PTZ camera, the controller configured to: monitor images from the PTZ camera in the wide-angle configuration and detect a vehicle in the images using a vehicle detection algorithm; determine when the vehicle has performed a predetermined action; and when it is determined that the vehicle has performed the pre-determined action, focus the PTZ camera on the vehicle in the zoomed-in configuration.
- 2. The system of claim 1, wherein the controller is configured to determine that the vehicle has performed a predetermined action when the vehicle has entered a predetermined area or when the vehicle has stopped within a predetermined area.
- 3. The system of claim 1 or 2, wherein the controller is further configured to track movement of the vehicle.
- 4. The system of claim 3, wherein tracking movement of the vehicle comprises defining a bounding box around the detected vehicle in the monitored images and tracking movement of the bounding box.
- 5. The system of claim 4, wherein determining when the vehicle has performed a pre-determined action comprises determining when the bounding box is at least partially within the pre-determined area.
- 6. The system of claim 4, wherein determining when the vehicle has performed a pre-determined action comprises determining when the bounding box is at least partially within the pre-determined area and is stationary.
- 7. The system of claim 5 or 6, wherein determining when the vehicle has performed a pre-determined action further comprises defining an outline of the detected vehicle in the captured image when it is determined that the bounding box is at least partially within the pre-determined area; and determining whether a threshold amount of the area of the outlined vehicle overlaps with the pre-determined area.
- 8. The system of claim 5 or 6, wherein determining when the vehicle has performed a pre-determined action further comprises determining a position of one or more of the wheels of the detected vehicle in the captured image when it is determined that the bounding box is at least partially within the pre-determined area; and determining whether a lowest point of any of the one or more wheels of the detected vehicle is positioned within the pre-determined area.
- 9. The system of any of claims 4 to 8, wherein the controller is further configured to identify the position of a number plate within the bounding box.
- 10. The system of any preceding claim, wherein the controller is further configured to identify a number plate of the vehicle with an automatic number plate recognition algorithm in the zoomed-in configuration.
- 11. The system of claim 10, wherein the controller is further configured to verify that the number plate is stationary.
- 12. The system of any preceding claim, wherein the camera is calibrated at a plurality of different zoom levels.
- 13. The system of any preceding claim, wherein the PTZ camera has a set of pan, tilt and zoom values for the wide-angle configuration, and wherein focusing the PTZ camera on the vehicle in the zoomed-in configuration comprises calculating an adjusted set of pan, tilt and zoom values for the PTZ camera and adjusting the camera to the adjusted set of pan, tilt and zoom values.
- 14. The system of claim 13, wherein calculating the adjusted pan and tilt values comprises determining an error between the centre of the image of the wide-angle configuration and a centre of a bounding box placed around the detected vehicle.
- 15. The system of claim 13 or 14, wherein calculating the adjusted zoom value comprises determining the smallest rectangle that fully contains a bounding box placed around the detected vehicle whilst maintaining the same aspect ratio of the image of the wide-angle configuration.
- 16. The system of any preceding claim, wherein the controller is further configured to determine a speed and/or acceleration of the detected vehicle.
- 17. The system of any preceding claim, wherein the vehicle detection algorithm includes or accesses an Artificial Neural Network, optionally a Convolutional Neural Network.
- 18. The system of any preceding claim, wherein the controller is configured to store a still image from the camera in the wide-angle configuration after detection of the vehicle using the vehicle detection algorithm.
- 19. The system of any preceding claim, wherein the controller is configured to store a still image from the camera in the zoomed-in configuration after it has been determined that the vehicle has performed the pre-determined action.
- 20. The system of any preceding claim, wherein the controller is configured to store video footage from the camera after detection of the vehicle using the vehicle detection algorithm.
- 21. The system of any preceding claim, wherein the controller is configured not to perform any number plate recognition when the camera is in the wide-angle configuration.
- 22. The system of any preceding claim, wherein the controller is configured to detect a plurality of vehicles using the vehicle detection algorithm when monitoring images from the PTZ camera in the wide-angle configuration.
- 23. A computer-implemented method for monitoring an object in a field of view of a camera relative to a predetermined area within the field of view, the method comprising: detecting an object in images captured by the camera; defining a bounding box around the detected object in the captured images; tracking movement of the bounding box across the captured images; determining when the bounding box is at least partially within the pre-determined area; defining an outline of the detected object in the captured image when it is determined that the bounding box is at least partially within the pre-determined area; determining whether the outline of the object is at least partially within the pre-determined area.
- 24. The computer-implemented method of claim 23, wherein the object is a vehicle.
- 25. The computer implemented method of claim 23 or 24, further comprising determining that the bounding box is stationary prior to defining an outline of the detected object in the captured image.
- 26. The computer-implemented method of any of claims 23 to 25, further comprising calculating a set of pan, tilt and zoom values for a pan-tilt zoom camera to move from a wide-angle configuration to a zoomed in configuration focused on the object after it has been determined that the outline of the object is at least partially within the pre-determined area.
- 27. The computer-implemented method of any of claims 23 to 26, further comprising controlling the camera to move from a wide-angle configuration to a zoomed in configuration focused on the object after it has been determined that the outline of the object is at least partially within the pre-determined area.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2309541.7A GB2631286A (en) | 2023-06-23 | 2023-06-23 | A system for monitoring vehicular traffic |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2309541.7A GB2631286A (en) | 2023-06-23 | 2023-06-23 | A system for monitoring vehicular traffic |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB202309541D0 GB202309541D0 (en) | 2023-08-09 |
| GB2631286A true GB2631286A (en) | 2025-01-01 |
Family
ID=87517733
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB2309541.7A Pending GB2631286A (en) | 2023-06-23 | 2023-06-23 | A system for monitoring vehicular traffic |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2631286A (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119445504B (en) * | 2024-10-30 | 2025-09-26 | 西安电子科技大学 | Detection method of trucks illegally entering passenger lanes on highways based on roadside cameras |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101939202B1 (en) * | 2018-02-27 | 2019-01-16 | 주식회사 하이앤텍 | Method for monitoring illegal stopping and parking vehicle using CCTV |
| KR102162130B1 (en) * | 2020-03-04 | 2020-10-06 | (주)드림테크 | Enforcement system of illegal parking using single camera |
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101939202B1 (en) * | 2018-02-27 | 2019-01-16 | 주식회사 하이앤텍 | Method for monitoring illegal stopping and parking vehicle using CCTV |
| KR102162130B1 (en) * | 2020-03-04 | 2020-10-06 | (주)드림테크 | Enforcement system of illegal parking using single camera |
Also Published As
| Publication number | Publication date |
|---|---|
| GB202309541D0 (en) | 2023-08-09 |