WO2018148889A1 - Video analysis system and video analysis method based on video surveillance - Google Patents
Video analysis system and video analysis method based on video surveillance
- Publication number
- WO2018148889A1 (PCT/CN2017/073639)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- analysis
- subsystem
- monitoring
- video surveillance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
Definitions
- the present application relates to the field of information technology, and in particular, to a video analysis system and a video analysis method based on video surveillance.
- Video surveillance systems have become more and more popular in China, and are widely used in public places such as transportation, banking, supermarkets, etc., and play an increasingly important role in the field of public security.
- Video surveillance systems typically acquire surveillance video images from surveillance cameras and send them to the user for browsing, playback, processing, and analysis.
- video surveillance-based analysis systems can either be embedded in cameras or be built on front-end embedded devices and servers.
- the former has a single function, and the processing speed and performance are greatly limited; the latter has strong processing power and greater flexibility.
- An embodiment of the present application provides a video analysis system and a video analysis method based on video surveillance, which can automatically analyze a video surveillance image acquired by the video analysis system and send the analysis result to a user, thereby improving the degree of automation and the efficiency of video surveillance and reducing labor costs.
- a video analysis system based on video surveillance, including:
- a video monitoring subsystem for capturing video to obtain a video surveillance image
- a video analysis subsystem in communication with the video surveillance subsystem for analyzing the video surveillance image obtained by the video surveillance subsystem
- An information distribution subsystem configured to communicate with the video analysis subsystem, for transmitting the video surveillance image obtained by the video monitoring subsystem and an analysis result of the video analysis subsystem
- the video analysis subsystem includes:
- a motion analysis unit for detecting the foreground of the video surveillance image
- a feature extraction unit for extracting features of the foreground
- a judgment condition setting unit for setting a judgment condition corresponding to the scene to which the video monitoring is applied
- a rule comparison unit that compares the features extracted by the feature extraction unit with the judgment condition set by the judgment condition setting unit to obtain the analysis result.
- a video analysis method based on video surveillance, including:
- wherein analyzing the video surveillance image to obtain the analysis result includes:
- performing a comparison according to the extracted features and the judgment condition to obtain the analysis result.
- the beneficial effects of the present application are: improving the automation degree and efficiency of video monitoring, and reducing labor costs.
- FIG. 1 is a schematic diagram of a video analysis system according to Embodiment 1 of the present application.
- FIG. 2 is a schematic diagram of a functional architecture of a video analysis subsystem according to Embodiment 1 of the present application;
- FIG. 3 is a schematic structural diagram of a video analysis subsystem according to Embodiment 1 of the present application.
- FIG. 4 is a schematic diagram of a video analysis method according to Embodiment 2 of the present application.
- FIG. 5 is a schematic diagram of analyzing the video surveillance image according to Embodiment 2 of the present application.
- Embodiment 1 of the present application provides a video analysis system based on video surveillance.
- the video analysis system 100 can include a video surveillance subsystem 101, a video analysis subsystem 102, and an information distribution subsystem 103.
- the video monitoring subsystem 101 is configured to perform video capture to obtain a video surveillance image; the video analysis subsystem 102 is in communication with the video surveillance subsystem 101 for analyzing video surveillance images obtained by the video surveillance subsystem 101.
- the information distribution subsystem 103 is in communication with the video analysis subsystem 102 for transmitting the video surveillance image obtained by the video surveillance subsystem 101 and the analysis results of the video analysis subsystem 102.
- the video monitoring subsystem 101 acquires a video surveillance image and transmits the video surveillance image to the video analysis subsystem 102 in the form of a video stream.
- the video analysis subsystem 102 analyzes the received video surveillance image, by which, for example, event information and data information included in the video surveillance image can be obtained, and the video analysis subsystem 102 can send the analysis result to the information distribution subsystem 103 in the form of an event and data flow (Event and Data Flow).
- the information distribution subsystem 103 can transmit the received analysis result to the user, whereby the user can obtain the analysis result based on the video surveillance image; in particular, a user on the front line of production and operation can act directly on the analysis result, and such front-line users can be, for example, a driver, a police officer or a forest ranger.
- the video analysis subsystem 102 can also forward the received video surveillance image to the information distribution subsystem 103 in the form of a video stream, to be distributed by the information distribution subsystem 103, whereby the user can obtain not only the analysis result of the video surveillance image but also the original video surveillance image itself.
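- The hand-off described above can be pictured as a small event message. The sketch below is an illustrative assumption only: the field names (`camera_id`, `event_type`, `timestamp`, `payload`) and the JSON serialization are not prescribed by the patent, and `send` stands in for whatever transport the information distribution subsystem 103 exposes.

```python
# Hypothetical sketch of the "event and data flow" hand-off described above.
# The field names and the JSON encoding are assumptions, not part of the patent.
import json
import time
from dataclasses import dataclass, field, asdict
from typing import Any, Dict


@dataclass
class AnalysisEvent:
    """One analysis result produced by the video analysis subsystem."""
    camera_id: str
    event_type: str                                        # e.g. "slow_traffic", "crowd_gathering"
    timestamp: float = field(default_factory=time.time)
    payload: Dict[str, Any] = field(default_factory=dict)  # statistics, object info, ...


def publish(event: AnalysisEvent, send) -> None:
    """Serialize the event and hand it to the information distribution subsystem.

    `send` is a placeholder for the distribution transport (message queue,
    HTTP endpoint, socket, ...).
    """
    send(json.dumps(asdict(event)).encode("utf-8"))


if __name__ == "__main__":
    publish(AnalysisEvent("cam-01", "crowd_gathering", payload={"density": 0.87}),
            send=lambda raw: print(raw.decode()))
```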
- the video analysis system of the embodiment can analyze the video surveillance image and transmit the analysis result, thereby improving the automation degree and efficiency of the video surveillance and reducing the labor cost.
- the video monitoring subsystem 101 may be an imaging device.
- for the imaging device, reference may be made to the prior art; it is not described in this embodiment.
- the analysis of the video surveillance image by the video analysis subsystem 102 may include: event detection, data statistics, and/or object searching.
- the event detection may be, for example, detection of a predetermined event, such as slow traffic flow and/or personnel gathering
- the data statistics may be, for example, statistics on data involved in the video surveillance image, such as the vehicle flow rate and/or the moving speed of a moving object
- the object retrieval may be, for example, retrieval of a predetermined object, such as a particular person, a vehicle of a particular shape, and/or a particular license plate number.
- the video analysis subsystem 102 may include a motion analysis unit 201, a feature extraction unit 202, a judgment condition setting unit 203, and a rule comparison unit 204.
- the motion analysis unit 201 may be configured to detect the foreground of the video surveillance image; the feature extraction unit 202 may be configured to extract features of the foreground; the judgment condition setting unit 203 may be configured to set a judgment condition, and the judgment condition may correspond to the scenario to which the video analysis system 100 is applied; the rule comparison unit 204 may perform a comparison according to the features extracted by the feature extraction unit 202 and the judgment condition set by the judgment condition setting unit 203 to obtain an analysis result.
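- A minimal structural sketch of how the four units could be composed is given below; the class and method names (`detect`, `extract`, `compare`) are assumptions, since the patent does not prescribe an API.

```python
# Illustrative composition of the four units described above (201-204).
# Any object exposing the named methods can be plugged in.
from typing import Any, Dict, List


class VideoAnalysisSubsystem:
    def __init__(self, motion, extractor, conditions, comparator):
        self.motion = motion            # foreground detection   (unit 201)
        self.extractor = extractor      # feature extraction     (unit 202)
        self.conditions = conditions    # scene judgment conditions (set by unit 203)
        self.comparator = comparator    # rule comparison        (unit 204)

    def analyze_frame(self, frame) -> List[Dict[str, Any]]:
        """Run one frame through foreground detection, feature extraction and rule comparison."""
        foreground = self.motion.detect(frame)
        features = self.extractor.extract(frame, foreground)
        return self.comparator.compare(features, self.conditions)
```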
- the motion analysis unit 201 may perform foreground detection on each frame image in the video surveillance image received by the video analysis subsystem 102 to detect the foreground of each frame image.
- for the foreground detection method, reference may be made to the prior art; it is not described again in this embodiment.
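- Since the embodiment defers to prior-art foreground detection, the sketch below illustrates one such prior-art approach, background subtraction with OpenCV's MOG2 subtractor; it assumes OpenCV (`cv2`) is installed and that `source` is a camera index or a video file path.

```python
# Minimal foreground-detection sketch using OpenCV background subtraction.
import cv2


def foreground_masks(source=0):
    """Yield (frame, foreground_mask) pairs for each frame of the video source."""
    capture = cv2.VideoCapture(source)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)     # raw foreground mask for this frame
        mask = cv2.medianBlur(mask, 5)     # suppress speckle noise in the mask
        yield frame, mask
    capture.release()
```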
- the feature extraction unit 202 may extract features of the foreground based on the foreground detected by the motion analysis unit 201, and the features of the foreground may include, for example, the position, motion speed, motion direction, motion trajectory, size, texture, color, and color gradient of the object in the foreground.
- the features extracted by the feature extraction unit 202 may be one or more of the above-listed features, or may be features other than those enumerated above.
- the feature extraction unit 202 may further combine the extracted features to form a feature combination, where the feature combination may be a high-dimensional feature combination whose dimensionality corresponds to the number of kinds of features extracted.
- for example, the dimensionality of the feature combination can be 12.
- the feature extraction unit 202 can select the features for composing the feature combination according to the scenario to which the video analysis system 100 is applied, for example according to the lighting conditions of each scene and the kind of analysis result required by each scenario, so that the video analysis system 100 of the present embodiment can be applied to different scenarios, thereby improving its scalability.
- for example, in one scenario the feature extraction unit 202 may select motion features, texture, and color gradient to form the feature combination; when the scene to which the video analysis system is applied is illegal parking monitoring, the feature extraction unit 202 can select position, motion trajectory, size, and texture to form the feature combination.
- the feature extraction unit 202 can select the features that make up the feature combination according to a configuration file, and the configuration file can correspond to the scene (a minimal sketch of such config-driven selection is given below); in addition, the embodiment may also select the features for composing the feature combination in other manners.
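- A minimal sketch of config-driven feature selection, assuming a JSON-like scene configuration with a `features` list; the file format and the feature names are illustrative assumptions, not part of the disclosure.

```python
# Hedged sketch of selecting the features that make up the feature combination
# from a per-scene configuration. The keys and feature names are assumptions.
# Example contents of a scene configuration file (e.g. scene.json):
EXAMPLE_CONFIG = {
    "scene": "illegal_parking",
    "features": ["position", "trajectory", "size", "texture"]
}


def select_features(all_features: dict, config: dict) -> dict:
    """Keep only the features listed in the scene configuration."""
    wanted = config["features"]
    return {name: all_features[name] for name in wanted if name in all_features}


if __name__ == "__main__":
    extracted = {"position": (120, 45), "speed": 3.2,
                 "trajectory": [(118, 44), (120, 45)],
                 "size": 820, "texture": [0.1, 0.4], "color": (32, 40, 180)}
    print(select_features(extracted, EXAMPLE_CONFIG))
```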
- the scenario applied by the video analysis system 100 may be, for example, traffic monitoring, security monitoring, forest monitoring, agricultural monitoring, or factory monitoring.
- the scenario applied by the video analysis system 100 of the present embodiment may not be limited thereto, and may be other scenarios than the enumerated scenarios described above.
- the judgment condition setting unit 203 can set the judgment condition corresponding to the scene; for example, when the scene to which the video analysis system is applied is traffic monitoring, the judgment conditions set by the judgment condition setting unit 203 may include the location and size of the region of interest (RoI), the direction of the lane lines, the lane function, the traffic light refresh cycle, and the duration of an event.
- in another scenario, the judgment conditions set by the judgment condition setting unit 203 may include the location and size of the region of interest (RoI), the density of objects, the speed of objects, and the frequency of objects.
- the judgment condition setting unit 203 can set the judgment condition according to the configuration file corresponding to the scene.
- alternatively, the judgment condition corresponding to the scene may be set in another manner.
- the rule comparison unit 204 can perform a comparison based on the features extracted by the feature extraction unit 202 and the judgment condition set by the judgment condition setting unit 203, and output the analysis result based on the comparison result; for example, when the comparison result is that the degree of aggregation of objects in the region of interest is above a predetermined threshold, the analysis result is that a people gathering event has occurred.
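- The people gathering example above can be sketched as a simple density check against the judgment conditions; the RoI representation, threshold value, and field names below are assumptions for illustration.

```python
# Illustrative rule comparison: objects aggregated inside a region of interest
# above a density threshold trigger a "people gathering" event.
from typing import Dict, List, Optional, Tuple


def inside(roi: Tuple[int, int, int, int], point: Tuple[int, int]) -> bool:
    """True if a point lies inside the (x, y, width, height) region of interest."""
    x, y, w, h = roi
    px, py = point
    return x <= px < x + w and y <= py < y + h


def detect_gathering(object_positions: List[Tuple[int, int]],
                     conditions: Dict) -> Optional[str]:
    """Return an event name if the density inside the RoI exceeds the threshold."""
    roi = conditions["roi"]                            # (x, y, width, height)
    count = sum(inside(roi, p) for p in object_positions)
    area = roi[2] * roi[3]
    density = count / area if area else 0.0
    if density >= conditions["density_threshold"]:
        return "people_gathering"
    return None


if __name__ == "__main__":
    cond = {"roi": (0, 0, 100, 100), "density_threshold": 0.0005}
    people = [(10, 10), (12, 11), (50, 60), (51, 61), (52, 62), (90, 90)]
    print(detect_gathering(people, cond))              # -> "people_gathering"
```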
- FIG. 3 is a schematic structural diagram of a video analysis subsystem 102 of an embodiment of the present application, which can be used to implement the functions of the video analysis subsystem shown in FIG. 2.
- the video analysis subsystem 102 may include a backend analysis device 303 for event detection, data statistics, and/or object retrieval.
- the backend analysis device 303 can be implemented, for example, by a server.
- the video analysis subsystem 102 may further include a front end processing unit 301 and/or a front end analysis device 302.
- the front end processing section 301 can be used for event detection and/or data statistics on the video surveillance image.
- the front end processing unit 301 can be embedded in the video monitoring subsystem 101; for example, it can be embedded in the imaging device.
- the front end analysis device 302 can be used for event detection and/or data statistics on the video surveillance image.
- the front end analysis device 302 can be disposed in an outdoor environment, and the front end analysis device 302 can be, for example, an industrial PC, a digital signal processor (DSP), and/or a dedicated embedded device.
- the backend analysis device 303 can have the strongest data processing capability, the processing capability of the front end analysis device 302 is second, and the processing power of the front end processing unit 301 is the weakest.
- accordingly, the front end processing unit 301 can perform relatively simple event detection and/or data statistics.
- the front end analysis device 302 can perform relatively complex event detection and/or data statistics, and the backend analysis device 303 can perform the most complex event detection and/or data statistics and can also perform object retrieval.
- the backend analysis device 303 can acquire the analysis results of the front end processing unit 301 and/or the front end analysis device 302, and can then perform its own analysis based on those results, thereby improving efficiency.
- the video analysis subsystem 102 may further include a memory 304 capable of storing the analysis results of the front end processing unit 301 and/or the front end analysis device 302, and the backend analysis device 303 can read those analysis results from the memory 304.
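- A minimal sketch of this hand-off, with an in-process queue standing in for the memory 304; the actual storage mechanism is not specified by the patent.

```python
# Front-end units write partial results; the backend drains them before its own,
# deeper analysis, so work is not repeated. The queue is a stand-in for memory 304.
import queue

partial_results: "queue.Queue[dict]" = queue.Queue()


def frontend_publish(result: dict) -> None:
    """Front end processing unit 301 / front end analysis device 302 stores a result."""
    partial_results.put(result)


def backend_consume() -> list:
    """Backend analysis device 303 reads all stored results from the shared memory."""
    drained = []
    while not partial_results.empty():
        drained.append(partial_results.get())
    return drained
```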
- since the backend analysis device 303 has the strongest data processing capability, it can also have at least one of the following functions:
- monitoring the working status of the video analysis system, where the working status includes, for example, the device connection status, the device initialization status, the device load, and/or the storage space and usage rate of the memory; further, when the working status is abnormal, the backend analysis device 303 can issue an alarm signal to notify the administrator.
- the backend analysis device 303 can initialize the analysis of the video analysis subsystem 102 by setting a configuration file, which can include setting parameters such as the location and size of the region of interest (RoI) for event detection, and/or formatting the user's configuration data.
- the back-end analysis device 303 can set the initialized content according to the application scenario of the video analysis system 100.
- the initialized content may include: setting the lane or road area to be observed, setting the thresholds required for event detection, and setting the format of the road congestion index (jam index format) according to different needs; an illustrative configuration is sketched below.
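- A hedged example of what such an initialization configuration might contain; the keys and values are assumptions for illustration, not the patent's format.

```python
# Hypothetical initialization content: observed road area, event-detection
# thresholds, and the congestion ("jam") index format.
INIT_CONFIG = {
    "observed_area": {"x": 100, "y": 200, "width": 640, "height": 240},  # lane/road RoI
    "event_thresholds": {
        "slow_traffic_speed_kmh": 15,   # below this average speed, flag slow traffic
        "min_event_duration_s": 30      # how long a condition must persist to count as an event
    },
    "jam_index_format": "level_0_to_4"  # how the road congestion index is reported
}


def validate_init_config(config: dict) -> None:
    """Fail early if a required initialization key is missing."""
    for key in ("observed_area", "event_thresholds", "jam_index_format"):
        if key not in config:
            raise KeyError(f"missing initialization key: {key}")


validate_init_config(INIT_CONFIG)
```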
- the backend analysis device 303 can set different security levels for the users of the video analytics system 100 and save and track the user's usage records.
- the backend analysis device 303 can save, update, and generate event lists, generate reports to be published, and/or set templates of reports to be published, and the like.
- the report to be published may be, for example, a combination of text and images.
- for example, the report to be published may be information about a road section in which a traffic accident has occurred; the information to be published may include text, and may further include a surveillance video image of the road section where the traffic accident occurred.
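- An illustrative sketch of assembling such a report from a template, combining text with a stored surveillance image; the template wording and field names are assumptions.

```python
# Hypothetical report assembly: template text plus a surveillance image path.
from dataclasses import dataclass


@dataclass
class Report:
    text: str
    image_path: str      # path to the stored surveillance image of the road section


TEMPLATE = "Traffic accident on {road} at {time}; expect delays."


def build_report(road: str, time: str, image_path: str) -> Report:
    """Fill the template and attach the image of the affected road section."""
    return Report(text=TEMPLATE.format(road=road, time=time), image_path=image_path)


if __name__ == "__main__":
    print(build_report("Ring Road East", "08:42", "/data/snapshots/accident_0842.jpg"))
```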
- the video analysis subsystem 102 can be provided independently of the video monitoring subsystem 101; thereby, the analysis function of the video analysis subsystem 102 can be configured according to the application scenario of the video analysis system 100, which improves the scalability of the video analysis subsystem 102.
- the information distribution subsystem 103 may send the video surveillance image obtained by the video monitoring subsystem 101 and the analysis result of the video analysis subsystem 102 to the user via a network, such as a local area network (LAN) or Wireless fidelity (Wi-Fi) network, etc.
- the user can receive the video surveillance image and the analysis result via the terminal device.
- the information distribution subsystem 103 can also be set independently of the video monitoring subsystem 101.
- a plurality of video analysis systems 100 may be configured in a hierarchical architecture, each level of video analysis system may have different sizes and permissions, and each level of video analysis system may have the same structure.
- the first layer video analysis system may have a size covering only one street
- the second layer video analysis system may have a size covering one administrative area
- the third layer video analysis system may have a size covering one city
- the second layer video analysis system can read the data of the first layer video analysis system and can control the first layer video analysis system
- the third layer video analysis system can read the data of the first layer and second layer video analysis systems and can control the first layer and second layer video analysis systems
- however, the first layer video analysis system cannot read the data of the second layer or third layer video analysis systems and cannot control them; a minimal sketch of this permission rule follows.
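- A minimal sketch of the permission rule, assuming each system carries a numeric layer (1 = street, 2 = administrative area, 3 = city) and that read/control is allowed only toward strictly lower layers; the class shape is an assumption.

```python
# Hierarchical read/control rule: higher-layer systems may access lower-layer ones.
from dataclasses import dataclass


@dataclass
class VideoAnalysisSystemNode:
    name: str
    layer: int        # 1 = street, 2 = administrative area, 3 = city

    def may_access(self, other: "VideoAnalysisSystemNode") -> bool:
        """Read/control is allowed only toward systems at a strictly lower layer."""
        return self.layer > other.layer


if __name__ == "__main__":
    street = VideoAnalysisSystemNode("street-7", 1)
    city = VideoAnalysisSystemNode("city-centre", 3)
    print(city.may_access(street))     # True  (city can read/control the street system)
    print(street.may_access(city))     # False (street cannot read city-level data)
```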
- the video analysis system of the embodiment can analyze the video surveillance image and transmit the analysis result, thereby improving the degree of automation and the efficiency of video surveillance and reducing labor costs; moreover, the video analysis system of the embodiment has relatively strong scalability.
- Embodiment 2 of the present application provides a video analysis method corresponding to the video analysis system of Embodiment 1.
- FIG. 4 is a schematic diagram of a video analysis method according to this embodiment. As shown in FIG. 4, the method includes:
- Step 401: perform video capture to obtain a video surveillance image.
- Step 402: analyze the video surveillance image to obtain an analysis result.
- Step 403: send the video surveillance image and the analysis result.
- FIG. 5 is a schematic diagram of analyzing the video surveillance image according to the embodiment. As shown in FIG. 5, the analysis includes:
- Step 501: detect the foreground of the video surveillance image.
- Step 502: extract features of the foreground.
- Step 503: set a judgment condition, where the judgment condition corresponds to the scenario to which the video analysis method is applied.
- Step 504: perform a comparison according to the extracted features and the judgment condition to obtain the analysis result.
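- A compact sketch tying the method steps together; the callables are placeholders for the units of Embodiment 1, and their names are assumptions.

```python
# Steps 401-403 of FIG. 4, with steps 501-504 of FIG. 5 realizing step 402.
def run(capture, detect_foreground, extract_features, judgment_conditions,
        compare, send):
    for frame in capture():                               # step 401: capture video
        foreground = detect_foreground(frame)             # step 501
        features = extract_features(frame, foreground)    # step 502
        # step 503: judgment_conditions were set beforehand according to the scene
        result = compare(features, judgment_conditions)   # step 504 (completes step 402)
        send(frame, result)                               # step 403: distribute image and result
```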
- the extracted features include: position, motion speed, motion direction, trajectory, size, texture, color, and color gradient of the object in the foreground.
- the extracted features are combined to form a feature combination; in addition, the feature types and quantities constituting the feature combination may also be set according to a scenario to which the video monitoring method is applied.
- in step 503, the judgment condition is set according to a configuration file corresponding to the scenario, where the scenario includes traffic monitoring, security monitoring, forest monitoring, agricultural monitoring, or factory monitoring.
- the video surveillance image can be analyzed and the analysis result is transmitted, thereby improving the automation degree and efficiency of the video surveillance and reducing the labor cost;
- the video analysis method has strong scalability.
- the embodiment of the present application further provides a computer readable program, wherein the program causes the video analysis system to perform the video analysis method described in Embodiment 2 when the program is executed in a video analysis system.
- the embodiment of the present application further provides a storage medium storing a computer readable program, wherein the computer readable program causes the video analysis system to perform the video analysis method described in Embodiment 2.
- the video analysis system described in connection with the embodiments of the present invention may be directly embodied as hardware, a software module executed by a processor, or a combination of both.
- one or more of the functional block diagrams shown in Figures 1, 2 and/or one or more combinations of functional block diagrams may correspond to various software modules of a computer program flow, or to individual hardware modules.
- These software modules may correspond to the respective steps shown in FIGS. 4 and 5, respectively.
- These hardware modules can be implemented, for example, by curing these software modules using a Field Programmable Gate Array (FPGA).
- the software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, removable disk, CD-ROM, or any other form of storage medium known in the art.
- a storage medium can be coupled to the processor to enable the processor to read information from, and write information to, the storage medium; or the storage medium can be an integral part of the processor.
- the processor and the storage medium can be located in an ASIC.
- the software module can be stored in the memory of the mobile terminal or in a memory card that can be inserted into the mobile terminal.
- the software module can be stored in the MEGA-SIM card or a large-capacity flash memory device.
- One or more of the functional block diagrams described with respect to Figures 1 and 2 and/or one or more combinations of functional block diagrams may be implemented as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device for performing the functions described herein.
- One or more of the functional blocks described with respect to Figures 1-3 and/or one or more combinations of functional blocks may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP, or any other such configuration.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
According to embodiments, the present invention relates to a video analysis system and a video analysis method based on video surveillance. The video analysis system comprises: a video surveillance subsystem used to capture video in order to obtain a video surveillance image; a video analysis subsystem for analyzing the video surveillance image obtained by the video surveillance subsystem; and an information distribution subsystem for sending the video surveillance image and the analysis result. According to the embodiments, a video surveillance image obtained by the video analysis system can be analyzed automatically and the analysis result is sent to a user, which improves the degree of automation and the efficiency of video surveillance and reduces labor costs.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201780052466.8A CN109643321A (zh) | 2017-02-15 | 2017-02-15 | 基于视频监控的视频分析系统和视频分析方法 |
| PCT/CN2017/073639 WO2018148889A1 (fr) | 2017-02-15 | 2017-02-15 | Système d'analyse de vidéo et procédé d'analyse de vidéo basés sur une vidéosurveillance |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2017/073639 WO2018148889A1 (fr) | 2017-02-15 | 2017-02-15 | Système d'analyse de vidéo et procédé d'analyse de vidéo basés sur une vidéosurveillance |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018148889A1 true WO2018148889A1 (fr) | 2018-08-23 |
Family
ID=63170094
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/073639 Ceased WO2018148889A1 (fr) | 2017-02-15 | 2017-02-15 | Système d'analyse de vidéo et procédé d'analyse de vidéo basés sur une vidéosurveillance |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN109643321A (fr) |
| WO (1) | WO2018148889A1 (fr) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110517178A (zh) * | 2019-08-29 | 2019-11-29 | 青岛海信网络科技股份有限公司 | 一种安防一体化的综合监控系统 |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105788142A (zh) * | 2016-05-11 | 2016-07-20 | 中国计量大学 | 一种基于视频图像处理的火灾检测系统及检测方法 |
| CN106241533A (zh) * | 2016-06-28 | 2016-12-21 | 西安特种设备检验检测院 | 基于机器视觉的电梯乘员综合安全智能监控方法 |
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7646401B2 (en) * | 2004-01-30 | 2010-01-12 | ObjectVideo, Inc | Video-based passback event detection |
| US20080309760A1 (en) * | 2007-03-26 | 2008-12-18 | Pelco, Inc. | Method and apparatus for controlling a video surveillance camera |
| CN101609589A (zh) * | 2008-06-17 | 2009-12-23 | 侯荣琴 | 多频图像火灾探测系统 |
| CN102542289B (zh) * | 2011-12-16 | 2014-06-04 | 重庆邮电大学 | 一种基于多高斯计数模型的人流量统计方法 |
| WO2013165048A1 (fr) * | 2012-04-30 | 2013-11-07 | 전자부품연구원 | Système de recherche d'image et serveur d'analyse d'image |
| CN103839308B (zh) * | 2012-11-26 | 2016-12-21 | 北京百卓网络技术有限公司 | 人数获取方法、装置及系统 |
| KR20160008267A (ko) * | 2014-07-14 | 2016-01-22 | 주식회사 윈스 | 네트워크 기반 영상감시체계에서의 사용자 행위 분석 시스템 |
| CN106373143A (zh) * | 2015-07-22 | 2017-02-01 | 中兴通讯股份有限公司 | 一种自适应跨摄像机多目标跟踪方法及系统 |
2017
- 2017-02-15 CN CN201780052466.8A patent/CN109643321A/zh active Pending
- 2017-02-15 WO PCT/CN2017/073639 patent/WO2018148889A1/fr not_active Ceased
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105788142A (zh) * | 2016-05-11 | 2016-07-20 | 中国计量大学 | 一种基于视频图像处理的火灾检测系统及检测方法 |
| CN106241533A (zh) * | 2016-06-28 | 2016-12-21 | 西安特种设备检验检测院 | 基于机器视觉的电梯乘员综合安全智能监控方法 |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110517178A (zh) * | 2019-08-29 | 2019-11-29 | 青岛海信网络科技股份有限公司 | 一种安防一体化的综合监控系统 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN109643321A (zh) | 2019-04-16 |
Similar Documents
| Publication | Title |
|---|---|
| JP7036863B2 (ja) | ビデオデータを用いた活動モニタリングのためのシステム及び方法 | |
| CN104200671B (zh) | 一种基于大数据平台的虚拟卡口管理方法及系统 | |
| CN105163067B (zh) | 一种基于数字图像处理技术的高空抛物取证系统 | |
| WO2018223955A1 (fr) | Procédé de surveillance de cible, dispositif de surveillance de cible, caméra et support lisible par ordinateur | |
| Zabłocki et al. | Intelligent video surveillance systems for public spaces–a survey | |
| WO2020094088A1 (fr) | Procédé de capture d'image, caméra de surveillance et système de surveillance | |
| US11151192B1 (en) | Preserving locally stored video data in response to metadata-based search requests on a cloud-based database | |
| KR102297217B1 (ko) | 영상들 간에 객체와 객체 위치의 동일성을 식별하기 위한 방법 및 장치 | |
| CN110312057A (zh) | 智能视频处理装置 | |
| KR20110132884A (ko) | 다중 동영상 색인 및 검색이 가능한 지능형 영상 정보 검색 장치 및 방법 | |
| CN113592785A (zh) | 一种目标流量统计方法及装置 | |
| US10373015B2 (en) | System and method of detecting moving objects | |
| CN113378616A (zh) | 视频分析方法、视频分析的管理方法及相关设备 | |
| CN110177255A (zh) | 一种基于案件调度的视频信息发布方法及系统 | |
| CN104134067A (zh) | 基于智能视觉物联网的道路车辆监控系统 | |
| CN113112813A (zh) | 违章停车检测方法及装置 | |
| CN111125382A (zh) | 人员轨迹实时监测方法及终端设备 | |
| WO2021114985A1 (fr) | Procédé et appareil d'identification d'objets compagnons, serveur et système | |
| CN109740003A (zh) | 一种归档方法及装置 | |
| CN108288025A (zh) | 一种车载视频监控方法、装置及设备 | |
| CN106682590B (zh) | 一种监控业务的处理方法以及服务器 | |
| KR102421043B1 (ko) | 영상처리장치 및 그 장치의 구동방법 | |
| JP2019154027A (ja) | ビデオ監視システムのパラメータ設定方法、装置及びビデオ監視システム | |
| CN107566785B (zh) | 一种面向大数据的视频监控系统及方法 | |
| KR102192039B1 (ko) | 촬영장치를 이용한 교통법규 위반 신고처리 방법 및 시스템 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17896752; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17896752; Country of ref document: EP; Kind code of ref document: A1 |