WO2021017283A1 - Offline-based online tracking method and apparatus, computer device and storage medium
- Publication number
- WO2021017283A1 (PCT/CN2019/117538)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- offline
- tracking
- tracked
- target
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
- G06T7/41—Analysis of texture based on statistical description of texture
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Definitions
- This application relates to the field of computer technology, and in particular to an offline-based online tracking method, device, computer equipment and storage medium.
- Target tracking, also known as visual target tracking, is an important research direction in the field of machine vision.
- Target tracking detects targets of interest in an image sequence for extraction, recognition and tracking, so as to obtain the motion-state parameters of the target to be tracked (such as position, velocity, acceleration and motion trajectory). These parameters can be further processed and analyzed to understand the behavior of moving targets and to provide reference data for other technical fields (such as visual navigation, pose estimation and motion analysis).
- Target tracking has a wide range of applications in fields such as intelligent monitoring, human-computer interaction and robot navigation. In these applications, target tracking is the basis for robots to perceive and react to the external environment, and is key to understanding images.
- the offline type is also called the batch type.
- The central idea of the offline method is to link the detection results of objects in each frame into short tracking fragments (tracklets), and then merge the fragments using more reliable, more representative features.
- Offline methods mainly include the minimum-cost network flow algorithm, the energy minimization method and the minimum complete graph algorithm, among others. Precisely because the offline method uses more information from preceding and subsequent frames, it can go back and revise earlier results and finally obtain better accuracy, but it is also limited by this.
- Offline methods can only process offline videos, not real-time video streams.
- The online method starts from matching the target in the current frame with the next frame; that is, when the current frame appears, a result must be given immediately. It can process both real-time video streams and offline videos.
- The online method performs well enough in this respect to have a place in practical applications.
- The more traditional online methods mostly apply Kalman filtering, particle filtering or Markov decision processes, but the accuracy of online methods is often lower than that of offline methods.
- This application provides an offline-based online tracking method, device, computer equipment, and non-volatile readable storage medium, the main purpose of which is to improve the accuracy of processing real-time online video streams.
- an offline-based online tracking method which includes:
- This application also provides an offline-based online tracking device. The device includes: a video acquisition module for collecting online video in real time;
- a frame image acquisition module for acquiring each frame of image containing the target to be tracked from the online video;
- a preprocessing module configured to perform denoising and normalization on each frame of image based on the video capture device parameters and the current scene environment parameters, to generate the initial result of the target to be tracked;
- an offline tracking module for recalculating, every preset number of frames, the similarity metric value between every two targets to be tracked by applying the pre-trained offline tracking model to the generated initial results, and for correcting the initial result through the offline tracking model when the recalculated current similarity metric value is smaller than the previous similarity metric value;
- a result generation module for generating the final result of the target to be tracked.
- The present application also provides a computer device that includes a processor and a memory. The memory stores computer-readable instructions, and the processor executes the computer-readable instructions to implement the above-mentioned offline-based online tracking method.
- The present application also provides a non-volatile computer-readable storage medium having computer-readable instructions stored thereon; the computer-readable instructions are executed by one or more processors to implement the steps of the offline-based online tracking method described above.
- The offline-based online tracking method, device, computer equipment and non-volatile computer-readable storage medium proposed in this application first collect online video in real time and obtain each frame of image containing the target to be tracked from the online video. Based on the video capture device parameters and the current scene environment parameters, denoising and normalization are performed on each frame of image to generate the initial result of the target to be tracked.
- Every preset number of frames, the pre-trained offline tracking model recalculates the similarity measurement value between every two targets to be tracked on the initial result; when the recalculated current similarity measurement value is smaller than the previous similarity measurement value, the initial result is corrected by the offline tracking model to produce the final result of the target to be tracked.
- This case combines the offline tracking method and the online tracking method.
- On the one hand, the offline tracking method is used to go back and revise previous results, thereby improving the accuracy of processing real-time online video streams; on the other hand, the online tracking method is used to process the real-time online video stream itself.
- FIG. 1 is a schematic flowchart of an offline-based online tracking method provided by an embodiment of the application
- FIG. 2 is a schematic flowchart of step C in FIG. 1;
- FIG. 3 is a schematic flowchart of step D in FIG. 1;
- FIG. 4 is an internal structure diagram of an offline-based online tracking device provided by an embodiment of the application.
- Fig. 5 is a schematic diagram of modules of an offline-based online tracking program provided by an embodiment of the application.
- This application provides an offline-based online tracking method.
- FIG. 1 is a schematic flowchart of an offline-based online tracking method provided by an embodiment of this application.
- the method can be executed by a device, and the device can be implemented by software and/or hardware.
- the offline-based online tracking method includes:
- Step A: Collect online video in real time.
- Step B: Obtain each frame of image containing the target to be tracked from the online video.
- Step C: Preprocess each acquired frame of image to generate the initial result of the target to be tracked.
- Step D: Correct the initial result through the pre-trained offline tracking model.
- Step E: Generate the final result of the target to be tracked.
- the source of the online video in step A is a video image collected by a video collection device.
- The preprocessing in step C may include, but is not limited to: combining preset video capture device parameters and environmental parameters of the current scene, and eliminating the influence of lighting, environmental hue, noise, etc. in different scenes through processing methods such as denoising and normalization.
- step C includes:
- Step C1: Preset the video capture device parameters.
- Step C2: Collect the environmental parameters of the current scene; the environmental parameters may include, but are not limited to, lighting, hue and noise.
- Step C3: Perform denoising and normalization on each frame of image based on the video capture device parameters and the current scene environment parameters.
- Step C4: Generate the initial result of the target to be tracked.
- step C further includes:
- Step C5: Set a tracking area for each frame of image that has undergone denoising and normalization.
- the tracking area may be a polygon of any shape, and the tracking area is a detection area including a target to be tracked.
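The denoising and normalization of steps C3 and C4 are left open by the application; as a minimal sketch, assuming a simple mean-filter denoiser and min-max normalization (both illustrative choices, not methods specified by the application), they could look like:

```python
import numpy as np

def preprocess_frame(frame: np.ndarray, k: int = 3) -> np.ndarray:
    """Denoise with a k x k mean filter, then min-max normalize to [0, 1].

    The filter size and the [0, 1] range are illustrative assumptions; the
    application leaves the concrete denoising/normalization methods open.
    """
    f = frame.astype(np.float64)
    # Mean-filter denoising via a padded sliding window.
    pad = k // 2
    padded = np.pad(f, pad, mode="edge")
    h, w = f.shape
    denoised = np.zeros_like(f)
    for dy in range(k):
        for dx in range(k):
            denoised += padded[dy:dy + h, dx:dx + w]
    denoised /= k * k
    # Min-max normalization suppresses global lighting differences.
    lo, hi = denoised.min(), denoised.max()
    return (denoised - lo) / (hi - lo) if hi > lo else np.zeros_like(f)
```

A camera-specific denoiser or a learned normalization could be substituted without changing the surrounding pipeline.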
- The target to be tracked in step C may be a person, an animal, a plant, an object, etc.
- The person may be, but is not limited to, a pedestrian, a person in a working state, a person in a car, a person driving a car, or a person on a preset means of transportation.
- The animal may be, but is not limited to, a cat, dog, pig, bird, fish or other animal.
- The plant may be, but is not limited to, a flower, grass, tree or other plant.
- The object may be, but is not limited to, a computer, a code-scanning device, a balloon, or another object with a definite shape.
- the target to be tracked is described by taking a pedestrian as an example.
- the online tracking method based on the offline tracking algorithm further includes: pre-training an offline tracking model.
- the pre-trained offline tracking model includes an offline tracking algorithm.
- the principle of the offline tracking algorithm is:
- Each object to be tracked in each frame of the video is regarded as a node; the similarity measure between every two objects to be tracked is then obtained by fusing a pedestrian re-identification (ReID) model and a motion model. The smaller the similarity measure value, the more similar the two tracked objects are.
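The application does not give the fusion formula; a minimal sketch, assuming a cosine-distance appearance term over the ReID features, an IoU-based motion term, and equal weights (all hypothetical choices), keeping the convention that a smaller value means more similar:

```python
import numpy as np

def _box_iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes, used as the motion-overlap term."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def similarity_cost(feat_a, feat_b, box_a, box_b, w_app=0.5, w_motion=0.5):
    """Fuse an appearance (ReID) distance and a motion distance into one cost;
    a smaller cost means the two detections are more likely the same object."""
    a, b = np.asarray(feat_a, float), np.asarray(feat_b, float)
    app_dist = 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    motion_dist = 1.0 - _box_iou(box_a, box_b)
    return w_app * app_dist + w_motion * motion_dist
```

In practice the weights would be tuned, and the motion model could instead use a Kalman-predicted position.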
- The intersection-over-union is the overlap ratio between the candidate box (candidate bound) and the original marked box (ground truth bound), i.e., the ratio of their intersection to their union. The ideal case is complete overlap, where the ratio is 1.
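The intersection-over-union just defined can be computed directly from box coordinates; a small sketch assuming (x1, y1, x2, y2) boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2);
    returns 1.0 for complete overlap and 0.0 for no overlap."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```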
- Tracking object B and to-be-tracked object C are also the same object.
- The tracking problem is thereby transformed into a binary programming problem.
- Gurobi is used to solve the binary programming problem.
- Gurobi is a large-scale mathematical programming optimizer. Binarization is also applied in the inter-frame difference method: because the target in the scene is moving, the position of the target image differs between image frames.
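The application states only that tracking is cast as a binary program solved with Gurobi; the following toy formulation, with brute-force enumeration standing in for the solver (feasible for tiny instances only), illustrates the structure of such a program:

```python
from itertools import product

def match_binary_program(cost):
    """Toy binary program for frame-to-frame matching: choose x[i][j] in {0, 1}
    to minimize sum(cost[i][j] * x[i][j]) subject to each detection being
    matched at most once, while matching as many pairs as possible.

    Brute force is for illustration; a real system would hand this model,
    unchanged, to an optimizer such as Gurobi.
    """
    n, m = len(cost), len(cost[0])
    best, best_assign = float("inf"), None
    # Enumerate all 0/1 assignment matrices.
    for bits in product([0, 1], repeat=n * m):
        x = [bits[i * m:(i + 1) * m] for i in range(n)]
        if any(sum(row) > 1 for row in x):                      # one match per row
            continue
        if any(sum(x[i][j] for i in range(n)) > 1 for j in range(m)):  # per column
            continue
        if sum(map(sum, x)) < min(n, m):                        # full matching
            continue
        total = sum(cost[i][j] * x[i][j] for i in range(n) for j in range(m))
        if total < best:
            best, best_assign = total, x
    return best, best_assign
```

The cost matrix here would be filled with the fused similarity metric values between detections in adjacent frames.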
- The inter-frame difference method performs a difference operation on two or three temporally consecutive frames, subtracting the corresponding pixels of the different frames and taking the absolute value of the gray-level difference.
- Where the absolute value exceeds a certain threshold, the pixel can be judged to belong to a moving target, thereby achieving target detection.
- The specific principle is: denote the images of the n-th frame and the (n-1)-th frame as fn and fn-1 respectively, and the gray values of the corresponding pixels in the two frames as fn(x,y) and fn-1(x,y). Subtract the gray values of the corresponding pixels of the two frames and take the absolute value to obtain the difference image Dn: Dn(x,y) = |fn(x,y) - fn-1(x,y)|.
- A threshold T is then set, and the pixels are binarized one by one according to the rule Rn'(x,y) = 255 if Dn(x,y) > T, and Rn'(x,y) = 0 otherwise, to obtain the binarized image Rn'.
- Points with a gray value of 255 are foreground points (the target to be tracked), and points with a gray value of 0 are background points.
- Connectivity analysis is then performed on the image Rn', finally yielding an image Rn containing the complete moving target.
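The difference-and-threshold procedure above can be sketched as follows; the concrete threshold value of 25 is an illustrative assumption, since the text only requires "a certain threshold":

```python
import numpy as np

def frame_difference(f_prev: np.ndarray, f_curr: np.ndarray, T: int = 25) -> np.ndarray:
    """Inter-frame difference: Dn(x, y) = |fn(x, y) - fn-1(x, y)|, then
    binarize with threshold T so moving-target pixels become 255 (foreground)
    and all others 0 (background)."""
    # Widen to int16 so the subtraction of uint8 frames cannot wrap around.
    d = np.abs(f_curr.astype(np.int16) - f_prev.astype(np.int16))
    return np.where(d > T, 255, 0).astype(np.uint8)
```

A connectivity analysis (e.g. connected-component labeling) on the returned mask would then recover the complete moving target, as the text describes.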
- the step D includes:
- Step D1: Input the initial result into the pre-trained offline tracking model.
- Step D2: Determine whether the initial result needs to be corrected; if so, execute step D3; if not, execute step D4.
- Step D3: Correct the initial result through the pre-trained offline tracking model.
- Step D4: Leave the initial result unmodified.
- the final result of the target to be tracked is generated.
- The principle of step D is: every predetermined number of frames (not every frame, in consideration of computation load and real-time performance; for example, every four frames), the generated initial result is recalculated using the offline tracking algorithm of the offline tracking model, and tracking then continues from the current result to obtain the final result.
- More specifically, when the current similarity metric value recalculated by the offline tracking algorithm is smaller than the previous similarity metric value, it is determined that the initial result needs to be corrected.
- the offline tracking algorithm is an offline tracking algorithm in the prior art, and the embodiment of the present application does not specifically limit the offline tracking algorithm.
- the final result of the target to be tracked generated in the step E includes the position and number of the pedestrian.
- The offline-based online tracking method provided by this application first collects online video in real time, then obtains each frame of the online video containing the target to be tracked, and preprocesses each obtained frame to generate the initial result of the target to be tracked.
- The initial result is then corrected by the pre-trained offline tracking model to produce the final result.
- This case combines the offline tracking method and the online tracking method.
- On the one hand, the offline tracking method is used to go back and revise previous results, thereby improving the accuracy of processing real-time online video streams; on the other hand, the online tracking method is used to process the real-time online video stream itself.
- This application also provides an offline-based online tracking device.
- FIG. 4 it is an internal structure diagram of an offline-based online tracking device provided by an embodiment of this application.
- The offline-based online tracking device may be a PC (personal computer), or a terminal device such as a smartphone, tablet computer or portable computer.
- the offline-based online tracking device includes at least a memory 11, a processor 12, a network interface 13, and a communication bus 14.
- The memory 11 includes at least one type of readable storage medium, which includes flash memory, hard disks, multimedia cards, card-type memory (for example, SD or DX memory), magnetic memory, magnetic disks, optical disks, etc.
- In some embodiments, the memory 11 may be an internal storage unit of the offline-based online tracking device, such as a hard disk of the device.
- The memory 11 may also be an external storage device of the offline-based online tracking device, for example, a plug-in hard disk, smart media card (SMC), secure digital (SD) card or flash card equipped on the device.
- The memory 11 may also include both an internal storage unit and an external storage device of the offline-based online tracking device.
- The memory 11 can be used not only to store application software installed in the offline-based online tracking device and various data, such as the code of the offline-based online tracking program, but also to temporarily store data that has been output or will be output.
- The processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor or other data processing chip, and is used to run the program code stored in the memory 11 or to process data, for example to execute the offline-based online tracking program.
- The network interface 13 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface), and is usually used to establish a communication connection between the offline-based online tracking device and other electronic devices.
- the communication bus 14 is used to realize the connection and communication between these components.
- FIG. 4 only shows an offline-based online tracking device with components 11 to 14 and an offline-based online tracking program. Those skilled in the art will understand that the structure shown in FIG. 4 does not limit the offline-based online tracking device, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
- The memory 11 stores an offline-based online tracking program, and when the processor 12 executes the offline-based online tracking program stored in the memory 11, the following steps are implemented:
- Step A: Collect online video in real time.
- Step B: Obtain each frame of image containing the target to be tracked from the online video.
- Step C: Preprocess each acquired frame of image to generate the initial result of the target to be tracked.
- Step D: Correct the initial result through the pre-trained offline tracking model.
- Step E: Generate the final result of the target to be tracked.
- The offline-based online tracking program can be divided into one or more functional modules according to its different functions.
- The one or more modules are stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to complete the present application.
- The modules referred to in the present application are series of computer-readable instruction segments capable of performing specific functions, used to describe the execution process of the offline-based online tracking program in the offline-based online tracking device.
- FIG. 5 is a schematic diagram of the program modules of the offline-based online tracking program in an embodiment of the offline-based online tracking device of this application.
- The offline-based online tracking program can be divided into a video acquisition module 31, a frame image acquisition module 32, a preprocessing module 33, an offline tracking module 34 and a result generation module 35.
- The video acquisition module 31 is used to collect online video in real time;
- the frame image acquisition module 32 is used to acquire each frame of image containing the target to be tracked from the online video;
- the preprocessing module 33 is used to preprocess each frame of the acquired image to generate the initial result of the target to be tracked;
- the offline tracking module 34 is used to correct the initial result through the pre-trained offline tracking model;
- the result generation module 35 is used to generate the final result of the target to be tracked.
- FIG. 5 only shows an offline-based online tracking device with modules 31 to 35 and an offline-based online tracking program. Those skilled in the art will understand that this does not limit the offline-based online tracking device, which may include fewer or more modules than shown, a combination of some modules, or a different module arrangement.
- the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above integration can be implemented in the form of hardware, or in the form of hardware plus software functional modules.
- The embodiment of the present application also proposes a non-volatile computer-readable storage medium. The non-volatile computer-readable storage medium stores computer-readable instructions, and the computer-readable instructions are executed by one or more processors to implement the following operations:
- Step A: Collect online video in real time.
- Step B: Obtain each frame of image containing the target to be tracked from the online video.
- Step C: Preprocess each acquired frame of image to generate the initial result of the target to be tracked.
- Step D: Correct the initial result through the pre-trained offline tracking model.
- Step E: Generate the final result of the target to be tracked.
- The specific implementation of the non-volatile computer-readable storage medium of the present application is basically the same as that of the above-mentioned offline-based online tracking device and method, and will not be repeated here.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
Offline-based online tracking method and apparatus, computer device and storage medium. The method comprises the steps of: acquiring an online video in real time; acquiring, from the online video, each frame containing a target to be tracked; preprocessing each acquired frame to generate an initial result for the target; and correcting the initial result by means of a pre-trained offline tracking model to generate a final result. The method combines an offline tracking method and an online tracking method: the offline tracking method is used to go back and revise previous results, which improves the accuracy of processing a real-time online video stream, while the online tracking method makes it possible to process the real-time online video stream itself.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910695584.1 | 2019-07-30 | ||
| CN201910695584.1A CN110532883B (zh) | 2019-07-30 | 2019-07-30 | 应用离线跟踪算法对在线跟踪算法进行改进 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2021017283A1 true WO2021017283A1 (fr) | 2021-02-04 |
Family
ID=68661109
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2019/117538 Ceased WO2021017283A1 (fr) | 2019-07-30 | 2019-11-12 | Procédé et appareil de suivi en ligne basé sur un procédé hors ligne, dispositif informatique et support d'enregistrement |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN110532883B (fr) |
| WO (1) | WO2021017283A1 (fr) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112861971A (zh) * | 2021-02-07 | 2021-05-28 | 启迪云控(上海)汽车科技有限公司 | 一种跨点位的路侧感知目标跟踪方法及系统 |
| CN113744299A (zh) * | 2021-09-02 | 2021-12-03 | 上海安维尔信息科技股份有限公司 | 一种相机控制方法、装置、电子设备及存储介质 |
| CN114092516A (zh) * | 2021-11-08 | 2022-02-25 | 国汽智控(北京)科技有限公司 | 一种多目标跟踪检测方法、装置、设备及介质 |
| CN114495612A (zh) * | 2021-12-15 | 2022-05-13 | 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) | 一种面向红外跟踪警戒设备的在线模拟训练装置 |
| CN116958651A (zh) * | 2023-06-21 | 2023-10-27 | 国网山东省电力公司淄博供电公司 | 一种电网通信光缆实时预警方法、装置、设备及介质 |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111814725A (zh) * | 2020-07-20 | 2020-10-23 | 北京华正明天信息技术股份有限公司 | 一种基于cnn+lstm+mlp组合神经网络判断监控视频着火的预警方法 |
| CN112131966A (zh) * | 2020-09-01 | 2020-12-25 | 深圳中兴网信科技有限公司 | 泥头车监控方法、系统和存储介质 |
| CN112612768B (zh) * | 2020-12-11 | 2022-09-16 | 上海哔哩哔哩科技有限公司 | 模型训练方法和装置 |
| CN113658210A (zh) * | 2021-09-02 | 2021-11-16 | 西安中科西光航天科技有限公司 | 基于Jetson NX平台的前端实时目标跟踪方法 |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101325691A (zh) * | 2007-06-14 | 2008-12-17 | 清华大学 | 融合不同生存期的多个观测模型的跟踪方法和跟踪装置 |
| CN103218816A (zh) * | 2013-04-18 | 2013-07-24 | 中山大学 | 一种基于视频分析的人群密度估计方法与人流量统计方法 |
| CN104134078A (zh) * | 2014-07-22 | 2014-11-05 | 华中科技大学 | 一种人流量统计系统中分类器的自动选择方法 |
| US20150371102A1 (en) * | 2014-06-18 | 2015-12-24 | Delta Electronics, Inc. | Method for recognizing and locating object |
| CN109800624A (zh) * | 2018-11-27 | 2019-05-24 | 上海眼控科技股份有限公司 | 一种基于行人重识别的多目标跟踪方法 |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107679455A (zh) * | 2017-08-29 | 2018-02-09 | 平安科技(深圳)有限公司 | 目标跟踪装置、方法及计算机可读存储介质 |
| CN109447121B (zh) * | 2018-09-27 | 2020-11-06 | 清华大学 | 一种视觉传感器网络多目标跟踪方法、装置及系统 |
| CN109636829B (zh) * | 2018-11-24 | 2021-01-01 | 华中科技大学 | 一种基于语义信息和场景信息的多目标跟踪方法 |
-
2019
- 2019-07-30 CN CN201910695584.1A patent/CN110532883B/zh active Active
- 2019-11-12 WO PCT/CN2019/117538 patent/WO2021017283A1/fr not_active Ceased
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101325691A (zh) * | 2007-06-14 | 2008-12-17 | 清华大学 | 融合不同生存期的多个观测模型的跟踪方法和跟踪装置 |
| CN103218816A (zh) * | 2013-04-18 | 2013-07-24 | 中山大学 | 一种基于视频分析的人群密度估计方法与人流量统计方法 |
| US20150371102A1 (en) * | 2014-06-18 | 2015-12-24 | Delta Electronics, Inc. | Method for recognizing and locating object |
| CN104134078A (zh) * | 2014-07-22 | 2014-11-05 | 华中科技大学 | 一种人流量统计系统中分类器的自动选择方法 |
| CN109800624A (zh) * | 2018-11-27 | 2019-05-24 | 上海眼控科技股份有限公司 | 一种基于行人重识别的多目标跟踪方法 |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112861971A (zh) * | 2021-02-07 | 2021-05-28 | 启迪云控(上海)汽车科技有限公司 | 一种跨点位的路侧感知目标跟踪方法及系统 |
| CN113744299A (zh) * | 2021-09-02 | 2021-12-03 | 上海安维尔信息科技股份有限公司 | 一种相机控制方法、装置、电子设备及存储介质 |
| CN113744299B (zh) * | 2021-09-02 | 2022-07-12 | 上海安维尔信息科技股份有限公司 | 一种相机控制方法、装置、电子设备及存储介质 |
| CN114092516A (zh) * | 2021-11-08 | 2022-02-25 | 国汽智控(北京)科技有限公司 | 一种多目标跟踪检测方法、装置、设备及介质 |
| CN114092516B (zh) * | 2021-11-08 | 2024-05-14 | 国汽智控(北京)科技有限公司 | 一种多目标跟踪检测方法、装置、设备及介质 |
| CN114495612A (zh) * | 2021-12-15 | 2022-05-13 | 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) | 一种面向红外跟踪警戒设备的在线模拟训练装置 |
| CN114495612B (zh) * | 2021-12-15 | 2023-12-26 | 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) | 一种面向红外跟踪警戒设备的在线模拟训练装置 |
| CN116958651A (zh) * | 2023-06-21 | 2023-10-27 | 国网山东省电力公司淄博供电公司 | 一种电网通信光缆实时预警方法、装置、设备及介质 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN110532883A (zh) | 2019-12-03 |
| CN110532883B (zh) | 2023-09-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2021017283A1 (fr) | Procédé et appareil de suivi en ligne basé sur un procédé hors ligne, dispositif informatique et support d'enregistrement | |
| Akyon et al. | Slicing aided hyper inference and fine-tuning for small object detection | |
| CN112950667B (zh) | 一种视频标注方法、装置、设备及计算机可读存储介质 | |
| Laradji et al. | Where are the masks: Instance segmentation with image-level supervision | |
| Li et al. | Implementation of deep-learning algorithm for obstacle detection and collision avoidance for robotic harvester | |
| Munir et al. | LDNet: End-to-end lane marking detection approach using a dynamic vision sensor | |
| US20160358337A1 (en) | Image semantic segmentation | |
| CN110097050B (zh) | 行人检测方法、装置、计算机设备及存储介质 | |
| CN110363817B (zh) | 目标位姿估计方法、电子设备和介质 | |
| CN111931764A (zh) | 一种目标检测方法、目标检测框架及相关设备 | |
| WO2023010758A1 (fr) | Procédé et appareil de détection d'action, dispositif terminal et support de stockage | |
| Ling et al. | Optimization of autonomous driving image detection based on RFAConv and triplet attention | |
| Choi et al. | Mask2map: Vectorized hd map construction using bird’s eye view segmentation masks | |
| Getahun et al. | A deep learning approach for lane detection | |
| Yu et al. | Shallow detail and semantic segmentation combined bilateral network model for lane detection | |
| Tong et al. | A real-time detector of chicken healthy status based on modified YOLO | |
| Zhou et al. | Exploiting low-level representations for ultra-fast road segmentation | |
| CN113269821B (zh) | 消失点提取装置、提取消失点的方法及自动驾驶装置 | |
| CN114333062A (zh) | 基于异构双网络和特征一致性的行人重识别模型训练方法 | |
| CN114445787A (zh) | 非机动车重识别方法及相关设备 | |
| CN115272992B (zh) | 一种车辆姿态估计方法 | |
| CN120318499B (zh) | 基于跨空间频域的无人机目标检测方法及电子设备 | |
| CN111353429A (zh) | 基于眼球转向的感兴趣度方法与系统 | |
| CN110533688A (zh) | 改进型的目标跟踪方法、装置及计算机可读存储介质 | |
| Hassan et al. | PathFormer: A Transformer-Based Framework for Vision-Centric Autonomous Navigation in Off-Road Environments |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19939453 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 19939453 Country of ref document: EP Kind code of ref document: A1 |
|