
CN116843726A - Pedestrian trajectory tracking method and device, electronic equipment and storage medium - Google Patents

Pedestrian trajectory tracking method and device, electronic equipment and storage medium

Info

Publication number
CN116843726A
Authority
CN
China
Prior art keywords
track
pedestrian
trajectory
camera
segmented
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310712202.8A
Other languages
Chinese (zh)
Inventor
张辉
吴正中
张云飞
刘喆
王晓东
张东东
张兵兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Urban Construction Intelligent Control Technology Co ltd
Original Assignee
Beijing Urban Construction Intelligent Control Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Urban Construction Intelligent Control Technology Co ltd filed Critical Beijing Urban Construction Intelligent Control Technology Co ltd
Priority to CN202310712202.8A priority Critical patent/CN116843726A/en
Publication of CN116843726A publication Critical patent/CN116843726A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a pedestrian trajectory tracking method and device, electronic equipment, and a storage medium. The method includes: obtaining a video set collected by multiple cameras, where the video set includes multiple videos and each video corresponds to one camera; identifying pedestrian bounding boxes in each video of the video set; generating segmented movement trajectories of a target pedestrian based on the pedestrian bounding boxes; and obtaining spatial features of the multiple cameras and coupling the multiple segmented movement trajectories according to the spatial features to generate a complete movement trajectory of the target pedestrian. With the invention, the complete movement trajectory of a target pedestrian can be generated without extracting facial features, full-trajectory tracking of pedestrians is achieved, and the technical problem in the related art that a single camera produces incomplete pedestrian trajectories is solved.

Description

Pedestrian trajectory tracking method and device, electronic equipment and storage medium

Technical Field

The present invention relates to the field of safety monitoring, and in particular to a pedestrian trajectory tracking method and device, electronic equipment, and a storage medium.

Background

In the related art, pedestrian behavior recognition and training methods for rail transit include image-based methods and human-skeleton-based methods. Image-based methods are relatively low-cost and require no additional data acquisition equipment, whereas skeleton-based methods require adding depth-sensing sensors and retrofitting existing equipment, which is costly.

In the related art, pedestrian trajectories are mainly tracked with a single camera or through face recognition. Such approaches suffer from heavy passenger flow and severe occlusion, and because surveillance cameras are installed at high positions it is difficult to recognize faces accurately, so these methods are hard to put into practice.

No effective solution has yet been found for the above problems in the related art.

Summary of the Invention

The present invention provides a pedestrian trajectory tracking method and device, electronic equipment, and a storage medium.

According to one aspect of the embodiments of the present application, a pedestrian trajectory tracking method is provided. The method includes: obtaining a video set collected by multiple cameras, where the video set includes multiple videos and each video corresponds to one camera; identifying pedestrian bounding boxes in each video of the video set; generating segmented movement trajectories of a target pedestrian based on the pedestrian bounding boxes; and obtaining spatial features of the multiple cameras and coupling the multiple segmented movement trajectories according to the spatial features to generate a complete movement trajectory of the target pedestrian.

Further, generating the segmented movement trajectories of the target pedestrian based on the pedestrian bounding boxes includes: performing ByteTrack tracking on the pedestrian bounding boxes to generate a preliminary movement trajectory of the target pedestrian; and updating the preliminary movement trajectory with an adaptive Kalman filter model to generate the segmented movement trajectory of the target pedestrian.

Further, updating the preliminary movement trajectory with the adaptive Kalman filter model to generate the segmented movement trajectory of the target pedestrian includes: starting from a preset initial value, iteratively updating the trajectory vectors of the preliminary movement trajectory until the last trajectory vector of the preliminary movement trajectory is reached.

The current trajectory vector is fed into the following prediction-stage formulas to output the predicted value $\hat{x}_k^-$ of the current trajectory vector:

$$\hat{x}_k^- = A\hat{x}_{k-1} + \Gamma w_k;$$

$$P_k^- = A P_{k-1} A^{T} + Q;$$

The predicted value is fed into the following update-stage formulas to output the updated value $\hat{x}_k$ of the current trajectory vector:

$$K_k = P_k^- H^{T}\left(H P_k^- H^{T} + R_k\right)^{-1};$$

$$\hat{x}_k = \hat{x}_k^- + K_k\left(z_k - H\hat{x}_k^-\right);$$

$$P_k = \left(I - K_k H\right)P_k^-;$$

where $\hat{x}_k^-$ is the predicted value of the trajectory vector at the current time, the trajectory vector containing position, velocity and acceleration; $A$ is the state transition matrix, which describes the mathematical relationship between the trajectory vector at the previous time and the trajectory vector at the current time; $\hat{x}_{k-1}$ is the updated value of the trajectory vector at the previous time; $\Gamma$ is the system noise coefficient matrix; $w_k$ is the system noise at the current time; $P_k^-$ is the predicted value of the covariance matrix at the current time; $P_{k-1}$ is the updated value of the covariance matrix at the previous time; $Q$ is the noise covariance matrix; $R_k$ is the measurement noise, which can be adaptively adjusted through the Kalman gain of the previous time; $I$ is the identity matrix; $H$ is the measurement matrix, which describes the mathematical relationship between the measured quantity and the predicted quantity; $K_k$ is the Kalman gain at the current time; $\hat{x}_k$ is the updated value of the trajectory vector; $z_k$ is the measured quantity at the current time; $P_k$ is the updated value of the covariance matrix at the current time; $d_k = (1-b)/(1-b^{k+1})$ is the update parameter and $b$ is the forgetting factor; $p$ is the pedestrian likelihood probability; and $\Omega$ is the region boundary of the shooting scene.

After all trajectory vectors of the preliminary movement trajectory have been iterated, the updated values of all trajectory points are used to output the segmented movement trajectory of the target pedestrian.

Further, obtaining the spatial features of the multiple cameras and coupling the multiple segmented movement trajectories according to the spatial features to generate the complete movement trajectory of the target pedestrian includes: calculating, from the segmented movement trajectories, the walking speed of the target pedestrian and the shortest trajectory distance between adjacent cameras, where the spatial features include the shortest trajectory distance; calculating the distribution probability of each segmented movement trajectory in each camera from the walking speed and the shortest trajectory distance; and selecting, for each camera, the target trajectory curve with the highest probability, and coupling the target trajectory curves of the multiple cameras to generate the complete movement trajectory of the target pedestrian.

Further, calculating the shortest trajectory distance between adjacent cameras from the segmented movement trajectories includes: calculating the Fréchet distance $D$ between the first trajectory curve of the target pedestrian in the first camera and a second trajectory curve with the following formula:

$$D(P,Q) = \inf_{\alpha,\beta}\ \max_{t\in[0,1]} d\!\left(P(\alpha(t)),\,Q(\beta(t))\right),\qquad P \subseteq \Omega_1 \cap \Omega_w,\ \ Q \subseteq \Omega_2 \cap \Omega_w;$$

where $\Omega_1$ denotes the identifiable region of the first camera, $\Omega_2$ denotes the identifiable region of the second camera, $\Omega_w$ denotes the pedestrian-accessible region, $P$ denotes the first trajectory curve of the segmented movement trajectory identified by the first camera, $Q$ denotes the second trajectory curve of an arbitrary segmented movement trajectory identified by the second camera, $\inf$ denotes the infimum, $d$ is the Euclidean distance, $t$ is time, $\alpha(t)$ and $\beta(t)$ are the position description functions of each possible pair of positions varying with time and take values in $[0,1]$, and $d(P(\alpha(t)),Q(\beta(t)))$ is the distance between points on the two trajectory curves $P$ and $Q$.

Further, calculating the distribution probability of each segmented movement trajectory in each camera from the walking speed and the shortest trajectory distance includes: for each camera, calculating the distribution probability $p$ with the following formula:

$$p(v,D)=\frac{1}{2\pi\sigma_v\sigma_D\sqrt{1-\rho^2}}\exp\!\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{(v-\mu_v)^2}{\sigma_v^2}-\frac{2\rho(v-\mu_v)(D-\mu_D)}{\sigma_v\sigma_D}+\frac{(D-\mu_D)^2}{\sigma_D^2}\right]\right\};$$

where $\mu_v$ is the mean walking speed of all pedestrians identified by the current camera, $\mu_D$ is the mean of the shortest trajectory distances of all segmented movement trajectories identified by the current camera, $\sigma_v$ is the standard deviation of the speed, $\sigma_D$ is the standard deviation of the distance, $\rho$ is the correlation coefficient, $v$ is the walking speed of the target pedestrian in the current segmented movement trajectory, and $D$ is the shortest trajectory distance of the current segmented movement trajectory.

Further, before calculating the distribution probability of each segmented movement trajectory in each camera from the walking speed and the shortest trajectory distance, the method further includes: for each camera, judging whether the corresponding maximum distribution probability is less than a preset threshold; and if the corresponding maximum distribution probability is less than the preset threshold, discarding the target trajectory curve of the corresponding camera.

According to another aspect of the embodiments of the present application, a pedestrian trajectory tracking device is further provided, including: an acquisition module configured to obtain a video set collected by multiple cameras, where the video set includes multiple videos and each video corresponds to one camera; a recognition module configured to identify pedestrian bounding boxes in each video of the video set; a generation module configured to generate segmented movement trajectories of a target pedestrian based on the pedestrian bounding boxes; and a coupling module configured to obtain spatial features of the multiple cameras and couple the multiple segmented movement trajectories according to the spatial features to generate a complete movement trajectory of the target pedestrian.

Further, the generation module includes: a first generation unit configured to perform ByteTrack tracking on the pedestrian bounding boxes to generate a preliminary movement trajectory of the target pedestrian; and a second generation unit configured to update the preliminary movement trajectory with an adaptive Kalman filter model to generate the segmented movement trajectory of the target pedestrian.

Further, the second generation unit includes an iteration subunit configured to, starting from a preset initial value, iteratively update the trajectory vectors of the preliminary movement trajectory until the last trajectory vector of the preliminary movement trajectory: the current trajectory vector is fed into the following prediction-stage formulas to output the predicted value $\hat{x}_k^-$ of the current trajectory vector:

$$\hat{x}_k^- = A\hat{x}_{k-1} + \Gamma w_k;$$

$$P_k^- = A P_{k-1} A^{T} + Q;$$

and the predicted value is fed into the following update-stage formulas to output the updated value $\hat{x}_k$ of the current trajectory vector:

$$K_k = P_k^- H^{T}\left(H P_k^- H^{T} + R_k\right)^{-1};$$

$$\hat{x}_k = \hat{x}_k^- + K_k\left(z_k - H\hat{x}_k^-\right);$$

$$P_k = \left(I - K_k H\right)P_k^-;$$

where $\hat{x}_k^-$ is the predicted value of the trajectory vector at the current time, the trajectory vector containing position, velocity and acceleration; $A$ is the state transition matrix, which describes the mathematical relationship between the trajectory vector at the previous time and the trajectory vector at the current time; $\hat{x}_{k-1}$ is the updated value of the trajectory vector at the previous time; $\Gamma$ is the system noise coefficient matrix; $w_k$ is the system noise at the current time; $P_k^-$ is the predicted value of the covariance matrix at the current time; $P_{k-1}$ is the updated value of the covariance matrix at the previous time; $Q$ is the noise covariance matrix; $R_k$ is the measurement noise, which can be adaptively adjusted through the Kalman gain of the previous time; $I$ is the identity matrix; $H$ is the measurement matrix, which describes the mathematical relationship between the measured quantity and the predicted quantity; $K_k$ is the Kalman gain at the current time; $\hat{x}_k$ is the updated value of the trajectory vector; $z_k$ is the measured quantity at the current time; $P_k$ is the updated value of the covariance matrix at the current time; $d_k = (1-b)/(1-b^{k+1})$ is the update parameter and $b$ is the forgetting factor; $p$ is the pedestrian likelihood probability; and $\Omega$ is the region boundary of the shooting scene. The second generation unit further includes an output subunit configured to, after all trajectory values of the preliminary movement trajectory have been iterated, output the segmented movement trajectory of the target pedestrian using the updated values of all trajectory points.

Further, the coupling module includes: a first calculation unit configured to calculate, from the segmented movement trajectories, the walking speed of the target pedestrian and the shortest trajectory distance between adjacent cameras, where the spatial features include the shortest trajectory distance; a second calculation unit configured to calculate the distribution probability of each segmented movement trajectory in each camera from the walking speed and the shortest trajectory distance; and a coupling unit configured to select, for each camera, the target trajectory curve with the highest probability and couple the target trajectory curves of the multiple cameras to generate the complete movement trajectory of the target pedestrian.

Further, the first calculation unit includes a calculation subunit configured to calculate the Fréchet distance $D$ between the first trajectory curve of the target pedestrian in the first camera and a second trajectory curve with the following formula:

$$D(P,Q) = \inf_{\alpha,\beta}\ \max_{t\in[0,1]} d\!\left(P(\alpha(t)),\,Q(\beta(t))\right),\qquad P \subseteq \Omega_1 \cap \Omega_w,\ \ Q \subseteq \Omega_2 \cap \Omega_w;$$

where $\Omega_1$ denotes the identifiable region of the first camera, $\Omega_2$ denotes the identifiable region of the second camera, $\Omega_w$ denotes the pedestrian-accessible region, $P$ denotes the first trajectory curve of the segmented movement trajectory identified by the first camera, $Q$ denotes the second trajectory curve of an arbitrary segmented movement trajectory identified by the second camera, $\inf$ denotes the infimum, $d$ is the Euclidean distance, $t$ is time, $\alpha(t)$ and $\beta(t)$ are the position description functions of each possible pair of positions varying with time and take values in $[0,1]$, and $d(P(\alpha(t)),Q(\beta(t)))$ is the distance between points on the two trajectory curves $P$ and $Q$.

Further, the second calculation unit includes a calculation subunit configured to calculate, for each camera, the distribution probability $p$ with the following formula:

$$p(v,D)=\frac{1}{2\pi\sigma_v\sigma_D\sqrt{1-\rho^2}}\exp\!\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{(v-\mu_v)^2}{\sigma_v^2}-\frac{2\rho(v-\mu_v)(D-\mu_D)}{\sigma_v\sigma_D}+\frac{(D-\mu_D)^2}{\sigma_D^2}\right]\right\};$$

where $\mu_v$ is the mean walking speed of all pedestrians identified by the current camera, $\mu_D$ is the mean of the shortest trajectory distances of all segmented movement trajectories identified by the current camera, $\sigma_v$ is the standard deviation of the speed, $\sigma_D$ is the standard deviation of the distance, $\rho$ is the correlation coefficient, $v$ is the walking speed of the target pedestrian in the current segmented movement trajectory, and $D$ is the shortest trajectory distance of the current segmented movement trajectory.

Further, the coupling module further includes: a judging unit configured to, before the second calculation unit calculates the distribution probability of each segmented movement trajectory in each camera from the walking speed and the shortest trajectory distance, judge, for each camera, whether the corresponding maximum distribution probability is less than a preset threshold; and a removal unit configured to discard the target trajectory curve of the corresponding camera if the corresponding maximum distribution probability is less than the preset threshold.

According to another aspect of the embodiments of the present application, a storage medium is further provided. The storage medium includes a stored program, and the above steps are executed when the program runs.

According to another aspect of the embodiments of the present application, an electronic device is further provided, including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with one another through the communication bus; the memory is configured to store a computer program, and the processor is configured to execute the steps of the above method by running the program stored in the memory.

An embodiment of the present application further provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the above method.

With the present invention, a video set collected by multiple cameras is obtained, where the video set includes multiple videos and each video corresponds to one camera; pedestrian bounding boxes are identified in each video of the video set; segmented movement trajectories of a target pedestrian are generated based on the pedestrian bounding boxes; and spatial features of the multiple cameras are obtained and the multiple segmented movement trajectories are coupled according to the spatial features to generate a complete movement trajectory of the target pedestrian. By identifying the segmented movement trajectories of the target pedestrian in the video collected by each camera and coupling the multiple segmented movement trajectories according to the spatial features of the cameras, the complete movement trajectory of the target pedestrian can be generated without extracting facial features, full-trajectory tracking of pedestrians is achieved, and the technical problem in the related art that a single camera produces incomplete pedestrian trajectories is solved.

Brief Description of the Drawings

The drawings described here are used to provide a further understanding of the present invention and constitute a part of this application. The illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:

Figure 1 is a block diagram of the hardware structure of a computer according to an embodiment of the present invention;

Figure 2 is a flowchart of a pedestrian trajectory tracking method according to an embodiment of the present invention;

Figure 3 is a schematic diagram of adaptive Kalman filtering in an embodiment of the present invention;

Figure 4 is a schematic diagram of coupling target trajectory curves in an embodiment of the present invention;

Figure 5 is a schematic flowchart of trajectory tracking in an embodiment of the present invention;

Figure 6 is a structural block diagram of a pedestrian trajectory tracking device according to an embodiment of the present invention.

Detailed Description of the Embodiments

In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of this application. It should be noted that, provided there is no conflict, the embodiments of this application and the features in the embodiments may be combined with one another.

It should be noted that the terms "first", "second", etc. in the description and claims of this application and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the application described herein can be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, product or device that comprises a series of steps or units need not be limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product or device.

Embodiment 1

The method embodiment provided in Embodiment 1 of the present application can be executed in a controller, a server, a computer, a tablet or a similar computing device. Taking running on a computer as an example, Figure 1 is a block diagram of the hardware structure of a computer according to an embodiment of the present invention. As shown in Figure 1, the computer may include one or more processors 102 (only one is shown in Figure 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data. Optionally, the computer may also include a transmission device 106 for communication functions and an input/output device 108. Those of ordinary skill in the art can understand that the structure shown in Figure 1 is only illustrative and does not limit the structure of the computer. For example, the computer may include more or fewer components than shown in Figure 1, or have a different configuration from that shown in Figure 1.

The memory 104 can be used to store computer programs, for example, software programs and modules of application software, such as a computer program corresponding to a pedestrian trajectory tracking method in an embodiment of the present invention. The processor 102 runs the computer program stored in the memory 104, thereby executing various functional applications and data processing, that is, implementing the above method. The memory 104 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, and such remote memory may be connected to the computer through a network. Examples of the above network include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.

The transmission device 106 is used to receive or send data via a network. Specific examples of the above network may include a wireless network provided by the computer's communication provider. In one example, the transmission device 106 includes a network interface controller (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In one example, the transmission device 106 may be a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.

This embodiment provides a pedestrian trajectory tracking method. Figure 2 is a flowchart of a pedestrian trajectory tracking method according to an embodiment of the present invention. As shown in Figure 2, the process includes the following steps:

Step S202: obtain a video set collected by multiple cameras, where the video set includes multiple videos and each video corresponds to one camera.

The solution of this embodiment can be applied in places such as metro stations, railway stations and bus stations. Multiple cameras are installed in the place, and each camera collects images of its corresponding area. The cameras in this embodiment are visual cameras, i.e., two-dimensional cameras.

Step S204: identify pedestrian bounding boxes in each video of the video set.

In this embodiment, the video collected by a camera consists of multiple frames, and each frame may contain several pedestrians. Through pedestrian bounding-box recognition, the pedestrian boxes, such as rectangular boxes, can be detected.

Step S206: generate segmented movement trajectories of the target pedestrian based on the pedestrian bounding boxes.

Since each camera collects a video of multiple consecutive frames, continuously recognizing the consecutive image frames in the video yields the segmented movement trajectory of a pedestrian within the current video segment. In this embodiment, each camera corresponds to several segmented movement trajectories, each segmented movement trajectory corresponds to one pedestrian, each target pedestrian corresponds to several segmented movement trajectories, and each of those segmented movement trajectories corresponds to one camera.
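To make this camera/pedestrian bookkeeping concrete, a minimal container sketch in Python could look as follows; the class and field names are illustrative assumptions, not terms defined by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SegmentTrack:
    """One segmented movement trajectory: a single pedestrian seen by a single camera."""
    camera_id: int                                 # camera that produced this segment
    pedestrian_id: int                             # local track id assigned within that camera
    points: list = field(default_factory=list)     # sequence of (t, x, y) trajectory points

# Each camera holds several segments; each target pedestrian is later associated
# with several segments, one per camera that observed it.
tracks_by_camera: dict[int, list[SegmentTrack]] = {}

def add_segment(segment: SegmentTrack) -> None:
    tracks_by_camera.setdefault(segment.camera_id, []).append(segment)
```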

Step S208: obtain the spatial features of the multiple cameras, couple the multiple segmented movement trajectories according to the spatial features, and generate the complete movement trajectory of the target pedestrian.

Optionally, the spatial features may be the distance between segmented movement trajectories recognized by two adjacent cameras, the layout positions of the cameras, spatial relationships, and the like.

Through the above steps, a video set collected by multiple cameras is obtained, where the video set includes multiple videos and each video corresponds to one camera; pedestrian bounding boxes are identified in each video of the video set; segmented movement trajectories of the target pedestrian are generated based on the pedestrian bounding boxes; and spatial features of the multiple cameras are obtained and the multiple segmented movement trajectories are coupled according to the spatial features to generate the complete movement trajectory of the target pedestrian. By identifying the segmented movement trajectories of the target pedestrian in the video collected by each camera and coupling the multiple segmented movement trajectories according to the spatial features of the cameras, the complete movement trajectory of the target pedestrian can be generated without extracting facial features, full-trajectory tracking of pedestrians is achieved, and the technical problem in the related art that a single camera produces incomplete pedestrian trajectories is solved.
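Read as a processing pipeline, steps S202 to S208 can be outlined as follows. This is only a sketch; the detector, per-camera tracker and coupling step are passed in as callables because the patent does not define concrete APIs for them.

```python
def track_pedestrian(videos, detect_boxes, build_segments, couple_segments,
                     camera_layout, target_id):
    """End-to-end outline of steps S202-S208.

    videos:          {camera_id: video_source}, one video per camera (S202)
    detect_boxes:    callable(video) -> per-frame pedestrian bounding boxes (S204)
    build_segments:  callable(boxes) -> segmented trajectories, e.g. ByteTrack + adaptive KF (S206)
    couple_segments: callable(segments_per_camera, camera_layout, target_id) -> full trajectory (S208)
    """
    segments_per_camera = {}
    for camera_id, video in videos.items():
        boxes = detect_boxes(video)                              # S204: boxes per frame
        segments_per_camera[camera_id] = build_segments(boxes)   # S206: segmented trajectories
    # S208: couple the per-camera segments using the spatial layout of the cameras
    return couple_segments(segments_per_camera, camera_layout, target_id)
```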

In one implementation of this embodiment, generating the segmented movement trajectories of the target pedestrian based on the pedestrian bounding boxes includes:

S11: performing ByteTrack tracking on the pedestrian bounding boxes to generate a preliminary movement trajectory of the target pedestrian;

S12: updating the preliminary movement trajectory with an adaptive Kalman filter model to generate the segmented movement trajectory of the target pedestrian.

In one example, updating the preliminary movement trajectory with the adaptive Kalman filter model to generate the segmented movement trajectory of the target pedestrian includes: starting from a preset initial value, iteratively updating the trajectory vectors of the preliminary movement trajectory until the last trajectory vector of the preliminary movement trajectory is reached. The current trajectory vector is fed into the following prediction-stage formulas to output the predicted value $\hat{x}_k^-$ of the current trajectory vector:

$$\hat{x}_k^- = A\hat{x}_{k-1} + \Gamma w_k;$$

$$P_k^- = A P_{k-1} A^{T} + Q;$$

The predicted value is fed into the following update-stage formulas to output the updated value $\hat{x}_k$ of the current trajectory vector:

$$K_k = P_k^- H^{T}\left(H P_k^- H^{T} + R_k\right)^{-1};$$

$$\hat{x}_k = \hat{x}_k^- + K_k\left(z_k - H\hat{x}_k^-\right);$$

$$P_k = \left(I - K_k H\right)P_k^-;$$

where $\hat{x}_k^-$ is the predicted value of the trajectory vector at the current time, the trajectory vector containing position, velocity and acceleration; $A$ is the state transition matrix, which describes the mathematical relationship between the trajectory vector at the previous time and the trajectory vector at the current time; $\hat{x}_{k-1}$ is the updated value of the trajectory vector at the previous time; $\Gamma$ is the system noise coefficient matrix; $w_k$ is the system noise at the current time; $P_k^-$ is the predicted value of the covariance matrix at the current time; $P_{k-1}$ is the updated value of the covariance matrix at the previous time; $Q$ is the noise covariance matrix; $R_k$ is the measurement noise, which can be adaptively adjusted through the Kalman gain of the previous time; $I$ is the identity matrix; $H$ is the measurement matrix, which describes the mathematical relationship between the measured quantity and the predicted quantity; $K_k$ is the Kalman gain at the current time; $\hat{x}_k$ is the updated value of the trajectory vector; $z_k$ is the measured quantity at the current time; $P_k$ is the updated value of the covariance matrix at the current time; $d_k = (1-b)/(1-b^{k+1})$ is the update parameter and $b$ is the forgetting factor; $p$ is the pedestrian likelihood probability; and $\Omega$ is the region boundary of the shooting scene.

After all trajectory vectors of the preliminary movement trajectory have been iterated, the updated values of all trajectory points are used to output the segmented movement trajectory of the target pedestrian.

When generating the segmented movement trajectory of each pedestrian, trajectory generation is difficult in scenes such as metro station halls because of heavy passenger flow. For this reason, this embodiment first generates a preliminary trajectory with ByteTrack, and then updates the detected trajectory with an improved adaptive Kalman filter (AKF) to generate the optimized segmented movement trajectory.

Figure 3 is a schematic diagram of adaptive Kalman filtering in an embodiment of the present invention. Compared with conventional adaptive Kalman filtering, the algorithm of this embodiment adds boundary constraints: since walls exist in scenes such as metro stations, the influence of the boundary constraints must be considered in both the prediction and the update. Meanwhile, the update parameters used in the R self-adjustment and Q self-adjustment are related to the corresponding pedestrian likelihood probability obtained from ByteTrack.
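As a concrete illustration of this prediction/update cycle, the sketch below codes a simplified version: a constant-velocity Kalman filter whose measurement noise R is reweighted from the tracker's pedestrian likelihood p with a forgetting factor, and whose position estimate is clamped to the scene boundary. The noise models and the Sage-Husa-style weighting are assumptions, not the patent's exact filter.

```python
import numpy as np

def akf_smooth(measurements, confidences, bounds, dt=1.0, b=0.95):
    """Smooth a segment of (x, y) detections with a simplified adaptive Kalman filter.

    measurements: (N, 2) array of detected positions
    confidences:  (N,) pedestrian likelihood p from the tracker, used to rescale R
    bounds:       (xmin, ymin, xmax, ymax) region boundary of the shooting scene
    """
    measurements = np.asarray(measurements, dtype=float)
    confidences = np.asarray(confidences, dtype=float)
    A = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)           # state transition (constant velocity)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)            # measurement matrix
    Q = np.eye(4) * 0.01                                  # process noise covariance (assumed)
    R = np.eye(2) * 1.0                                   # measurement noise, adapted below
    x = np.array([*measurements[0], 0.0, 0.0])            # initial state [x, y, vx, vy]
    P = np.eye(4)
    smoothed = []
    for k, (z, p) in enumerate(zip(measurements, confidences), start=1):
        d = (1 - b) / (1 - b ** (k + 1))                  # forgetting-factor weight (assumed form)
        # prediction stage
        x = A @ x
        P = A @ P @ A.T + Q
        # crude adaptive R: low detection confidence -> trust the measurement less
        R = (1 - d) * R + d * np.eye(2) * (1.0 / max(p, 1e-3))
        # update stage
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        # boundary constraint: keep the estimate inside the scene region
        x[0] = np.clip(x[0], bounds[0], bounds[2])
        x[1] = np.clip(x[1], bounds[1], bounds[3])
        smoothed.append(x[:2].copy())
    return np.asarray(smoothed)
```

The two departures from a plain Kalman filter mirror the description above: R is revisited at every step from the detection confidence, and the estimate is kept inside the region boundary.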

In this embodiment, obtaining the spatial features of the multiple cameras, coupling the multiple segmented movement trajectories according to the spatial features, and generating the complete movement trajectory of the target pedestrian includes:

S21: calculating, from the segmented movement trajectories, the walking speed of the target pedestrian and the shortest trajectory distance between adjacent cameras, where the spatial features include the shortest trajectory distance.

In some examples, calculating the shortest trajectory distance between adjacent cameras from the segmented movement trajectories includes: calculating the Fréchet distance $D$ between the first trajectory curve of the target pedestrian in the first camera and a second trajectory curve with the following formula:

$$D(P,Q) = \inf_{\alpha,\beta}\ \max_{t\in[0,1]} d\!\left(P(\alpha(t)),\,Q(\beta(t))\right),\qquad P \subseteq \Omega_1 \cap \Omega_w,\ \ Q \subseteq \Omega_2 \cap \Omega_w;$$

where $\Omega_1$ denotes the identifiable region of the first camera, $\Omega_2$ denotes the identifiable region of the second camera, $\Omega_w$ denotes the pedestrian-accessible region, $P$ denotes the first trajectory curve of the segmented movement trajectory identified by the first camera, $Q$ denotes the second trajectory curve of an arbitrary segmented movement trajectory identified by the second camera, $\inf$ denotes the infimum, $d$ is the Euclidean distance, $t$ is time, $\alpha(t)$ and $\beta(t)$ are the position description functions of each possible pair of positions varying with time and take values in $[0,1]$, and $d(P(\alpha(t)),Q(\beta(t)))$ is the distance between points on the two trajectory curves $P$ and $Q$.

This embodiment extracts spatial features from the trajectories generated by the individual cameras and the camera positions. The likelihood that the same pedestrian appears under different cameras is related to the walking speed and to the distance between the identifiable trajectories. First, taking the mean trajectory curves detectable by the different cameras as input, the shortest distance is computed via the Fréchet distance.

The principle of applying the Fréchet distance in a metro station is as follows:

$$D(P,Q) = \inf_{\alpha,\beta}\ \max_{t\in[0,1]} d\!\left(P(\alpha(t)),\,Q(\beta(t))\right);$$

where $D$ denotes the spatial metric distance, i.e., the closest distance between the two trajectories, $P$ and $Q$ denote the two trajectory curves, $\inf$ denotes the infimum, $d$ is the Euclidean distance, $t$ is time, $\alpha(t)$ and $\beta(t)$ are the position description functions of each possible pair of positions varying with time and take values in $[0,1]$, and $d(P(\alpha(t)),Q(\beta(t)))$ is the distance between points on the two trajectory curves $P$ and $Q$.

Further, the boundary constraint problem in the metro station scenario can be expressed by the following formula:

$$P \subseteq \Omega_1 \cap \Omega_w,\qquad Q \subseteq \Omega_2 \cap \Omega_w;$$

where $\Omega_1$ and $\Omega_2$ denote the identifiable regions of camera 1 and camera 2, and $\Omega_w$ denotes the pedestrian-accessible region.
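Between two sampled trajectory curves, the Fréchet distance is commonly evaluated with the discrete dynamic-programming recurrence sketched below. This is a generic, assumed implementation; restricting the candidate curves to the camera-visible and pedestrian-accessible regions would be a separate pre-processing step.

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polylines P and Q (sequences of (x, y) points)."""
    def dist(a, b):  # Euclidean distance between two points
        return math.hypot(a[0] - b[0], a[1] - b[1])

    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j): Fréchet distance of the prefixes P[:i+1] and Q[:j+1]
        if i == 0 and j == 0:
            return dist(P[0], Q[0])
        if i == 0:
            return max(c(0, j - 1), dist(P[0], Q[j]))
        if j == 0:
            return max(c(i - 1, 0), dist(P[i], Q[0]))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), dist(P[i], Q[j]))

    return c(len(P) - 1, len(Q) - 1)   # recursion depth is fine for short sampled tracks

# Example: shortest coupling distance between a track from camera 1 and one from camera 2
track_cam1 = [(0, 0), (1, 0), (2, 1)]
track_cam2 = [(0, 1), (1, 1), (2, 2)]
print(discrete_frechet(track_cam1, track_cam2))
```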

S22: calculating the distribution probability of each segmented movement trajectory in each camera from the walking speed and the shortest trajectory distance.

Optionally, calculating the distribution probability of each segmented movement trajectory in each camera from the walking speed and the shortest trajectory distance includes: for each camera, calculating the distribution probability $p$ with the following formula:

$$p(v,D)=\frac{1}{2\pi\sigma_v\sigma_D\sqrt{1-\rho^2}}\exp\!\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{(v-\mu_v)^2}{\sigma_v^2}-\frac{2\rho(v-\mu_v)(D-\mu_D)}{\sigma_v\sigma_D}+\frac{(D-\mu_D)^2}{\sigma_D^2}\right]\right\};$$

where $\mu_v$ is the mean walking speed of all pedestrians identified by the current camera, $\mu_D$ is the mean of the shortest trajectory distances of all segmented movement trajectories identified by the current camera, $\sigma_v$ is the standard deviation of the speed, $\sigma_D$ is the standard deviation of the distance, $\rho$ is the correlation coefficient, $v$ is the walking speed of the target pedestrian in the current segmented movement trajectory, and $D$ is the shortest trajectory distance of the current segmented movement trajectory.
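Treating the distribution probability as the bivariate Gaussian density above, with the per-camera statistics assumed to be pre-computed, a direct evaluation might look like this:

```python
import math

def track_probability(v, D, mu_v, mu_D, sigma_v, sigma_D, rho):
    """Bivariate Gaussian density over walking speed v and shortest trajectory distance D."""
    zv = (v - mu_v) / sigma_v
    zD = (D - mu_D) / sigma_D
    norm = 2 * math.pi * sigma_v * sigma_D * math.sqrt(1 - rho ** 2)
    expo = -(zv ** 2 - 2 * rho * zv * zD + zD ** 2) / (2 * (1 - rho ** 2))
    return math.exp(expo) / norm

# Example: a segment whose speed and distance are close to the camera's averages scores high
print(track_probability(v=1.3, D=4.0, mu_v=1.2, mu_D=5.0, sigma_v=0.3, sigma_D=2.0, rho=0.2))
```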

S23: selecting, for each camera, the target trajectory curve with the highest probability, and coupling the target trajectory curves of the multiple cameras to generate the complete movement trajectory of the target pedestrian.

Figure 4 is a schematic diagram of coupling target trajectory curves in an embodiment of the present invention, where $\Omega_1$ and $\Omega_2$ denote the identifiable regions of camera 1 and camera 2, trajectory 1 is the target trajectory curve with the highest probability for camera 1, and trajectory 2 is the target trajectory curve with the highest probability for camera 2. The highest-probability curves correspond to the same curve under different cameras and can be coupled for output.

In one example, before calculating the distribution probability of each segmented movement trajectory in each camera from the walking speed and the shortest trajectory distance, the method further includes: for each camera, judging whether the corresponding maximum distribution probability is less than a preset threshold; and if the corresponding maximum distribution probability is less than the preset threshold, discarding the target trajectory curve of the corresponding camera.

Through this elimination, the target trajectory curves of cameras that never captured the target pedestrian can be filtered out, preventing the movement trajectory from jumping around erratically and improving the accuracy of the complete movement trajectory of the target pedestrian.
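Putting the per-camera selection and the threshold filter together, a minimal sketch of the coupling step could look as follows; the function name, the data layout and the ordering of the surviving curves by camera id are illustrative assumptions.

```python
def couple_trajectories(candidates_per_camera, threshold=0.05):
    """candidates_per_camera: {camera_id: [(probability, trajectory_points), ...]}

    Keeps the highest-probability curve per camera, discards cameras whose best
    candidate falls below the threshold, and splices the survivors into one list.
    Ordering by camera id is a simplification; a real system would order by time.
    """
    full_track = []
    for camera_id in sorted(candidates_per_camera):
        candidates = candidates_per_camera[camera_id]
        if not candidates:
            continue
        best_p, best_track = max(candidates, key=lambda c: c[0])
        if best_p < threshold:
            continue  # this camera most likely never saw the target pedestrian
        full_track.extend(best_track)
    return full_track
```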

Figure 5 is a schematic flowchart of trajectory tracking according to an embodiment of the present invention. A multi-camera-coupled method and device for complete trajectory tracking of rail transit passengers is proposed, which can accurately depict a passenger's walking trajectory inside the station through the coupling and association between cameras combined with spatial features, and includes the following steps:

Step 1: data collection. The data of each camera is transmitted to the server in real time for image data processing.

Step 2: pedestrian recognition. Pedestrians under a single camera are recognized through the YOLO framework.
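One common way to obtain per-frame pedestrian boxes for this step is an off-the-shelf YOLO detector. The fragment below uses the Ultralytics package as an assumed example; the patent does not name a specific YOLO version, library or model file.

```python
import cv2
from ultralytics import YOLO   # assumed off-the-shelf detector, not specified by the patent

model = YOLO("yolov8n.pt")     # hypothetical pretrained weights

def pedestrian_boxes(video_path):
    """Yield (frame_index, [x1, y1, x2, y2] boxes) for the 'person' class of each frame."""
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = model(frame, verbose=False)[0]
        boxes = [b.xyxy[0].tolist() for b in result.boxes if int(b.cls) == 0]  # class 0 = person
        yield frame_idx, boxes
        frame_idx += 1
    cap.release()
```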

Step 3: trajectory generation. In the metro station hall scene, the flow of people is large and trajectory generation is difficult. For this reason, the present invention first generates a preliminary trajectory with ByteTrack, then updates the detected trajectory with an improved adaptive Kalman filter (AKF) and generates an optimized trajectory.

Step 4: spatial feature generation. Spatial features are extracted from the generated trajectories and the camera positions, and the trajectory distance is calculated via the Fréchet distance.

Step 5: trajectory tracking. The complete trajectory path is obtained through the probability distribution curves.

With the solution of this embodiment, an improved adaptive Kalman filtering method is adopted, camera spatial features are extracted based on an improved Fréchet distance, and multi-camera trajectory curves are coupled. This addresses the problems that cameras are dispersed in scenes such as metros and that multi-view face fusion detection is difficult, as well as the problems of dense passenger flow, frequent occlusion and difficult path tracking. Where face information is hard to collect, recognizing paths and human-body features to judge whether the same passenger is seen by multiple cameras solves the trajectory-tracking problem when facial features cannot be recognized, solves the problem of unstable trajectories under single-camera monitoring in crowded conditions, and solves the problem of full-trajectory tracking inside a metro station.

From the description of the above embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk or an optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to execute the methods described in the various embodiments of the present invention.

Embodiment 2

This embodiment further provides a pedestrian trajectory tracking device, which is used to implement the above embodiments and preferred implementations; what has already been explained will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.

Figure 6 is a structural block diagram of a pedestrian trajectory tracking device according to an embodiment of the present invention. As shown in Figure 6, the device includes: an acquisition module 60, a recognition module 62, a generation module 64 and a coupling module 66, where:

the acquisition module 60 is configured to obtain a video set collected by multiple cameras, where the video set includes multiple videos and each video corresponds to one camera;

the recognition module 62 is configured to identify pedestrian bounding boxes in each video of the video set;

the generation module 64 is configured to generate segmented movement trajectories of a target pedestrian based on the pedestrian bounding boxes;

the coupling module 66 is configured to obtain spatial features of the multiple cameras and couple the multiple segmented movement trajectories according to the spatial features to generate a complete movement trajectory of the target pedestrian.

Optionally, the generation module includes: a first generation unit configured to perform ByteTrack tracking on the pedestrian bounding boxes to generate a preliminary movement trajectory of the target pedestrian; and a second generation unit configured to update the preliminary movement trajectory with an adaptive Kalman filter model to generate the segmented movement trajectory of the target pedestrian.

Optionally, the second generation unit includes an iteration subunit configured to, starting from a preset initial value, iteratively update the trajectory vectors of the preliminary movement trajectory until the last trajectory vector of the preliminary movement trajectory: the current trajectory vector is fed into the following prediction-stage formulas to output the predicted value $\hat{x}_k^-$ of the current trajectory vector:

$$\hat{x}_k^- = A\hat{x}_{k-1} + \Gamma w_k;$$

$$P_k^- = A P_{k-1} A^{T} + Q;$$

and the predicted value is fed into the following update-stage formulas to output the updated value $\hat{x}_k$ of the current trajectory vector:

$$K_k = P_k^- H^{T}\left(H P_k^- H^{T} + R_k\right)^{-1};$$

$$\hat{x}_k = \hat{x}_k^- + K_k\left(z_k - H\hat{x}_k^-\right);$$

$$P_k = \left(I - K_k H\right)P_k^-;$$

where $\hat{x}_k^-$ is the predicted value of the trajectory vector at the current time, the trajectory vector containing position, velocity and acceleration; $A$ is the state transition matrix, which describes the mathematical relationship between the trajectory vector at the previous time and the trajectory vector at the current time; $\hat{x}_{k-1}$ is the updated value of the trajectory vector at the previous time; $\Gamma$ is the system noise coefficient matrix; $w_k$ is the system noise at the current time; $P_k^-$ is the predicted value of the covariance matrix at the current time; $P_{k-1}$ is the updated value of the covariance matrix at the previous time; $Q$ is the noise covariance matrix; $R_k$ is the measurement noise, which can be adaptively adjusted through the Kalman gain of the previous time; $I$ is the identity matrix; $H$ is the measurement matrix, which describes the mathematical relationship between the measured quantity and the predicted quantity; $K_k$ is the Kalman gain at the current time; $\hat{x}_k$ is the updated value of the trajectory vector; $z_k$ is the measured quantity at the current time; $P_k$ is the updated value of the covariance matrix at the current time; $d_k = (1-b)/(1-b^{k+1})$ is the update parameter and $b$ is the forgetting factor; $p$ is the pedestrian likelihood probability; and $\Omega$ is the region boundary of the shooting scene. The second generation unit further includes an output subunit configured to, after all trajectory vectors of the preliminary movement trajectory have been iterated, output the segmented movement trajectory of the target pedestrian using the updated values of all trajectory points.

可选的,所述耦合模块包括:第一计算单元,用于根据所述分段移动轨迹计算所述目标行人的行走速度和相邻摄像头之间的轨迹最短距离,其中,所述空间特征包括所述轨迹最短距离;第二计算单元,用于根据所述行走速度和所述轨迹最短距离计算每个摄像头中每条分段移动轨迹的分布概率;耦合单元,用于选择每个摄像头概率最大的目标轨迹曲线,采用多个摄像头的目标轨迹曲线耦合生成所述目标行人的完整移动轨迹。Optionally, the coupling module includes: a first calculation unit, configured to calculate the walking speed of the target pedestrian and the shortest distance between adjacent cameras according to the segmented movement trajectory, wherein the spatial features include The shortest distance of the trajectory; a second calculation unit, used to calculate the distribution probability of each segmented movement trajectory in each camera according to the walking speed and the shortest distance of the trajectory; a coupling unit, used to select the maximum probability of each camera The target trajectory curve of multiple cameras is coupled to generate the complete movement trajectory of the target pedestrian.

可选的,所述第一计算单元包括:计算子单元,用于采用以下公式计算所述目标行 人在第一摄像头中的第一轨迹曲线与第二轨迹曲线之间的弗朗明歇距离D:;其中,代表第一摄像头的可识 别区域,代表第二摄像头的可识别区域,代表行人可通行区域,P代表第一摄像头识别 的分段移动轨迹的第一轨迹曲线,Q代表第二摄像头识别的任意分段移动轨迹的第二轨迹 曲线,inf代表下确界d为欧式距离,t为时间,为随时间变化的每一对可能的位置描 述函数,为[0,1]的值,为PQ两条轨迹曲线上所有点的距离。 Optionally, the first calculation unit includes: a calculation subunit for calculating the Flamingche distance D between the first trajectory curve and the second trajectory curve of the target pedestrian in the first camera using the following formula: : ;in, Represents the identifiable area of the first camera, Represents the identifiable area of the second camera, represents the pedestrian-accessible area, P represents the first trajectory curve of the segmented movement trajectory identified by the first camera, Q represents the second trajectory curve of any segmented movement trajectory identified by the second camera, inf represents the lower bound d is the Euclidean distance , t is time, is each possible pair of position description functions that change over time, and is the value of [0,1], is the distance between all points on the two trajectory curves of PQ.

Optionally, the second calculation unit includes a calculation subunit configured to calculate, for each camera, the distribution probability p using the following formula; wherein μ_v is the mean value of the walking speeds of all pedestrians identified by the current camera, μ_D is the mean value of the shortest trajectory distances of all segmented movement trajectories identified by the current camera, σ_v is the standard deviation of the speed, σ_D is the standard deviation of the distance, ρ is the correlation coefficient, v is the walking speed of the target pedestrian in the current segmented movement trajectory, and D is the shortest trajectory distance of the current segmented movement trajectory.
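Given the parameters listed above (per-camera means, standard deviations and a correlation coefficient for speed and shortest distance), one natural reading of the formula is a bivariate Gaussian density evaluated at (v, D). The sketch below implements that reading as an assumption, not as the patented expression.

```python
import math

def distribution_probability(v, D, mu_v, mu_D, sigma_v, sigma_D, rho):
    """Bivariate Gaussian density over walking speed v and shortest trajectory distance D."""
    zv = (v - mu_v) / sigma_v
    zD = (D - mu_D) / sigma_D
    one_minus_rho2 = 1.0 - rho ** 2
    norm = 1.0 / (2.0 * math.pi * sigma_v * sigma_D * math.sqrt(one_minus_rho2))
    exponent = -(zv ** 2 - 2.0 * rho * zv * zD + zD ** 2) / (2.0 * one_minus_rho2)
    return norm * math.exp(exponent)
```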

Optionally, the coupling module further includes: a judgment unit, configured to, before the second calculation unit calculates the distribution probability of each segmented movement trajectory in each camera according to the walking speed and the shortest trajectory distance, determine, for each camera, whether the corresponding maximum distribution probability is smaller than a preset threshold; and an elimination unit, configured to eliminate the target trajectory curve of the corresponding camera if the corresponding maximum distribution probability is smaller than the preset threshold.
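A small illustration of the threshold check: cameras whose best candidate falls below a preset probability threshold contribute no curve to the coupled result. The threshold value and data layout below are illustrative assumptions.

```python
def filter_cameras(per_camera_candidates, threshold=0.05):
    """Drop cameras whose maximum distribution probability is below the preset threshold."""
    kept = {}
    for camera_id, candidates in per_camera_candidates.items():
        if candidates and max(prob for prob, _ in candidates) >= threshold:
            kept[camera_id] = candidates
    return kept
```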

It should be noted that the above modules may be implemented by software or by hardware. In the latter case, this may be achieved in, but is not limited to, the following ways: the above modules are all located in the same processor; or the above modules are located, in any combination, in different processors.

Embodiment 3

An embodiment of the present invention further provides a storage medium in which a computer program is stored, wherein the computer program is configured to perform, when run, the steps in any one of the above method embodiments.

Optionally, in this embodiment, the above storage medium may be configured to store a computer program for performing the following steps:

S1, acquiring a video set collected by a plurality of cameras, wherein the video set includes a plurality of videos, and each video corresponds to one camera;

S2, identifying the pedestrian frame in each video of the video set;

S3, generating a segmented movement trajectory of the target pedestrian based on the pedestrian frames;

S4, acquiring the spatial features of the plurality of cameras, coupling the plurality of segmented movement trajectories according to the spatial features, and generating the complete movement trajectory of the target pedestrian.
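Steps S1–S4 can be read as a simple pipeline. The sketch below only outlines that flow; the helpers passed in (a detector, a segment builder such as the adaptive Kalman tracker above, and the coupling stage) are hypothetical parameters, not the stored program itself.

```python
def track_pedestrian(videos_by_camera, detector, build_segment, couple):
    """videos_by_camera: {camera_id: iterable of frames}; detector returns pedestrian boxes per frame."""
    segments = {}
    for camera_id, frames in videos_by_camera.items():           # S1: one video per camera
        boxes_per_frame = [detector(frame) for frame in frames]  # S2: pedestrian boxes
        segments[camera_id] = build_segment(boxes_per_frame)     # S3: segmented trajectory
    return couple(segments)                                      # S4: couple into the complete trajectory
```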

Optionally, in this embodiment, the above storage medium may include, but is not limited to, various media capable of storing a computer program, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.

An embodiment of the present invention further provides an electronic device, including a memory and a processor. A computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any one of the above method embodiments.

Optionally, the above electronic device may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.

Optionally, in this embodiment, the above processor may be configured to perform, by means of a computer program, the following steps:

S1, acquiring a video set collected by a plurality of cameras, wherein the video set includes a plurality of videos, and each video corresponds to one camera;

S2, identifying the pedestrian frame in each video of the video set;

S3, generating a segmented movement trajectory of the target pedestrian based on the pedestrian frames;

S4, acquiring the spatial features of the plurality of cameras, coupling the plurality of segmented movement trajectories according to the spatial features, and generating the complete movement trajectory of the target pedestrian.

Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementations, which will not be repeated here.

The serial numbers of the above embodiments of the present application are for description only and do not represent the relative merits of the embodiments.

In the above embodiments of the present application, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.

In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and other division methods may be used in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. Furthermore, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, units, or modules, and may be electrical or take other forms.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.

The above are only preferred embodiments of the present application. It should be noted that those of ordinary skill in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also be regarded as falling within the scope of protection of the present application.

Claims (10)

1. A method of tracking a pedestrian trajectory, the method comprising:
acquiring a video set acquired by a plurality of cameras, wherein the video set comprises a plurality of videos, and each video corresponds to one camera;
identifying a pedestrian box in each video in the set of videos;
generating a segmented movement track of a target pedestrian based on the pedestrian frame;
and acquiring the spatial characteristics of the cameras, coupling a plurality of segmented movement tracks according to the spatial characteristics, and generating a complete movement track of the target pedestrian.
2. The method of claim 1, wherein generating a segmented movement trajectory of a target pedestrian based on the pedestrian frame comprises:
performing position tracking on the pedestrian frames to generate a preliminary movement track of a target pedestrian;
and updating the preliminary movement track by adopting a self-adaptive Kalman filtering model to generate the segmented movement track of the target pedestrian.
3. The method of claim 2, wherein updating the preliminary movement trajectory using an adaptive Kalman filter model to generate the segmented movement trajectory of the target pedestrian comprises:
iteratively updating the track vector of the preliminary movement track from a preset initial value until the last track vector of the preliminary movement track:
inputting the current track vector into the following prediction stage formula, and outputting the predicted value x̂_k|k−1 of the current track vector;
inputting x̂_k|k−1 into the following update stage formula, and outputting the updated value x̂_k of the current track vector;
wherein x̂_k|k−1 is the predicted value of the track vector at the current moment, the track vector comprising position, velocity and acceleration; A is the state transition matrix, used to describe the mathematical relationship between the track vector at the previous moment and the track vector at the current moment; x̂_k−1 is the updated value of the track vector at the previous moment; B is the system noise coefficient matrix; w_k is the system noise at the current moment; P_k|k−1 is the predicted value of the covariance matrix at the current moment; P_k−1 is the updated value of the covariance matrix at the previous moment; Q is the noise covariance matrix; R_k is the measurement noise, which can be adaptively adjusted by means of the Kalman gain at the previous moment; I is the identity matrix; H is the measurement matrix, used to describe the mathematical relationship between the measured quantity and the predicted quantity; K_k is the Kalman gain at the current moment; x̂_k is the updated value of the track vector; z_k is the measured quantity at the current moment; P_k is the updated value of the covariance matrix at the current moment; d_k is the update parameter; b is the forgetting factor; p is the pedestrian likelihood probability; and Ω is the region boundary of the captured scene;
and after all track vectors of the preliminary movement track are iterated, outputting the segmented movement track of the target pedestrian by adopting updated values of all track points.
4. The method of claim 1, wherein obtaining spatial features of the plurality of cameras, coupling a plurality of segmented movement trajectories according to the spatial features, generating a complete movement trajectory of the target pedestrian comprises:
calculating the walking speed of the target pedestrian and the track shortest distance between adjacent cameras according to the segmented moving track, wherein the spatial features comprise the track shortest distance;
calculating the distribution probability of each sectional moving track in each camera according to the walking speed and the track shortest distance;
and selecting the target track curve with the maximum probability for each camera, and coupling the target track curves of the plurality of cameras to generate the complete moving track of the target pedestrian.
5. The method of claim 4, wherein calculating a trajectory shortest distance between adjacent cameras from the segmented movement trajectory comprises:
the Fréchet distance D between a first track curve and a second track curve of the target pedestrian in the first camera is calculated by adopting the following formula:
wherein Ω₁ represents the identifiable region of the first camera, Ω₂ represents the identifiable region of the second camera, Ω represents the pedestrian-passable region, P represents the first track curve of the segmented moving track identified by the first camera, Q represents the second track curve of any segmented moving track identified by the second camera, inf represents the infimum, d is the Euclidean distance, t is time, α(t) and β(t) are the position description functions for each possible pair of points varying with time, taking values in [0, 1], and d(P(α(t)), Q(β(t))) is the distance between the corresponding points on the two track curves P and Q.
6. The method of claim 4, wherein calculating a distribution probability for each segmented movement track in each camera based on the travel speed and the track shortest distance comprises:
for each camera, the distribution probability p is calculated using the following formula:
wherein μ_v is the mean value of the walking speeds of all pedestrians identified by the current camera, μ_D is the mean value of the track shortest distances of all the segmented moving tracks identified by the current camera, σ_v is the standard deviation of the speed, σ_D is the standard deviation of the distance, ρ is the correlation coefficient, v is the walking speed of the target pedestrian in the current segmented moving track, and D is the track shortest distance of the current segmented moving track.
7. The method of claim 4, wherein prior to calculating the probability of distribution of each segmented movement track in each camera from the travel speed and the track shortest distance, the method further comprises:
judging, for each camera, whether the corresponding maximum distribution probability is smaller than a preset threshold value;
and if the corresponding maximum distribution probability is smaller than a preset threshold value, eliminating the target track curve of the corresponding camera.
8. A tracking device for a pedestrian track, comprising:
an acquisition module, configured to acquire a video set collected by a plurality of cameras, wherein the video set comprises a plurality of videos, and each video corresponds to one camera;
the identification module is used for identifying pedestrian frames in each video in the video set;
the generation module is used for generating a segmented movement track of the target pedestrian based on the pedestrian frame;
The coupling module is used for acquiring the spatial characteristics of the cameras, coupling the plurality of segmented movement tracks according to the spatial characteristics, and generating the complete movement track of the target pedestrian.
9. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus; and wherein:
a memory for storing a computer program;
a processor for performing the steps of the method of any one of claims 1 to 7 by running a program stored on a memory.
10. A storage medium comprising a stored program, wherein the program when run performs the steps of the method of any of the preceding claims 1 to 7.
CN202310712202.8A 2023-06-15 2023-06-15 Pedestrian trajectory tracking method and device, electronic equipment and storage medium Pending CN116843726A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310712202.8A CN116843726A (en) 2023-06-15 2023-06-15 Pedestrian trajectory tracking method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116843726A true CN116843726A (en) 2023-10-03

Family

ID=88169740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310712202.8A Pending CN116843726A (en) 2023-06-15 2023-06-15 Pedestrian trajectory tracking method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116843726A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914854A (en) * 2014-03-24 2014-07-09 河海大学 Method for target correlation and track generation of image sequence
CN108629791A (en) * 2017-03-17 2018-10-09 北京旷视科技有限公司 Pedestrian tracting method and device and across camera pedestrian tracting method and device
AU2018102199A4 (en) * 2018-11-13 2021-01-28 Beijing Didi Infinity Technology And Development Co., Ltd. Methods and systems for color point cloud generation
CN110232712A (en) * 2019-06-11 2019-09-13 武汉数文科技有限公司 Indoor occupant positioning and tracing method and computer equipment
CN115171185A (en) * 2022-07-01 2022-10-11 中铁第四勘察设计院集团有限公司 Cross-camera face tracking method, device and medium based on time-space correlation
CN115984318A (en) * 2023-03-20 2023-04-18 宝略科技(浙江)有限公司 Cross-camera pedestrian tracking method based on feature maximum association probability

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘玉杰;窦长红;赵其鲁;李宗民;: "基于状态预测和运动结构的在线多目标跟踪", 计算机辅助设计与图形学学报, no. 02, 15 February 2018 (2018-02-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237418A (en) * 2023-11-15 2023-12-15 成都航空职业技术学院 Moving object detection method and system based on deep learning
CN119091394A (en) * 2023-12-01 2024-12-06 宁夏交投高速公路管理有限公司 Dynamic small target tracking and detection method and system on highway pavement based on improved YOLOv5 and ByteTrack

Similar Documents

Publication Publication Date Title
CN109446942B (en) Target tracking method, device and system
US10019637B2 (en) Method and system for moving object detection with single camera
JP6904346B2 (en) Image processing equipment, image processing systems, and image processing methods, and programs
CN108027877B (en) System and method for non-obstacle area detection
CN108388879B (en) Object detection method, device and storage medium
CN104732187B (en) A kind of method and apparatus of image trace processing
CN110717445B (en) Front vehicle distance tracking system and method for automatic driving
US20150146917A1 (en) Method and system for video-based vehicle tracking adaptable to traffic conditions
CN106559645B (en) Camera-based monitoring method, system and device
US20150104062A1 (en) Probabilistic neural network based moving object detection method and an apparatus using the same
US12094252B2 (en) Occlusion-aware prediction of human behavior
CN116843726A (en) Pedestrian trajectory tracking method and device, electronic equipment and storage medium
WO2016149938A1 (en) Video monitoring method, video monitoring system and computer program product
JP6868061B2 (en) Person tracking methods, devices, equipment and storage media
CN103366155B (en) Temporal coherence in unobstructed pathways detection
CN108229456A (en) Method for tracking target and device, electronic equipment, computer storage media
CN105631418A (en) People counting method and device
CN110111565A (en) A kind of people's vehicle flowrate System and method for flowed down based on real-time video
CN113486850A (en) Traffic behavior recognition method and device, electronic equipment and storage medium
CN109522814B (en) A kind of target tracking method and device based on video data
CN110782433A (en) Dynamic information violent parabolic detection method and device based on time sequence and storage medium
CN104517095A (en) Head division method based on depth image
CN112381132A (en) Target object tracking method and system based on fusion of multiple cameras
KR20170006356A (en) Method for customer analysis based on two-dimension video and apparatus for the same
CN111814767B (en) Fall detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination