
WO2021043168A1 - Person re-identification network training method and person re-identification method and apparatus - Google Patents

Person re-identification network training method and person re-identification method and apparatus

Info

Publication number
WO2021043168A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pedestrian
training
anchor point
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2020/113041
Other languages
French (fr)
Chinese (zh)
Inventor
魏龙辉
张天宇
谢凌曦
田奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of WO2021043168A1
Current legal status: Ceased

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Definitions

  • Person re-identification (ReID) may also be called pedestrian re-identification.
  • Pedestrian re-identification is a technique that uses computer vision to determine whether a specific pedestrian is present in an image or video sequence.
  • Step 1: Obtain training data.
  • The training data in step 1 includes M training images and the annotation data of the M training images, where M is an integer greater than 1.
  • Step 2: Initialize the network parameters of the pedestrian re-identification network to obtain initial values of the network parameters.
  • Step 3: Input a batch of training images from the M training images into the pedestrian re-identification network for feature extraction, and obtain the feature vector of each training image in the batch.
  • Step 4: Determine the function value of the loss function according to the feature vectors of the batch of training images.
  • Each training image includes a pedestrian.
  • The annotation data of each training image includes the bounding box of the pedestrian in that image and the pedestrian identification information; different pedestrians correspond to different pedestrian identification information.
  • Among the M training images, training images with the same pedestrian identification information come from the same image capturing device.
  • The M training images may be all the training images used to train the pedestrian re-identification network. During training, a batch of the M training images can be selected and input into the pedestrian re-identification network for processing each time.
  • The aforementioned image capturing device may specifically be any device capable of acquiring images of pedestrians, such as a video camera or a still camera.
  • The pedestrian identification information in step 1 may also be called pedestrian identity information; it is information used to identify the identity of a pedestrian.
  • Each pedestrian can correspond to unique pedestrian identification information.
  • The pedestrian identification information may specifically be a pedestrian identity (ID), that is, a unique ID can be assigned to each pedestrian.
  • the network parameters of the pedestrian re-identification network can be randomly set to obtain the initial values of the network parameters of the pedestrian re-identification network.
  • The batch of training images may include N anchor point images, where the N anchor point images are any N training images in the batch, and each of the N anchor point images corresponds to one most difficult positive sample image, one first most difficult negative sample image, and one second most difficult negative sample image.
  • The most difficult positive sample image, the first most difficult negative sample image, and the second most difficult negative sample image corresponding to each anchor point image are described below.
  • The most difficult positive sample image corresponding to each anchor point image is the training image in the batch that has the same pedestrian identification information as the anchor point image and whose feature vector is farthest from the feature vector of the anchor point image.
  • The first most difficult negative sample image corresponding to each anchor point image is the training image in the batch that comes from the same image capturing device as the anchor point image, has different pedestrian identification information, and whose feature vector is closest to the feature vector of the anchor point image.
  • The second most difficult negative sample image corresponding to each anchor point image is the training image in the batch that comes from a different image capturing device than the anchor point image, has different pedestrian identification information, and whose feature vector is closest to the feature vector of the anchor point image.
  • N is a positive integer less than M.
  • The function value of the first loss function can be directly used as the function value of the loss function in step 4.
  • The function value of each first loss function is the sum of the first difference and the second difference corresponding to each anchor point image.
  • The second most difficult negative sample distance corresponding to each anchor point image is the distance between the feature vector of the second most difficult negative sample image corresponding to that anchor point image and the feature vector of the anchor point image.
  • The first most difficult negative sample distance corresponding to each anchor point image is the distance between the feature vector of the first most difficult negative sample image corresponding to that anchor point image and the feature vector of the anchor point image.
  • The most difficult negative sample images from different image capturing devices and from the same image capturing device are both considered when constructing the loss function, and the first difference and the second difference are made as small as possible during training, which eliminates as much as possible the interference of the image capturing device's own characteristics on the image information, so that the trained pedestrian re-identification network can extract features from images more accurately.
  • The network parameters of the pedestrian re-identification network are optimized to make the first difference and the second difference as small as possible, that is, to minimize the difference between the most difficult positive sample distance and the second most difficult negative sample distance, and the difference between the second most difficult negative sample distance and the first most difficult negative sample distance, so that the pedestrian re-identification network can distinguish the most difficult positive image from the most difficult negative images as well as possible.
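As an illustrative sketch (not the patent's exact formulation), the per-anchor hardest-sample mining and the first and second differences described in the bullets above can be written as follows; all function and variable names here are assumptions:

```python
import numpy as np

def camera_aware_hardest_loss(feats, pids, cams, anchor_idx):
    """Sketch of one anchor's first-loss-function value as described above.
    feats: (B, D) feature vectors of a batch; pids: (B,) pedestrian IDs;
    cams: (B,) image-capturing-device IDs; anchor_idx: index of the anchor."""
    a = feats[anchor_idx]
    d = np.linalg.norm(feats - a, axis=1)          # distances to the anchor

    same_id = (pids == pids[anchor_idx])
    same_cam = (cams == cams[anchor_idx])
    not_self = np.arange(len(pids)) != anchor_idx

    # Hardest positive: same ID, farthest feature vector.
    d_pos = d[same_id & not_self].max()
    # First hardest negative: same device, different ID, closest feature vector.
    d_neg1 = d[~same_id & same_cam].min()
    # Second hardest negative: different device, different ID, closest.
    d_neg2 = d[~same_id & ~same_cam].min()

    first_diff = d_pos - d_neg2      # hardest positive vs. second hardest negative
    second_diff = d_neg2 - d_neg1    # second vs. first hardest negative
    return first_diff + second_diff  # sum of the two differences
```

Minimizing this sum pulls the hardest positive closer than the cross-device hardest negative while also shrinking the gap between same-device and cross-device negatives.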
  • The pedestrian re-identification network meeting the preset requirements includes: when at least one of the following conditions (1) to (3) is met, the pedestrian re-identification network meets the preset requirements:
  • the number of training times of the pedestrian re-identification network is greater than or equal to the preset number
  • the value range of the foregoing preset threshold is [0, 0.01].
  • The function value of the loss function being less than or equal to a preset threshold includes: the first difference is less than a first preset threshold, and the second difference is less than a second preset threshold.
  • Both the first preset threshold and the second preset threshold may be 0.1.
  • The images of each image capturing device can be annotated separately, regardless of whether the same pedestrian appears across different image capturing devices. Specifically, if multiple images captured by image capturing device A include pedestrian X, then, after annotating the training images captured by image capturing device A, there is no need to look for images of pedestrian X among the images captured by other image capturing devices. This avoids the process of finding the same pedestrian across images captured by different image capturing devices, saving a great deal of annotation time and reducing annotation complexity.
  • A pedestrian re-identification method includes: acquiring an image to be recognized; using a pedestrian re-identification network to process the image to be recognized to obtain its feature vector, where the pedestrian re-identification network is obtained by training according to the training method of the first aspect; and obtaining the recognition result of the image to be recognized according to its feature vector and the feature vectors of existing pedestrian images.
  • Comparing the feature vector of the image to be recognized with the feature vectors of the existing pedestrian images to obtain the recognition result includes: outputting the target pedestrian image and attribute information of the target pedestrian image.
  • The target pedestrian image may be the pedestrian image, among the existing pedestrian images, whose feature vector is most similar to the feature vector of the image to be recognized; the attribute information of the target pedestrian image includes its shooting time and shooting location.
  • the attribute information of the target pedestrian image may also include the identity information of the pedestrian and the like.
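The retrieval step described in these bullets amounts to a nearest-neighbour search in feature space. A minimal sketch, with all names (gallery layout, metadata fields) being assumptions rather than the patent's API:

```python
import numpy as np

def retrieve(query_feat, gallery_feats, gallery_meta):
    """Find the existing pedestrian image whose feature vector is closest
    to the query image's feature vector and return its attribute info."""
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    best = int(np.argmin(dists))          # most similar gallery entry
    return best, gallery_meta[best]       # e.g. {"time": ..., "location": ...}
```

In practice the gallery feature vectors would be precomputed by the trained pedestrian re-identification network; cosine similarity could be used instead of Euclidean distance.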
  • In a fourth aspect, a pedestrian re-identification device is provided, which includes modules for executing the method in the second aspect.
  • A training device for a pedestrian re-identification network includes: a memory for storing a program; and a processor for executing the program stored in the memory. When the program stored in the memory is executed, the processor is configured to execute the method in the first aspect.
  • In a sixth aspect, a pedestrian re-identification device includes: a memory for storing a program; and a processor for executing the program stored in the memory. When the program stored in the memory is executed, the processor is configured to execute the method in the second aspect.
  • the computer device may specifically be a server or a cloud device or the like.
  • an electronic device includes the pedestrian re-identification device of the fourth aspect.
  • the electronic device may specifically be a mobile terminal (for example, a smart phone), a tablet computer, a notebook computer, an augmented reality/virtual reality device, a vehicle-mounted terminal device, and so on.
  • a computer-readable storage medium stores program code, and the program code includes instructions for executing steps in any one of the first aspect or the second aspect.
  • In an eleventh aspect, a chip is provided, which includes a processor and a data interface.
  • The processor reads instructions stored in a memory through the data interface and executes the method in any one of the first aspect or the second aspect.
  • The method in the first aspect may specifically refer to the method in the first aspect or in any one of the various implementations of the first aspect.
  • The method in the second aspect may specifically refer to the method in the second aspect or in any one of the various implementations of the second aspect.
  • FIG. 1 is a schematic structural diagram of a system architecture provided by an embodiment of the present application.
  • FIG. 11 is a schematic block diagram of a training device for a pedestrian re-identification network according to an embodiment of the present application.
  • the intelligent monitoring system can collect the images of pedestrians captured by various image capturing devices to form an image library.
  • The pedestrian re-identification network may also be called a pedestrian re-identification model.
  • the pedestrian re-recognition network can be trained using the images in the image library to obtain a trained pedestrian re-recognition network.
  • a neural network can be composed of neural units.
  • A neural unit can refer to an arithmetic unit that takes x_s and an intercept of 1 as inputs.
  • The output of the arithmetic unit can be as shown in formula (1).
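Formula (1) is not reproduced in this text. The standard form of such a neural unit's output, assumed here from the surrounding description (inputs x_s, an intercept term, weights W_s, and an activation function f), is:

```latex
h_{W,b}(x) = f\left(W^{\top}x + b\right) = f\left(\sum_{s=1}^{n} W_s x_s + b\right) \tag{1}
```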
  • The target model/rule 101 can be used to implement the pedestrian re-identification method of the embodiments of the present application: a pedestrian image (an image requiring pedestrian recognition) is input into the target model/rule 101, feature vectors are extracted from the image, and pedestrian recognition is performed based on the extracted feature vectors to determine the pedestrian recognition result.
  • the target model/rule 101 in the embodiment of the present application may specifically be a neural network. It should be noted that, in actual applications, the training data maintained in the database 130 may not all come from the collection of the data collection device 160, and may also be received from other devices.
  • The target model/rule 101 trained by the training device 120 can be applied to different systems or devices, such as the execution device 110 shown in FIG. 1, which can be a terminal such as a mobile phone, a tablet computer, a notebook computer, an augmented reality (AR)/virtual reality (VR) device, or an in-vehicle terminal, and can also be a server or a cloud.
  • the execution device 110 is configured with an input/output (input/output, I/O) interface 112 for data interaction with external devices.
  • the user can input data to the I/O interface 112 through the client device 140.
  • the input data in the embodiment of the present application may include: a pedestrian image input by the client device.
  • the client device 140 here may specifically be a monitoring device.
  • When the multiple weight matrices have the same size (rows × columns), the convolution feature maps extracted by those weight matrices also have the same size; the multiple extracted convolution feature maps of the same size are then merged to form the output of the convolution operation.
  • the arithmetic circuit 503 includes multiple processing units (process engines, PE). In some implementations, the arithmetic circuit 503 is a two-dimensional systolic array. The arithmetic circuit 503 may also be a one-dimensional systolic array or other electronic circuit capable of performing mathematical operations such as multiplication and addition. In some implementations, the arithmetic circuit 503 is a general-purpose matrix processor.
  • each layer in the convolutional neural network shown in FIG. 2 may be executed by the arithmetic circuit 503 or the vector calculation unit 507.
  • an embodiment of the present application provides a system architecture 300.
  • the system architecture includes a local device 301, a local device 302, an execution device 210 and a data storage system 250, where the local device 301 and the local device 302 are connected to the execution device 210 through a communication network.
  • Each local device can represent any computing device, such as personal computers, computer workstations, smart phones, tablets, smart cameras, smart cars or other types of cellular phones, media consumption devices, wearable devices, set-top boxes, game consoles, etc.
  • the local device of each user can interact with the execution device 210 through a communication network of any communication mechanism/communication standard.
  • the communication network can be a wide area network, a local area network, a point-to-point connection, etc., or any combination thereof.
  • The pedestrian re-identification network can be trained using annotation data from single image capturing devices to obtain a trained pedestrian re-identification network.
  • The trained pedestrian re-identification network can process a pedestrian image to obtain its feature vector. Then, by comparing the feature vector of the pedestrian image with the feature vectors in the image library, the person being sought can be found. Specifically, through feature comparison, the target pedestrian image most similar in feature vector to the pedestrian image can be found, and basic information such as the shooting time and location of the target pedestrian image can be output.
  • Each pedestrian appears in only one image capturing device (or one group of image capturing devices). In this way, after pedestrian detection and tracking are used to obtain pedestrian images from the video, only a small amount of human labor is needed to associate the several pictures of the same person in nearby frames to form annotations. Moreover, the annotations of each image capturing device are relatively independent, and the pedestrian IDs of different image capturing devices do not overlap. By setting different collection time periods for different image capturing devices, the number of people reappearing in the video captured by each device can be reduced, thereby meeting the requirement of annotating data per single image capturing device.
  • The training data in step 1002 includes M training images and the annotation data of the M training images, where M is an integer greater than 1.
  • Each training image includes a pedestrian.
  • The annotation data includes the bounding box and pedestrian identification information of the pedestrian in each training image; different pedestrians correspond to different pedestrian identification information.
  • Training images with the same pedestrian identification information are captured by the same image capturing device.
  • The aforementioned image capturing device may specifically be any device capable of acquiring images of pedestrians, such as a video camera or a still camera.
  • Input a batch of training images from the M training images into the pedestrian re-identification network for feature extraction, and obtain the feature vector of each training image in the batch.
  • The batch of training images may include N anchor point images, where the N anchor point images are any N training images in the batch, and each anchor point image corresponds to one most difficult positive sample image, one first most difficult negative sample image, and one second most difficult negative sample image; N is a positive integer less than M.
  • The most difficult positive sample image corresponding to each anchor point image is the training image in the batch that has the same pedestrian identification information as the anchor point image and whose feature vector is farthest from the feature vector of the anchor point image.
  • The second most difficult negative sample image corresponding to each anchor point image is the training image in the batch that comes from a different image capturing device than the anchor point image, has different pedestrian identification information, and whose feature vector is closest to the feature vector of the anchor point image.
  • The function value of each of the N first loss functions is calculated from the first difference and the second difference corresponding to each of the N anchor point images.
  • The second difference corresponding to each anchor point image is the difference between the second most difficult negative sample distance and the first most difficult negative sample distance corresponding to that anchor point image.
  • In step 1007, when the pedestrian re-identification network meets at least one of conditions (1) to (3), it can be determined that the network meets the preset requirements, step 1008 is executed, and the training process of the pedestrian re-identification network ends.
  • If the pedestrian re-identification network does not meet any of conditions (1) to (3), it has not yet met the preset requirements and needs further training, that is, steps 1004 to 1007 are re-executed until a pedestrian re-identification network meeting the preset requirements is obtained.
  • The most difficult negative sample images from different image capturing devices and from the same image capturing device are both considered when constructing the loss function, and the first difference and the second difference are made as small as possible during training, which eliminates as much as possible the interference of the image capturing device's own characteristics on the image information, so that the trained pedestrian re-identification network can extract features from images more accurately.
  • The pedestrian re-identification network in this application can use an existing residual network (for example, ResNet50) as the network backbone, remove the last fully connected layer, add a global average pooling layer after the last residual block (ResBlock), and output a feature vector of 2048 dimensions (or another size) as the network model's output.
  • The input training images can be scaled to 256×128 pixels, and the adaptive moment estimation (Adam) optimizer can be used to train the network parameters, with the learning rate set to 2×10^-4. After 100 epochs of training, the learning rate decays exponentially until, after 200 epochs, it reaches 2×10^-7, at which point training can stop.
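The schedule described above (constant 2×10^-4 for 100 epochs, then exponential decay to 2×10^-7 by epoch 200) can be sketched as a small function; the exact decay law between epochs 100 and 200 is an assumption consistent with the stated endpoints:

```python
def learning_rate(epoch, base_lr=2e-4, final_lr=2e-7,
                  decay_start=100, decay_end=200):
    """Constant base_lr up to decay_start, then exponential decay so that
    the rate reaches final_lr exactly at decay_end (illustrative sketch)."""
    if epoch <= decay_start:
        return base_lr
    # Exponential interpolation between base_lr and final_lr.
    frac = (epoch - decay_start) / (decay_end - decay_start)
    return base_lr * (final_lr / base_lr) ** frac
```

In a framework such as PyTorch this could equivalently be realized with a per-epoch multiplicative decay factor of (final_lr / base_lr)^(1/100) applied after epoch 100.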
  • the memory 9001 may be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
  • the memory 9001 may store a program.
  • the processor 9002 is configured to execute each step of the pedestrian re-identification network training method in the embodiment of the present application.
  • the bus 9004 may include a path for transferring information between various components of the device 9000 (for example, the memory 9001, the processor 9002, and the communication interface 9003).
  • the acquiring unit 10001 may perform the foregoing step 6001, and the identifying unit 10002 may perform the foregoing step 6002.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • When the functions are implemented in the form of a software functional unit and sold or used as an independent product, they can be stored in a computer-readable storage medium.
  • The technical solution of the present application, in essence, or the part that contributes over the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Provided by the present application are a person re-identification network training method and a person re-identification method and apparatus. The present application relates to the field of artificial intelligence, and relates in particular to the field of computer vision. The method comprises: obtaining M training images and annotation data of the M training images; performing initialization processing on a network parameter of a person re-identification network so as to obtain an initial value of the network parameter of the person re-identification network; inputting a batch of training images among the M training images into the person re-identification network to perform feature extraction so as to obtain a feature vector of each training image in the batch of training images; then determining a loss function according to the feature vectors of the batch of training images; and obtaining, according to a function value of the loss function, a person re-identification network that meets a preset requirement. In the present application, a person re-identification network having good performance may be trained when data is labelled using a single image photographing device.

Description

Pedestrian re-identification network training method, pedestrian re-identification method and device

This application claims priority to the Chinese patent application No. 201910839017.9, filed with the Chinese Patent Office on September 5, 2019 and entitled "Pedestrian Re-identification Network Training Method, Pedestrian Re-identification Method and Device", the entire contents of which are incorporated herein by reference.

Technical Field

This application relates to the field of computer vision, and more specifically, to a training method for a pedestrian re-identification network, and a pedestrian re-identification method and device.

Background

Computer vision is an inseparable part of various intelligent/autonomous systems in application fields such as manufacturing, inspection, document analysis, medical diagnosis, and the military. It is the study of how to use cameras/image capturing devices and computers to obtain the data and information we need about a photographed subject. Vividly put, it means installing eyes (cameras/image capturing devices) and a brain (algorithms) on a computer to replace human eyes in identifying, tracking, and measuring targets, so that the computer can perceive its environment. Because perception can be seen as extracting information from sensory signals, computer vision can also be seen as the science of how to make artificial systems "perceive" from images or multi-dimensional data. In general, computer vision uses various imaging systems in place of the visual organs to obtain input information, and then the computer, in place of the brain, processes and interprets that input. The ultimate research goal of computer vision is to enable computers to observe and understand the world through vision as humans do, and to adapt to the environment autonomously.

The surveillance field often involves the problem of person re-identification. Person re-identification (ReID), also called pedestrian re-identification, is a technique that uses computer vision to determine whether a specific pedestrian is present in an image or video sequence.

Traditional schemes generally use training data together with annotation data that spans image capturing devices to train the pedestrian re-identification network, so that the network can distinguish images of different pedestrians and thereby perform pedestrian recognition. However, the training data in traditional schemes includes images of the same pedestrian captured by different image capturing devices; such images must be manually annotated so that the images of the same pedestrian captured by different devices are associated (that is, pedestrians are associated across image capturing devices). In many scenarios, associating pedestrians across image capturing devices is very difficult, and the difficulty rises sharply as the number of people and the number of image capturing devices increase. The economic cost of such data annotation is high, and it is very time-consuming.

发明内容Summary of the invention

本申请提供一种行人再识别网络的训练方法、行人再识别方法和装置,以在单图像拍摄设备标注数据情况下训练出性能较好的行人再识别网络。This application provides a training method for a pedestrian re-identification network, and a pedestrian re-identification method and apparatus, so as to train a pedestrian re-identification network with good performance when annotation data are available only within single image capturing devices.

第一方面,提供了一种行人再识别网络的训练方法,该方法包括:In the first aspect, a training method for a pedestrian re-identification network is provided, the method includes:

步骤1:获取训练数据;Step 1: Obtain training data;

其中,步骤1中的训练数据包括M个训练图像和M个训练图像的标注数据,M为大于1的整数;Wherein, the training data in step 1 includes M training images and labeled data of M training images, and M is an integer greater than 1;

步骤2:对行人再识别网络的网络参数进行初始化处理,以得到行人再识别网络的网络参数的初始值;Step 2: Initialize the network parameters of the pedestrian re-identification network to obtain the initial values of the network parameters of the pedestrian re-identification network;

重复执行下面的步骤3至步骤5,直到行人再识别网络满足预设要求;Repeat the following steps 3 to 5 until the pedestrian re-identification network meets the preset requirements;

步骤3:将M个训练图像中的一批训练图像输入到行人再识别网络进行特征提取,得到一批训练图像中的每个训练图像的特征向量;Step 3: Input a batch of training images from the M training images to the pedestrian recognition network for feature extraction, and obtain the feature vector of each training image in the batch of training images;

步骤4:根据一批训练图像的特征向量确定损失函数的函数值;Step 4: Determine the function value of the loss function according to the feature vector of a batch of training images;

步骤5:根据损失函数的函数值对行人再识别网络的网络参数进行更新。Step 5: Update the network parameters of the pedestrian re-identification network according to the function value of the loss function.
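Steps 1 to 5 above can be sketched as a generic training loop. The sketch below is a minimal, hypothetical illustration: the callables `init_params`, `extract_features`, `compute_loss` and `update_params` are placeholders standing in for the network described in this application, not its actual implementation:

```python
import random

def train_reid_network(training_images, init_params, extract_features,
                       compute_loss, update_params,
                       batch_size=4, max_iters=100, loss_threshold=0.01):
    """Sketch of steps 2-5: initialize the network parameters, then
    repeat feature extraction, loss computation and parameter updates
    until a preset requirement (here: small loss or enough iterations)."""
    params = init_params()                       # step 2: initialization
    loss = float("inf")
    for _ in range(max_iters):                   # preset requirement (1)
        # step 3: select a batch of the M training images, extract features
        batch = random.sample(training_images, batch_size)
        feats = [extract_features(params, img) for img in batch]
        # step 4: loss function value from the batch's feature vectors
        loss = compute_loss(feats, batch)
        # step 5: update the network parameters from the loss value
        params = update_params(params, loss)
        if loss <= loss_threshold:               # preset requirement (2)
            break
    return params, loss
```

The loop returns once any stopping requirement is met; in a real setting the update step would be gradient-based rather than the abstract callable used here.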

在上述步骤1中,在训练数据的M个训练图像中,每个训练图像包括行人,每个训练图像的标注数据包括每个训练图像中的行人所在的包围框和行人标识信息,不同的行人对应不同的行人标识信息,在M个训练图像中,具有相同的行人标识信息的训练图像来自于同一图像拍摄设备。该M个训练图像可以是对行人再识别网络进行训练时采用的所有的训练图像,在具体训练过程,可以每次选择该M个训练图像中的一批训练图像输入到行人再识别网络中进行处理。In the above step 1, among the M training images of the training data, each training image includes a pedestrian, and the annotation data of each training image includes the bounding box in which the pedestrian is located and pedestrian identification information; different pedestrians correspond to different pedestrian identification information, and among the M training images, training images with the same pedestrian identification information come from the same image capturing device. The M training images may be all the training images used for training the pedestrian re-identification network; in the specific training process, a batch of the M training images may be selected each time and input into the pedestrian re-identification network for processing.

上述图像拍摄设备具体可以是摄像机、照相机等能够获取行人图像的设备。The aforementioned image capturing device may specifically be a device capable of acquiring images of pedestrians, such as a video camera and a camera.

上述步骤1中的行人标识信息也可以称为行人身份标识信息,是用于表示标识行人身份的一种信息,每个行人可以对应唯一的行人标识信息,该行人标识信息的表示方式有多种,只要能够指示行人的身份信息即可,例如,该行人标识信息具体可以是行人身份(identity,ID),也就是说,可以为每一个行人分配一个唯一的ID。The pedestrian identification information in step 1 above may also be called pedestrian identity information; it is information used to indicate the identity of a pedestrian, and each pedestrian may correspond to unique pedestrian identification information. The pedestrian identification information may be expressed in many ways, as long as it can indicate the identity of the pedestrian; for example, it may specifically be a pedestrian identity (ID), that is, a unique ID may be assigned to each pedestrian.

在上述步骤2中可以随机设置行人再识别网络的网络参数,得到行人再识别网络的网络参数的初始值。In step 2 above, the network parameters of the pedestrian re-identification network can be randomly set to obtain the initial values of the network parameters of the pedestrian re-identification network.

在上述步骤3中,上述一批训练图像可以包括N个锚点图像,其中,该N个锚点图像是上述一批训练图像中的任意N个训练图像,该N个锚点图像中的每个锚点图像对应一个最难正样本图像,一个第一最难负样本图像和一个第二最难负样本图像。In the above step 3, the above batch of training images may include N anchor point images, where the N anchor point images are any N training images in the above batch of training images, and each of the N anchor point images Each anchor point image corresponds to a most difficult positive sample image, a first most difficult negative sample image and a second most difficult negative sample image.

下面对每个锚点图像对应的最难正样本图像,第一最难负样本图像和第二最难负样本图像进行说明。The following describes the most difficult positive sample image corresponding to each anchor point image, the first most difficult negative sample image, and the second most difficult negative sample image.

每个锚点图像对应的最难正样本图像:上述一批训练图像中与每个锚点图像的行人标识信息相同,并且与每个锚点图像的特征向量之间的距离最远的训练图像;The hardest positive sample image corresponding to each anchor point image: the training image in the batch that has the same pedestrian identification information as the anchor point image and whose feature vector is farthest from the feature vector of the anchor point image;

每个锚点图像对应的第一最难负样本图像:上述一批训练图像中与每个锚点图像来自于同一图像拍摄设备,并与每个锚点图像的行人标识信息不同且与每个锚点图像的特征向量之间的距离最近的训练图像;The first hardest negative sample image corresponding to each anchor point image: the training image in the batch that comes from the same image capturing device as the anchor point image, has pedestrian identification information different from that of the anchor point image, and whose feature vector is closest to the feature vector of the anchor point image;

每个锚点图像对应的第二最难负样本图像:上述一批训练图像中与每个锚点图像来自不同图像拍摄设备,并与每个锚点图像的行人标识信息不同且与每个锚点图像的特征向量之间的距离最近的训练图像。The second hardest negative sample image corresponding to each anchor point image: the training image in the batch that comes from a different image capturing device than the anchor point image, has pedestrian identification information different from that of the anchor point image, and whose feature vector is closest to the feature vector of the anchor point image.
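The three definitions above can be made concrete with a small mining routine. This is an illustrative sketch under assumed data structures (each sample is a dict with a feature vector `feat`, pedestrian ID `pid`, and capture device `cam`); it is not the application's actual implementation:

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def mine_hard_samples(anchor, batch):
    """For one anchor image, return (hardest positive, first hardest
    negative, second hardest negative) from the batch:
      - hardest positive: same pedestrian ID, farthest feature vector;
      - first hardest negative: same camera, different ID, closest;
      - second hardest negative: different camera, different ID, closest."""
    # Same ID implies same camera in this application's setup (step 1),
    # so positives need no extra camera filter.
    positives = [s for s in batch
                 if s is not anchor and s['pid'] == anchor['pid']]
    neg_same_cam = [s for s in batch
                    if s['pid'] != anchor['pid'] and s['cam'] == anchor['cam']]
    neg_diff_cam = [s for s in batch
                    if s['pid'] != anchor['pid'] and s['cam'] != anchor['cam']]
    dist = lambda s: euclidean(s['feat'], anchor['feat'])
    hardest_pos = max(positives, key=dist)
    first_hard_neg = min(neg_same_cam, key=dist)
    second_hard_neg = min(neg_diff_cam, key=dist)
    return hardest_pos, first_hard_neg, second_hard_neg
```

A practical batch sampler would guarantee that every anchor has at least one sample in each of the three candidate sets.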

在上述步骤4中,损失函数的函数值是N个第一损失函数的函数值经过平均处理得到的。其中,上述N个第一损失函数中的每个第一损失函数的函数值是根据N个锚点图像中的每个锚点图像对应的第一差值和第二差值计算得到的。In the above step 4, the function value of the loss function is obtained by averaging the function values of the N first loss functions. Wherein, the function value of each first loss function in the above N first loss functions is calculated according to the first difference and the second difference corresponding to each of the N anchor point images.

上述N为正整数,上述N小于M。当N=1时,只有一个第一损失函数的函数值,此时可以直接将该第一损失函数的函数值作为步骤4中的损失函数的函数值。The above N is a positive integer, and N is less than M. When N=1, there is only one first-loss-function value, and in this case that value can be used directly as the function value of the loss function in step 4.

可选地,上述每个第一损失函数的函数值是每个锚点图像对应的第一差值和第二差值的和。Optionally, the function value of each of the foregoing first loss functions is the sum of the first difference and the second difference corresponding to each anchor point image.

可选地,上述每个第一损失函数的函数值是每个锚点图像对应的第一差值、第二差值和其他常数项的和。Optionally, the function value of each of the foregoing first loss functions is the sum of the first difference, the second difference, and other constant items corresponding to each anchor point image.

下面对第一差值和第二差值以及形成第一差值和第二差值的各个距离的含义进行说明。The meaning of the first difference value and the second difference value and the respective distances forming the first difference value and the second difference value will be described below.

每个锚点图像对应的第一差值:每个锚点图像对应的最难正样本距离与每个锚点图像对应的第二最难负样本距离的差;The first difference corresponding to each anchor image: the difference between the distance of the most difficult positive sample corresponding to each anchor image and the distance of the second most difficult negative sample corresponding to each anchor image;

每个锚点图像对应的第二差值:每个锚点图像对应的第二最难负样本距离与每个锚点图像对应的第一最难负样本距离的差;The second difference value corresponding to each anchor point image: the difference between the distance of the second most difficult negative sample corresponding to each anchor point image and the distance of the first most difficult negative sample corresponding to each anchor point image;

每个锚点图像对应的最难正样本距离:每个锚点图像对应的最难正样本图像的特征向量与每个锚点图像的特征向量的距离;The distance of the most difficult positive sample corresponding to each anchor point image: the distance between the feature vector of the most difficult positive sample image corresponding to each anchor point image and the feature vector of each anchor point image;

每个锚点图像对应的第二最难负样本距离:每个锚点图像对应的第二最难负样本图像的特征向量与每个锚点图像的特征向量的距离;The second most difficult negative sample distance corresponding to each anchor point image: the distance between the feature vector of the second most difficult negative sample image corresponding to each anchor point image and the feature vector of each anchor point image;

每个锚点图像对应的第一最难负样本距离:每个锚点图像对应的第一最难负样本图像的特征向量与每个锚点图像的特征向量的距离。The distance of the first most difficult negative sample corresponding to each anchor point image: the distance between the feature vector of the first most difficult negative sample image corresponding to each anchor point image and the feature vector of each anchor point image.
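Putting the distances and differences above together, one anchor's first loss value (in the "sum of the first difference and the second difference" variant) and the batch average can be sketched as follows; the function names are illustrative:

```python
def first_loss_value(d_pos, d_neg_same_cam, d_neg_cross_cam):
    """One anchor's first loss value as the sum of:
      - the first difference: hardest-positive distance minus
        second-hardest-negative (cross-camera) distance;
      - the second difference: second-hardest-negative distance minus
        first-hardest-negative (same-camera) distance."""
    first_diff = d_pos - d_neg_cross_cam
    second_diff = d_neg_cross_cam - d_neg_same_cam
    return first_diff + second_diff

def batch_loss(per_anchor_distances):
    """Average the N first-loss values over the anchors; with N = 1
    this is just the single first-loss value, as noted above.
    Each entry is (d_pos, d_neg_same_cam, d_neg_cross_cam)."""
    values = [first_loss_value(*d) for d in per_anchor_distances]
    return sum(values) / len(values)
```

The application also mentions a variant that adds further constant terms; in practical triplet-style losses each difference is typically combined with a margin, which the plain sum shown here omits.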

另外,在本申请中,几个训练图像来自于同一图像拍摄设备是指这几个训练图像是通过同一个图像拍摄设备进行拍摄得到的。In addition, in this application, that several training images are from the same image capturing device means that these several training images are captured by the same image capturing device.

本申请中,在构造损失函数的过程中考虑到了来自于不同图像拍摄设备和相同图像拍摄设备的最难负样本图像,并在训练过程中使得第一差值和第二差值尽可能的减小,从而能够尽可能的消除图像拍摄设备本身信息对图像信息的干扰,使得训练出来的行人再识别网络能够更准确的从图像中进行特征的提取。In this application, the hardest negative sample images from different image capturing devices and from the same image capturing device are both considered when constructing the loss function, and the first difference and the second difference are reduced as much as possible during training. This eliminates, as far as possible, the interference of information about the image capturing device itself with the image information, so that the trained pedestrian re-identification network can extract features from images more accurately.

具体地,在对行人再识别网络的训练过程中,通过优化行人再识别网络的网络参数使得第一差值和第二差值尽可能的小,从而使得最难正样本距离与第二最难负样本距离的差以及第二最难负样本距离和第一最难负样本距离的差尽可能的小,进而使得行人再识别网络能够尽可能的区分开最难正样本图像与第二最难负样本图像的特征,以及第二最难负样本图像与第一最难负样本图像的特征,从而使得训练出来的行人再识别网络能够更好更准确地对图像进行特征提取。Specifically, in the process of training the pedestrian re-identification network, the network parameters are optimized so that the first difference and the second difference are as small as possible, making the difference between the hardest positive sample distance and the second hardest negative sample distance, as well as the difference between the second hardest negative sample distance and the first hardest negative sample distance, as small as possible. This enables the network to distinguish, as far as possible, the features of the hardest positive sample image from those of the second hardest negative sample image, and the features of the second hardest negative sample image from those of the first hardest negative sample image, so that the trained pedestrian re-identification network can extract features from images better and more accurately.

结合第一方面,在第一方面的某些实现方式中,上述行人再识别网络满足预设要求,包括:在满足下列条件(1)至(3)中的至少一种时,行人再识别网络满足预设要求:With reference to the first aspect, in some implementations of the first aspect, the pedestrian re-identification network meets the preset requirements when at least one of the following conditions (1) to (3) is met:

(1)行人再识别网络的训练次数大于或者等于预设次数;(1) The number of training times of the pedestrian re-identification network is greater than or equal to the preset number;

(2)损失函数的函数值小于或者等于预设阈值;(2) The function value of the loss function is less than or equal to the preset threshold;

(3)行人再识别网络的识别性能达到预设要求。(3) The recognition performance of the pedestrian re-identification network meets the preset requirements.
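The three conditions can be combined as a simple disjunction; the numeric defaults below are illustrative placeholders, not values prescribed by this application:

```python
def meets_preset_requirements(train_iters, loss_value, recognition_score,
                              max_iters=10000, loss_threshold=0.01,
                              score_target=0.9):
    """Training stops when at least one of conditions (1)-(3) holds."""
    return (train_iters >= max_iters            # (1) enough training iterations
            or loss_value <= loss_threshold     # (2) loss value small enough
            or recognition_score >= score_target)  # (3) performance reached
```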

上述预设阈值可以根据经验来灵活设置,当预设阈值设置的过大时训练得到的行人再识别网络的行人识别效果可能不够好,而当预设阈值设置的过小时在训练时损失函数的函数值可能难以收敛。The above preset threshold can be set flexibly based on experience. When the preset threshold is set too large, the pedestrian recognition effect of the trained pedestrian re-identification network may not be good enough; when it is set too small, the function value of the loss function may be difficult to converge during training.

可选地,上述预设阈值的取值范围为[0,0.01]。Optionally, the value range of the foregoing preset threshold is [0, 0.01].

具体地,上述预设阈值的取值可以为0.01。Specifically, the value of the foregoing preset threshold may be 0.01.

结合第一方面,在第一方面的某些实现方式中,上述损失函数的函数值小于或者等于预设阈值,包括:第一差值小于第一预设阈值,第二差值小于第二预设阈值。With reference to the first aspect, in some implementations of the first aspect, the function value of the loss function being less than or equal to the preset threshold includes: the first difference is less than a first preset threshold, and the second difference is less than a second preset threshold.

上述第一预设阈值和第二预设阈值也可以根据经验来确定,当第一预设阈值和第二预设阈值设置的过大时训练得到的行人再识别网络的行人识别效果可能不够好,而当第一预设阈值和第二预设阈值设置的过小时在训练时损失函数的函数值可能难以收敛。The first preset threshold and the second preset threshold can also be determined based on experience. When they are set too large, the pedestrian recognition effect of the trained pedestrian re-identification network may not be good enough; when they are set too small, the function value of the loss function may be difficult to converge during training.

可选地,上述第一预设阈值的取值范围为[0,0.4]。Optionally, the value range of the foregoing first preset threshold is [0, 0.4].

可选地,上述第二预设阈值的取值范围为[0,0.4]。Optionally, the value range of the foregoing second preset threshold is [0, 0.4].

具体地,上述第一预设阈值和上述第二预设阈值均可以取0.1。Specifically, both the first preset threshold and the second preset threshold may be set to 0.1.

结合第一方面,在第一方面的某些实现方式中,上述M个训练图像为来自多个图像拍摄设备的训练图像,其中,来自不同图像拍摄设备的训练图像的标注数据是单独标记得到的。With reference to the first aspect, in some implementations of the first aspect, the above M training images are training images from multiple image capturing devices, where the annotation data of training images from different image capturing devices are obtained by labeling them separately.

也就是说,针对每个图像拍摄设备的图像可以单独进行标记,而不必考虑不同的图像拍摄设备之间是否会出现相同的行人,具体地,如果图像拍摄设备A拍摄的多个图像中包括行人X,那么,当标记完了图像拍摄设备A拍摄的训练图像之后,就不必再从其他的图像拍摄设备拍摄的图像中寻找是否包含行人X的图像,这样就避免了在不同的图像拍摄设备拍摄的图像中寻找同一行人的过程,可以节省大量的标记时间,减少标注的复杂度。That is to say, the images of each image capturing device can be labeled separately, without considering whether the same pedestrian appears across different image capturing devices. Specifically, if multiple images captured by image capturing device A include pedestrian X, then after the training images captured by device A have been labeled, there is no need to search the images captured by other devices for images containing pedestrian X. This avoids the process of searching for the same pedestrian among images captured by different image capturing devices, which saves a large amount of labeling time and reduces labeling complexity.

第二方面,提供了一种行人再识别方法,该方法包括:获取待识别图像;利用行人再识别网络对待识别图像进行处理,得到待识别图像的特征向量,其中,行人再识别网络是根据上述第一方面的训练方法训练得到的;根据待识别图像的特征向量与已有的行人图像的特征向量进行比对,得到待识别图像的识别结果。In a second aspect, a pedestrian re-identification method is provided. The method includes: acquiring an image to be recognized; processing the image to be recognized with a pedestrian re-identification network to obtain a feature vector of the image, where the pedestrian re-identification network is trained with the training method of the first aspect; and comparing the feature vector of the image to be recognized with feature vectors of existing pedestrian images to obtain a recognition result for the image to be recognized.

本申请中,由于采用第一方面的训练方法训练得到的行人再识别网络能够更好的进行特征的提取,因此,采用第一方面的训练方法训练得到的行人再识别网络进行行人识别能够取得更好的行人识别结果。In this application, because the pedestrian re-identification network trained with the training method of the first aspect can extract features better, using that network for pedestrian recognition can achieve better pedestrian recognition results.

结合第二方面,在第二方面的某些实现方式中,上述根据待识别图像的特征向量与已有的行人图像的特征向量进行比对,得到待识别图像的识别结果,包括:输出目标行人图像,以及目标行人图像的属性信息。With reference to the second aspect, in some implementations of the second aspect, comparing the feature vector of the image to be recognized with the feature vectors of existing pedestrian images to obtain the recognition result includes: outputting a target pedestrian image and attribute information of the target pedestrian image.

其中,上述目标行人图像可以是已有的行人图像中特征向量与待识别图像的特征向量最相似的行人图像,该目标行人图像的属性信息包括该目标行人图像的拍摄时间,拍摄位置。另外,上述目标行人图像的属性信息中还可以包括行人的身份信息等。The above-mentioned target pedestrian image may be a pedestrian image whose feature vector is most similar to the feature vector of the image to be recognized in the existing pedestrian image, and the attribute information of the target pedestrian image includes the shooting time and shooting location of the target pedestrian image. In addition, the attribute information of the target pedestrian image may also include the identity information of the pedestrian and the like.
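The comparison step of the second aspect can be sketched as a nearest-neighbor search over the existing pedestrian images. The gallery entry layout (`feat`, `time`, `location`) and the cosine similarity measure below are assumptions for illustration, not details fixed by this application:

```python
def identify(query_feat, gallery):
    """Compare the query image's feature vector with the stored pedestrian
    feature vectors and return the most similar gallery entry together
    with its attribute information (capture time and location).
    Assumes non-zero feature vectors; gallery entries are dicts:
    {'feat': [...], 'time': ..., 'location': ...}."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def cosine(a, b):
        return dot(a, b) / (dot(a, a) ** 0.5 * dot(b, b) ** 0.5)
    best = max(gallery, key=lambda g: cosine(query_feat, g['feat']))
    return best, {'time': best['time'], 'location': best['location']}
```

A deployed system would typically return the top-k most similar images for subsequent checking and screening rather than a single best match.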

第三方面,提供了一种行人再识别网络的训练装置,该行人再识别网络的训练装置包括用于执行上述第一方面中的方法中的各个模块。In a third aspect, a training device for a pedestrian re-identification network is provided. The training device for the pedestrian re-identification network includes various modules for executing the method in the above-mentioned first aspect.

第四方面,提供了一种行人再识别装置,该装置包括用于执行上述第二方面中的方法中的各个模块。In a fourth aspect, a pedestrian re-identification device is provided. The device includes modules for executing the method in the second aspect.

第五方面,提供了一种行人再识别网络的训练装置,该装置包括:存储器,用于存储程序;处理器,用于执行所述存储器存储的程序,当所述存储器存储的程序被执行时,所述处理器用于执行上述第一方面中的方法。In a fifth aspect, a training device for a pedestrian re-identification network is provided. The device includes: a memory for storing a program; and a processor for executing the program stored in the memory, where, when the program stored in the memory is executed, the processor is configured to execute the method in the first aspect.

第六方面,提供了一种行人再识别装置,该装置包括:存储器,用于存储程序;处理器,用于执行所述存储器存储的程序,当所述存储器存储的程序被执行时,所述处理器用于执行上述第二方面中的方法。In a sixth aspect, a pedestrian re-identification device is provided. The device includes: a memory for storing a program; and a processor for executing the program stored in the memory, where, when the program stored in the memory is executed, the processor is configured to execute the method in the second aspect.

第七方面,提供了一种计算机设备,该计算机设备包括上述第三方面中的行人再识别网络的训练装置。In a seventh aspect, a computer device is provided, and the computer device includes the training device for the pedestrian re-identification network in the third aspect.

在上述第七方面中,该计算机设备具体可以是服务器或者云端设备等等。In the above seventh aspect, the computer device may specifically be a server or a cloud device or the like.

第八方面,提供了一种电子设备,该电子设备包括上述第四方面的行人再识别装置。In an eighth aspect, an electronic device is provided, and the electronic device includes the pedestrian re-identification device of the fourth aspect.

在上述第八方面中,电子设备具体可以是移动终端(例如,智能手机),平板电脑,笔记本电脑,增强现实/虚拟现实设备以及车载终端设备等等。In the above eighth aspect, the electronic device may specifically be a mobile terminal (for example, a smart phone), a tablet computer, a notebook computer, an augmented reality/virtual reality device, a vehicle-mounted terminal device, and so on.

第九方面,提供一种计算机可读存储介质,该计算机可读存储介质存储有程序代码,该程序代码包括用于执行第一方面或第二方面中的任意一种方法中的步骤的指令。In a ninth aspect, a computer-readable storage medium is provided, and the computer-readable storage medium stores program code, and the program code includes instructions for executing steps in any one of the first aspect or the second aspect.

第十方面,提供一种包含指令的计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述第一方面或第二方面中的任意一种方法。In a tenth aspect, a computer program product containing instructions is provided. When the computer program product runs on a computer, the computer executes any one of the methods in the first aspect or the second aspect.

第十一方面,提供一种芯片,所述芯片包括处理器与数据接口,所述处理器通过所述数据接口读取存储器上存储的指令,执行上述第一方面或第二方面中的任意一种方法。In an eleventh aspect, a chip is provided. The chip includes a processor and a data interface, and the processor reads, through the data interface, instructions stored in a memory to execute any one of the methods in the first aspect or the second aspect.

可选地,作为一种实现方式,所述芯片还可以包括存储器,所述存储器中存储有指令,所述处理器用于执行所述存储器上存储的指令,当所述指令被执行时,所述处理器用于执行上述第一方面或第二方面中的任意一种方法。Optionally, as an implementation, the chip may further include a memory in which instructions are stored, and the processor is configured to execute the instructions stored in the memory; when the instructions are executed, the processor is configured to execute any one of the methods in the first aspect or the second aspect.

上述芯片具体可以是现场可编程门阵列FPGA或者专用集成电路ASIC。The above-mentioned chip may specifically be a field programmable gate array FPGA or an application-specific integrated circuit ASIC.

应理解,本申请中,第一方面的方法具体可以是指第一方面以及第一方面中各种实现方式中的任意一种实现方式中的方法,第二方面的方法具体可以是指第二方面以及第二方面中各种实现方式中的任意一种实现方式中的方法。It should be understood that, in this application, the method of the first aspect may specifically refer to the method in the first aspect or in any one of the various implementations of the first aspect, and the method of the second aspect may specifically refer to the method in the second aspect or in any one of the various implementations of the second aspect.

附图说明Description of the drawings

图1是本申请实施例提供的系统架构的结构示意图;FIG. 1 is a schematic structural diagram of a system architecture provided by an embodiment of the present application;

图2是利用本申请实施例提供的卷积神经网络模型进行行人再识别的示意图;FIG. 2 is a schematic diagram of re-identifying pedestrians using the convolutional neural network model provided by an embodiment of the present application;

图3是本申请实施例提供的一种芯片硬件结构示意图;FIG. 3 is a schematic diagram of a chip hardware structure provided by an embodiment of the present application;

图4是本申请实施例提供的一种系统架构的示意图;FIG. 4 is a schematic diagram of a system architecture provided by an embodiment of the present application;

图5是本申请实施例的一种可能的应用场景的示意图;FIG. 5 is a schematic diagram of a possible application scenario of an embodiment of the present application;

图6是本申请实施例的行人再识别网络的训练方法的总体流程示意图;FIG. 6 is a schematic diagram of the overall flow of the training method of the pedestrian re-identification network according to an embodiment of the present application;

图7是本申请实施例的行人再识别网络的训练方法的示意性流程图;FIG. 7 is a schematic flowchart of a method for training a pedestrian re-identification network according to an embodiment of the present application;

图8是确定损失函数的函数值的过程的示意图;FIG. 8 is a schematic diagram of the process of determining the function value of the loss function;

图9是本申请实施例的行人再识别方法的示意性流程图;FIG. 9 is a schematic flowchart of a pedestrian re-identification method according to an embodiment of the present application;

图10是本申请实施例的行人再识别网络的训练装置的示意性框图;FIG. 10 is a schematic block diagram of a training device for a pedestrian re-identification network according to an embodiment of the present application;

图11是本申请实施例的行人再识别网络的训练装置的示意性框图;FIG. 11 is a schematic block diagram of a training device for a pedestrian re-identification network according to an embodiment of the present application;

图12是本申请实施例的行人再识别装置的示意性框图;Fig. 12 is a schematic block diagram of a pedestrian re-identification device according to an embodiment of the present application;

图13是本申请实施例的行人再识别装置的示意性框图。Fig. 13 is a schematic block diagram of a pedestrian re-identification device according to an embodiment of the present application.

具体实施方式detailed description

下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。The following describes the technical solutions in the embodiments of the present application with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, rather than all the embodiments. Based on the embodiments in this application, all other embodiments obtained by a person of ordinary skill in the art without creative work shall fall within the protection scope of this application.

本申请的方案可以应用在城市监控,平安城市等领域。The solution of this application can be applied in the fields of city monitoring, safe city and so on.

具体地,本申请可以应用在智能监控系统寻人的场景中,下面对该场景下的应用进行介绍。Specifically, the present application can be applied to the scene of tracing people in the intelligent monitoring system, and the application in this scene will be introduced below.

智能监控系统寻人:Intelligent monitoring system to find people:

以部署在某园区的智能监控系统为例,该智能监控系统可以采集各个图像拍摄设备下拍摄到的行人的图像,形成图像库。接下来,可以利用图像库中的图像对行人再识别网络(也可以称为行人再识别模型)进行训练,得到训练好的行人再识别网络。Take an intelligent monitoring system deployed in a park as an example. The intelligent monitoring system can collect the images of pedestrians captured by various image capturing devices to form an image library. Next, the pedestrian re-recognition network (also called a pedestrian re-recognition model) can be trained using the images in the image library to obtain a trained pedestrian re-recognition network.

接下来,就可以利用该训练好的行人再识别网络提取采集到的行人图像的特征向量。当一个人行踪可疑,或者有其他需要跨镜头跟踪该行人的情况时,可以将行人再识别网络采集到的行人图像的特征向量与图像库中图像的特征向量进行对比,并返回特征向量最相似的行人图像,并给出这些图像的拍摄时间、位置等基本信息。再经过后续的核对筛选后,即可完成寻人过程。Next, the trained pedestrian re-identification network can be used to extract feature vectors from collected pedestrian images. When a person's whereabouts are suspicious, or the pedestrian otherwise needs to be tracked across cameras, the feature vector of the collected pedestrian image can be compared with the feature vectors of images in the image library, the pedestrian images with the most similar feature vectors are returned, and basic information such as the capture time and location of those images is provided. After subsequent checking and screening, the person-finding process is complete.

在本申请方案中,行人再识别网络可以是一种神经网络(模型),为了更好地理解本申请方案,下面先对神经网络的相关术语和概念进行介绍。In the solution of this application, the pedestrian re-identification network may be a neural network (model). In order to better understand the solution of this application, the following first introduces the related terms and concepts of the neural network.

(1)神经网络(1) Neural network

神经网络可以是由神经单元组成的,神经单元可以是指以$x_s$和截距1为输入的运算单元,该运算单元的输出可以如公式(1)所示:A neural network can be composed of neural units. A neural unit can refer to an arithmetic unit that takes $x_s$ and an intercept of 1 as inputs, and the output of the arithmetic unit can be as shown in formula (1):

$$h_{W,b}(x)=f(W^{T}x)=f\left(\sum_{s=1}^{n}W_{s}x_{s}+b\right)\qquad(1)$$

其中,s=1、2、……n,n为大于1的自然数,$W_s$为$x_s$的权重,b为神经单元的偏置。f为神经单元的激活函数(activation functions),该激活函数用于对神经网络中的特征进行非线性变换,从而将神经单元中的输入信号转换为输出信号。该激活函数的输出信号可以作为下一层卷积层的输入,激活函数可以是sigmoid函数。神经网络是将多个上述单一的神经单元联结在一起形成的网络,即一个神经单元的输出可以是另一个神经单元的输入。每个神经单元的输入可以与前一层的局部接受域相连,来提取局部接受域的特征,局部接受域可以是由若干个神经单元组成的区域。Among them, s = 1, 2, ..., n, n is a natural number greater than 1, $W_s$ is the weight of $x_s$, and b is the bias of the neural unit. f is the activation function of the neural unit, which performs a non-linear transformation on features in the neural network, converting the input signal of the neural unit into an output signal. The output of the activation function can serve as the input of the next convolutional layer, and the activation function can be a sigmoid function. A neural network is a network formed by joining many such single neural units together, that is, the output of one neural unit can be the input of another. The input of each neural unit can be connected to the local receptive field of the previous layer to extract features of the local receptive field; the local receptive field can be a region composed of several neural units.
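The output of such a neural unit can be evaluated numerically as follows, using the sigmoid activation mentioned above; this is a plain illustration of the formula, not part of the claimed method:

```python
import math

def sigmoid(z):
    """Sigmoid activation: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(xs, ws, b, f=sigmoid):
    """Formula (1): the unit's output is f(sum_s W_s * x_s + b),
    where ws are the weights, b is the bias and f the activation."""
    return f(sum(w * x for w, x in zip(ws, xs)) + b)
```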

(2)深度神经网络(2) Deep neural network

深度神经网络(deep neural network,DNN),也称多层神经网络,可以理解为具有多层隐含层的神经网络。按照不同层的位置对DNN进行划分,DNN内部的神经网络可以分为三类:输入层,隐含层,输出层。一般来说第一层是输入层,最后一层是输出层,中间的层数都是隐含层。层与层之间是全连接的,也就是说,第i层的任意一个神经元一定与第i+1层的任意一个神经元相连。Deep neural network (DNN), also known as multi-layer neural network, can be understood as a neural network with multiple hidden layers. The DNN is divided according to the positions of different layers. The neural network inside the DNN can be divided into three categories: input layer, hidden layer, and output layer. Generally speaking, the first layer is the input layer, the last layer is the output layer, and the number of layers in the middle are all hidden layers. The layers are fully connected, that is to say, any neuron in the i-th layer must be connected to any neuron in the i+1th layer.

虽然DNN看起来很复杂,但是就每一层的工作来说,其实并不复杂,简单来说就是如下线性关系表达式:$\vec{y}=\alpha(W\cdot\vec{x}+\vec{b})$,其中,$\vec{x}$是输入向量,$\vec{y}$是输出向量,$\vec{b}$是偏移向量,W是权重矩阵(也称系数),α()是激活函数。每一层仅仅是对输入向量$\vec{x}$经过如此简单的操作得到输出向量$\vec{y}$。由于DNN层数多,系数W和偏移向量$\vec{b}$的数量也比较多。这些参数在DNN中的定义如下所述:以系数W为例,假设在一个三层的DNN中,第二层的第4个神经元到第三层的第2个神经元的线性系数定义为$W^{3}_{24}$,上标3代表系数W所在的层数,而下标对应的是输出的第三层索引2和输入的第二层索引4。Although the DNN looks complicated, the work of each layer is actually not complicated; simply put, it is the following linear relational expression: $\vec{y}=\alpha(W\cdot\vec{x}+\vec{b})$, where $\vec{x}$ is the input vector, $\vec{y}$ is the output vector, $\vec{b}$ is the offset (bias) vector, W is the weight matrix (also called coefficients), and α() is the activation function. Each layer simply performs this operation on the input vector $\vec{x}$ to obtain the output vector $\vec{y}$. Because a DNN has many layers, there are correspondingly many coefficients W and offset vectors $\vec{b}$. These parameters are defined in the DNN as follows: taking the coefficient W as an example, suppose that in a three-layer DNN the linear coefficient from the 4th neuron of the second layer to the 2nd neuron of the third layer is defined as $W^{3}_{24}$, where the superscript 3 represents the layer of the coefficient W, and the subscripts correspond to the output third-layer index 2 and the input second-layer index 4.

综上,第L-1层的第k个神经元到第L层的第j个神经元的系数定义为$W^{L}_{jk}$。In summary, the coefficient from the k-th neuron in the (L-1)-th layer to the j-th neuron in the L-th layer is defined as $W^{L}_{jk}$.

需要注意的是,输入层是没有W参数的。在深度神经网络中,更多的隐含层让网络更能够刻画现实世界中的复杂情形。理论上而言,参数越多的模型复杂度越高,“容量”也就越大,也就意味着它能完成更复杂的学习任务。训练深度神经网络的也就是学习权重矩阵的过程,其最终目的是得到训练好的深度神经网络的所有层的权重矩阵(由很多层的向量W形成的权重矩阵)。It should be noted that there is no W parameter in the input layer. In deep neural networks, more hidden layers make the network more capable of portraying complex situations in the real world. In theory, a model with more parameters is more complex and has a greater "capacity", which means it can complete more complex learning tasks. Training the deep neural network is also the process of learning the weight matrix, and its ultimate goal is to obtain the weight matrix of all layers of the trained deep neural network (the weight matrix formed by the vector W of many layers).
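The per-layer linear relation described above can be illustrated with a small sketch, where `W[j][k]` plays the role of the coefficient from the k-th input neuron to the j-th output neuron; this is a generic illustration rather than the application's implementation:

```python
def dnn_layer(x, W, b, alpha):
    """One DNN layer: y = alpha(W x + b). W is a list of rows, so
    W[j][k] is the coefficient from input neuron k to output neuron j."""
    return [alpha(sum(w_jk * x_k for w_jk, x_k in zip(row, x)) + b_j)
            for row, b_j in zip(W, b)]
```

Stacking several such calls, each with its own W and b, gives the hidden layers of a DNN.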

(3)卷积神经网络(3) Convolutional neural network

卷积神经网络(convolutional neuron network,CNN)是一种带有卷积结构的深度神经网络。卷积神经网络包含了一个由卷积层和子采样层构成的特征抽取器,该特征抽取器可以看作是滤波器。卷积层是指卷积神经网络中对输入信号进行卷积处理的神经元层。在卷积神经网络的卷积层中,一个神经元可以只与部分邻层神经元连接。一个卷积层中,通常包含若干个特征平面,每个特征平面可以由一些矩形排列的神经单元组成。同一特征平面的神经单元共享权重,这里共享的权重就是卷积核。共享权重可以理解为提取图像信息的方式与位置无关。卷积核可以以随机大小的矩阵的形式初始化,在卷积神经网络的训练过程中卷积核可以通过学习得到合理的权重。另外,共享权重带来的直接好处是减少卷积神经网络各层之间的连接,同时又降低了过拟合的风险。A convolutional neural network (CNN) is a deep neural network with a convolutional structure. A convolutional neural network contains a feature extractor composed of convolutional layers and sub-sampling layers, and this feature extractor can be regarded as a filter. A convolutional layer is a layer of neurons in the convolutional neural network that performs convolution on the input signal. In a convolutional layer, a neuron may be connected to only some of the neurons in adjacent layers. A convolutional layer usually contains several feature planes, and each feature plane may be composed of rectangularly arranged neural units. Neural units in the same feature plane share weights, and the shared weights are the convolution kernel. Weight sharing can be understood as meaning that the way image information is extracted is independent of position. A convolution kernel can be initialized as a matrix of random size, and during training it can learn reasonable weights. In addition, a direct benefit of weight sharing is reducing the connections between layers of the convolutional neural network while also reducing the risk of overfitting.

(4)残差网络(4) Residual network

残差网络是在2015年提出的一种深度卷积网络,相比于传统的卷积神经网络,残差网络更容易优化,并且能够通过增加相当的深度来提高准确率。残差网络的核心是解决了增加深度带来的副作用(退化问题),这样能够通过单纯地增加网络深度,来提高网络性能。残差网络一般会包含很多结构相同的子模块,通常以残差网络(residual network,ResNet)的名称后接一个数字来表示子模块重复的次数,比如ResNet50表示残差网络中有50个子模块。The residual network is a deep convolutional network proposed in 2015. Compared with the traditional convolutional neural network, the residual network is easier to optimize and can improve accuracy through considerably increased depth. The core of the residual network is that it solves the side effect (the degradation problem) caused by increasing depth, so that network performance can be improved by simply increasing the network depth. A residual network generally contains many sub-modules with the same structure; the name "ResNet" is usually followed by a number indicating how many times the sub-module is repeated. For example, ResNet50 indicates that there are 50 sub-modules in the residual network.
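The core idea of a residual sub-module, adding the input back onto the transformed output so that deeper networks remain easy to optimize, can be sketched as follows (a hypothetical toy transform stands in for the convolutional branch; the values are illustrative):

```python
def residual_block(x, transform):
    # Residual connection: output = F(x) + identity shortcut x.
    return [f + xi for f, xi in zip(transform(x), x)]

# Illustrative stand-in for the learned sub-module transform F.
double = lambda v: [2 * vi for vi in v]
print(residual_block([1.0, 2.0, 3.0], double))  # [3.0, 6.0, 9.0]
```

Because the identity shortcut is always available, stacking many such sub-modules does not prevent gradients from flowing back to early layers.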

(6)分类器(6) Classifier

很多神经网络结构最后都有一个分类器,用于对图像中的物体进行分类。分类器一般由全连接层(fully connected layer)和softmax函数(可以称为归一化指数函数)组成,能够根据输入而输出不同类别的概率。Many neural network structures have a classifier at the end, which is used to classify objects in an image. The classifier generally consists of a fully connected layer and a softmax function (which may be called a normalized exponential function), and can output the probabilities of different classes based on the input.
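A minimal sketch of such a classifier, a fully connected layer followed by the softmax (normalized exponential) function; the input vector, weights, and biases below are illustrative assumptions, not values from the patent:

```python
import math

def fully_connected(x, W, b):
    # Each output is a weighted sum of the inputs plus a bias.
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def softmax(logits):
    # Normalized exponential; subtracting the max keeps it numerically stable.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

x = [1.0, 2.0]                               # assumed input features
W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]     # assumed weights, 3 classes
b = [0.0, 0.0, 0.0]
probs = softmax(fully_connected(x, W, b))
print(probs, sum(probs))                     # probabilities sum to 1
```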

(7)损失函数(7) Loss function

在训练深度神经网络的过程中,因为希望深度神经网络的输出尽可能的接近真正想要预测的值,所以可以通过比较当前网络的预测值和真正想要的目标值,再根据两者之间的差异情况来更新每一层神经网络的权重向量(当然,在第一次更新之前通常会有初始化的过程,即为深度神经网络中的各层预先配置参数),比如,如果网络的预测值高了,就调整权重向量让它预测低一些,不断地调整,直到深度神经网络能够预测出真正想要的目标值或与真正想要的目标值非常接近的值。因此,就需要预先定义“如何比较预测值和目标值之间的差异”,这便是损失函数(loss function)或目标函数(objective function),它们是用于衡量预测值和目标值的差异的重要方程。其中,以损失函数举例,损失函数的输出值(loss)越高表示差异越大,那么深度神经网络的训练就变成了尽可能缩小这个loss的过程。In the process of training a deep neural network, because it is hoped that the output of the deep neural network is as close as possible to the value that is really to be predicted, the predicted value of the current network can be compared with the really desired target value, and the weight vector of each layer of the neural network can then be updated according to the difference between the two (of course, there is usually an initialization process before the first update, that is, parameters are preconfigured for each layer of the deep neural network). For example, if the predicted value of the network is too high, the weight vector is adjusted to make the prediction lower, and the adjustment continues until the deep neural network can predict the really desired target value or a value very close to it. Therefore, it is necessary to predefine "how to compare the difference between the predicted value and the target value". This is the loss function or objective function, which is an important equation used to measure the difference between the predicted value and the target value. Taking the loss function as an example, a higher output value (loss) of the loss function indicates a greater difference, so training the deep neural network becomes a process of reducing this loss as much as possible.
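As a concrete illustration (not from the patent text), the mean squared error below is one common choice of loss function: the larger the difference between the prediction and the target, the larger the loss value, and training aims to shrink it:

```python
def mse_loss(pred, target):
    # Mean squared error: a measure of how far predictions are from targets.
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

print(round(mse_loss([0.9, 0.1], [1.0, 0.0]), 4))  # small difference -> low loss
print(round(mse_loss([0.1, 0.9], [1.0, 0.0]), 4))  # large difference -> high loss
```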

(8)反向传播算法(8) Backpropagation algorithm

神经网络可以采用误差反向传播(back propagation,BP)算法在训练过程中修正初始的神经网络模型中参数的数值,使得神经网络模型的重建误差损失越来越小。具体地,前向传递输入信号直至输出会产生误差损失,通过反向传播误差损失信息来更新初始的神经网络模型中参数,从而使误差损失收敛。反向传播算法是以误差损失为主导的反向传播运动,旨在得到最优的神经网络模型的参数,例如权重矩阵。The neural network can use the backpropagation (BP) algorithm to modify the parameter values in the initial neural network model during the training process, so that the reconstruction error loss of the neural network model becomes smaller and smaller. Specifically, forwarding the input signal until the output will cause error loss, and the parameters in the initial neural network model are updated by backpropagating the error loss information, so that the error loss is converged. The backpropagation algorithm is a backpropagation motion dominated by error loss, and aims to obtain the optimal parameters of the neural network model, such as the weight matrix.
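A one-parameter sketch of this idea (all values are illustrative): the prediction error is propagated backward as a gradient, and the weight is repeatedly moved against the gradient until the prediction converges to the target:

```python
def train_step(w, x, target, lr=0.1):
    # Forward pass: prediction = w * x; loss = (prediction - target)^2.
    pred = w * x
    # Backward pass: d(loss)/dw = 2 * (pred - target) * x; step against it.
    grad = 2 * (pred - target) * x
    return w - lr * grad

w = 0.0
for _ in range(50):
    w = train_step(w, x=1.0, target=3.0)
print(round(w, 3))  # converges close to the target value 3.0
```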

下面结合图1对本申请实施例的系统架构进行详细的介绍。The system architecture of the embodiment of the present application will be described in detail below in conjunction with FIG. 1.

图1是本申请实施例的系统架构的示意图。如图1所示,系统架构100包括执行设备110、训练设备120、数据库130、客户设备140、数据存储系统150、以及数据采集系统160。FIG. 1 is a schematic diagram of the system architecture of an embodiment of the present application. As shown in FIG. 1, the system architecture 100 includes an execution device 110, a training device 120, a database 130, a client device 140, a data storage system 150, and a data collection system 160.

另外,执行设备110包括计算模块111、I/O接口112、预处理模块113和预处理模块114。其中,计算模块111中可以包括目标模型/规则101,预处理模块113和预处理模块114是可选的。In addition, the execution device 110 includes a calculation module 111, an I/O interface 112, a preprocessing module 113, and a preprocessing module 114. Among them, the calculation module 111 may include the target model/rule 101, and the preprocessing module 113 and the preprocessing module 114 are optional.

数据采集设备160用于采集训练数据。针对本申请实施例的行人再识别网络的训练方法来说,训练数据可以包括M个训练图像以及该M个训练图像的标注数据。在采集到训练数据之后,数据采集设备160将这些训练数据存入数据库130,训练设备120基于数据库130中维护的训练数据训练得到目标模型/规则101。The data collection device 160 is used to collect training data. For the training method of the pedestrian re-recognition network in the embodiment of the present application, the training data may include M training images and the annotation data of the M training images. After the training data is collected, the data collection device 160 stores the training data in the database 130, and the training device 120 trains to obtain the target model/rule 101 based on the training data maintained in the database 130.

下面对训练设备120基于训练数据得到目标模型/规则101进行描述,训练设备120对输入的训练图像进行特征提取,得到训练图像的特征向量,重复对输入的训练图像进行特征提取,直到损失函数的函数值满足预设要求(小于或者等于预设阈值),从而完成目标模型/规则101的训练。The following describes how the training device 120 obtains the target model/rule 101 based on the training data. The training device 120 performs feature extraction on an input training image to obtain a feature vector of the training image, and repeats feature extraction on input training images until the function value of the loss function satisfies a preset requirement (for example, is less than or equal to a preset threshold), thereby completing the training of the target model/rule 101.

应理解,上述目标模型/规则101的训练可以是一个无监督的训练。It should be understood that the training of the aforementioned target model/rule 101 may be an unsupervised training.

上述目标模型/规则101能够用于实现本申请实施例的行人再识别方法,即,将行人图像(行人图像可以是需要进行行人识别的图像)输入该目标模型/规则101,即可对行人图像提取特征向量,并基于提取到的特征向量进行行人识别,确定行人的识别结果。本申请实施例中的目标模型/规则101具体可以为神经网络。需要说明的是,在实际应用中,数据库130中维护的训练数据不一定都来自于数据采集设备160的采集,也有可能是从其他设备接收得到的。另外需要说明的是,训练设备120也不一定完全基于数据库130维护的训练数据进行目标模型/规则101的训练,也有可能从云端或其他地方获取训练数据进行模型训练,上述描述不应该作为对本申请实施例的限定。The above target model/rule 101 can be used to implement the pedestrian re-identification method of the embodiments of this application. That is, a pedestrian image (which may be an image on which pedestrian recognition needs to be performed) is input into the target model/rule 101; a feature vector is extracted from the pedestrian image, pedestrian recognition is performed based on the extracted feature vector, and a pedestrian recognition result is determined. The target model/rule 101 in the embodiments of this application may specifically be a neural network. It should be noted that, in actual applications, the training data maintained in the database 130 does not necessarily all come from the collection of the data collection device 160, and may also be received from other devices. It should also be noted that the training device 120 does not necessarily train the target model/rule 101 entirely based on the training data maintained in the database 130; it may also obtain training data from the cloud or elsewhere for model training. The above description should not be construed as a limitation on the embodiments of this application.

根据训练设备120训练得到的目标模型/规则101可以应用于不同的系统或设备中,如应用于图1所示的执行设备110,所述执行设备110可以是终端,如手机终端,平板电脑,笔记本电脑,增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR),车载终端等,还可以是服务器或者云端等。在图1中,执行设备110配置输入/输出(input/output,I/O)接口112,用于与外部设备进行数据交互,用户可以通过客户设备140向I/O接口112输入数据,所述输入数据在本申请实施例中可以包括:客户设备输入的行人图像。这里的客户设备140具体可以是监控设备。The target model/rule 101 trained by the training device 120 can be applied to different systems or devices, such as the execution device 110 shown in FIG. 1. The execution device 110 may be a terminal, such as a mobile phone terminal, a tablet computer, a notebook computer, an augmented reality (AR)/virtual reality (VR) device, or an in-vehicle terminal, or may be a server, a cloud, or the like. In FIG. 1, the execution device 110 is configured with an input/output (I/O) interface 112 for data interaction with external devices. A user can input data to the I/O interface 112 through the client device 140. In the embodiments of this application, the input data may include a pedestrian image input by the client device. The client device 140 here may specifically be a monitoring device.

预处理模块113和预处理模块114用于根据I/O接口112接收到的输入数据(如行人图像)进行预处理,在本申请实施例中,可以没有预处理模块113和预处理模块114,或者只有一个预处理模块。当不存在预处理模块113和预处理模块114时,可以直接采用计算模块111对输入数据进行处理。The preprocessing module 113 and the preprocessing module 114 are used to perform preprocessing based on the input data (such as a pedestrian image) received by the I/O interface 112. In the embodiments of this application, there may be no preprocessing module 113 and preprocessing module 114, or there may be only one preprocessing module. When the preprocessing module 113 and the preprocessing module 114 do not exist, the calculation module 111 can be used directly to process the input data.

在执行设备110对输入数据进行预处理,或者在执行设备110的计算模块111执行计算等相关的处理过程中,执行设备110可以调用数据存储系统150中的数据、代码等以用于相应的处理,也可以将相应处理得到的数据、指令等存入数据存储系统150中。When the execution device 110 preprocesses the input data, or when the calculation module 111 of the execution device 110 performs calculation or other related processing, the execution device 110 may invoke the data, code, and the like in the data storage system 150 for the corresponding processing, and may also store the data, instructions, and the like obtained through the corresponding processing into the data storage system 150.

最后,I/O接口112将处理结果(具体可以是行人再识别得到的高质量图像),如将目标模型/规则101对行人图像进行行人再识别处理得到的待识别图像的识别结果呈现给客户设备140,从而提供给用户。Finally, the I/O interface 112 presents the processing result (which may specifically be a high-quality image obtained through pedestrian re-identification), such as the recognition result of the to-be-recognized image obtained by performing pedestrian re-identification processing on the pedestrian image by using the target model/rule 101, to the client device 140, so as to provide it to the user.

具体地,经过计算模块111中的目标模型/规则101进行行人再识别得到的高质量图像可以通过预处理模块113(也可以再加上预处理模块114的处理)的处理(例如,进行图像渲染处理)后将处理结果送入到I/O接口,再由I/O接口将处理结果送入到客户设备140中显示。Specifically, the high-quality image obtained through pedestrian re-identification by the target model/rule 101 in the calculation module 111 may be processed by the preprocessing module 113 (and optionally also by the preprocessing module 114), for example, through image rendering processing; the processing result is then sent to the I/O interface, and the I/O interface sends the processing result to the client device 140 for display.

应理解,当上述系统架构100中不存在预处理模块113和预处理模块114时,计算模块111还可以将通过行人再识别处理得到的高质量图像传输到I/O接口,然后再由I/O接口将处理结果送入到客户设备140中显示。It should be understood that when the preprocessing module 113 and the preprocessing module 114 do not exist in the above system architecture 100, the calculation module 111 may also transmit the high-quality image obtained through the pedestrian re-identification processing to the I/O interface, and the I/O interface then sends the processing result to the client device 140 for display.

值得说明的是,训练设备120可以针对不同的目标或称不同的任务(例如,训练设备可以针对不同场景下真实高质量图像和近似低质量图像进行训练),基于不同的训练数据生成相应的目标模型/规则101,该相应的目标模型/规则101即可以用于实现上述目标或完成上述任务,从而为用户提供所需的结果。It is worth noting that the training device 120 can generate corresponding target models/rules 101 based on different training data for different targets, or different tasks (for example, the training device can perform training on real high-quality images and approximate low-quality images in different scenarios); the corresponding target model/rule 101 can then be used to achieve the above targets or complete the above tasks, so as to provide users with the desired results.

值得注意的是,图1仅是本申请实施例提供的一种系统架构的示意图,图中所示设备、器件、模块等之间的位置关系不构成任何限制,例如,在图1中,数据存储系统150相对执行设备110是外部存储器,在其它情况下,也可以将数据存储系统150置于执行设备110中。It is worth noting that FIG. 1 is only a schematic diagram of a system architecture provided by an embodiment of this application, and the positional relationship between the devices, components, modules, and the like shown in the figure does not constitute any limitation. For example, in FIG. 1, the data storage system 150 is an external memory relative to the execution device 110; in other cases, the data storage system 150 may also be placed in the execution device 110.

如图1所示,根据训练设备120训练得到目标模型/规则101,可以是神经网络(模型)。具体的,该神经网络(模型)可以是CNN以及深度卷积神经网络(deep convolutional neural networks,DCNN)等等。As shown in FIG. 1, the target model/rule 101 obtained by training according to the training device 120 may be a neural network (model). Specifically, the neural network (model) may be CNN, deep convolutional neural networks (deep convolutional neural networks, DCNN), and so on.

由于CNN是一种非常常见的神经网络,下面结合图2重点对CNN的结构进行详细的介绍。如上文的基础概念介绍所述,卷积神经网络是一种带有卷积结构的深度神经网络,是一种深度学习(deep learning)架构,深度学习架构是指通过机器学习的算法,在不同的抽象层级上进行多个层次的学习。作为一种深度学习架构,CNN是一种前馈(feed-forward)人工神经网络,该前馈人工神经网络中的各个神经元可以对输入其中的图像作出响应。Since the CNN is a very common neural network, the structure of the CNN is described in detail below with reference to FIG. 2. As described in the introduction to the basic concepts above, a convolutional neural network is a deep neural network with a convolutional structure and is a deep learning architecture. A deep learning architecture refers to performing multiple levels of learning at different levels of abstraction by using machine learning algorithms. As a deep learning architecture, the CNN is a feed-forward artificial neural network, in which each neuron can respond to an image input into it.

如图2所示,卷积神经网络(CNN)200可以包括输入层210,卷积层/池化层220(其中池化层为可选的),以及全连接层(fully connected layer)230。下面对这些层的相关内容做详细介绍。As shown in FIG. 2, a convolutional neural network (CNN) 200 may include an input layer 210, a convolutional layer/pooling layer 220 (the pooling layer is optional), and a fully connected layer 230. The following is a detailed introduction to the relevant content of these layers.

卷积层/池化层220:Convolutional layer/pooling layer 220:

卷积层:Convolutional layer:

如图2所示卷积层/池化层220可以包括如示例221-226层,举例来说:在一种实现中,221层为卷积层,222层为池化层,223层为卷积层,224层为池化层,225为卷积层,226为池化层;在另一种实现方式中,221、222为卷积层,223为池化层,224、225为卷积层,226为池化层。即卷积层的输出可以作为随后的池化层的输入,也可以作为另一个卷积层的输入以继续进行卷积操作。As shown in FIG. 2, the convolutional layer/pooling layer 220 may include layers 221-226 as examples. For example, in one implementation, layer 221 is a convolutional layer, layer 222 is a pooling layer, layer 223 is a convolutional layer, layer 224 is a pooling layer, layer 225 is a convolutional layer, and layer 226 is a pooling layer; in another implementation, layers 221 and 222 are convolutional layers, layer 223 is a pooling layer, layers 224 and 225 are convolutional layers, and layer 226 is a pooling layer. That is, the output of a convolutional layer can be used as the input of a subsequent pooling layer, or as the input of another convolutional layer to continue the convolution operation.

下面将以卷积层221为例,介绍一层卷积层的内部工作原理。The following will take the convolutional layer 221 as an example to introduce the internal working principle of a convolutional layer.

卷积层221可以包括很多个卷积算子,卷积算子也称为核,其在图像处理中的作用相当于一个从输入图像矩阵中提取特定信息的过滤器,卷积算子本质上可以是一个权重矩阵,这个权重矩阵通常被预先定义,在对图像进行卷积操作的过程中,权重矩阵通常在输入图像上沿着水平方向一个像素接着一个像素(或两个像素接着两个像素……这取决于步长stride的取值)的进行处理,从而完成从图像中提取特定特征的工作。该权重矩阵的大小应该与图像的大小相关,需要注意的是,权重矩阵的纵深维度(depth dimension)和输入图像的纵深维度是相同的,在进行卷积运算的过程中,权重矩阵会延伸到输入图像的整个深度。因此,和一个单一的权重矩阵进行卷积会产生一个单一纵深维度的卷积化输出,但是大多数情况下不使用单一权重矩阵,而是应用多个尺寸(行×列)相同的权重矩阵,即多个同型矩阵。每个权重矩阵的输出被堆叠起来形成卷积图像的纵深维度,这里的维度可以理解为由上面所述的“多个”来决定。不同的权重矩阵可以用来提取图像中不同的特征,例如一个权重矩阵用来提取图像边缘信息,另一个权重矩阵用来提取图像的特定颜色,又一个权重矩阵用来对图像中不需要的噪点进行模糊化等。该多个权重矩阵尺寸(行×列)相同,经过该多个尺寸相同的权重矩阵提取后的卷积特征图的尺寸也相同,再将提取到的多个尺寸相同的卷积特征图合并形成卷积运算的输出。The convolutional layer 221 can include many convolution operators. A convolution operator is also called a kernel; its role in image processing is equivalent to a filter that extracts specific information from the input image matrix. A convolution operator can essentially be a weight matrix, which is usually predefined. In the process of performing a convolution operation on an image, the weight matrix is usually processed on the input image along the horizontal direction one pixel at a time (or two pixels at a time, depending on the value of the stride), so as to extract specific features from the image. The size of the weight matrix should be related to the size of the image. It should be noted that the depth dimension of the weight matrix is the same as the depth dimension of the input image; during the convolution operation, the weight matrix extends to the entire depth of the input image. Therefore, convolution with a single weight matrix produces a convolutional output with a single depth dimension. In most cases, however, a single weight matrix is not used; instead, multiple weight matrices of the same size (rows×columns), that is, multiple matrices of the same type, are applied. The outputs of the weight matrices are stacked to form the depth dimension of the convolutional image, where the dimension can be understood as being determined by the "multiple" mentioned above. Different weight matrices can be used to extract different features from the image. For example, one weight matrix is used to extract image edge information, another weight matrix is used to extract a specific color of the image, and yet another weight matrix is used to blur unwanted noise in the image. The multiple weight matrices have the same size (rows×columns), so the convolutional feature maps extracted by these weight matrices also have the same size; the extracted convolutional feature maps of the same size are then combined to form the output of the convolution operation.

这些权重矩阵中的权重值在实际应用中需要经过大量的训练得到,通过训练得到的权重值形成的各个权重矩阵可以用来从输入图像中提取信息,从而使得卷积神经网络200进行正确的预测。In practical applications, the weight values in these weight matrices need to be obtained through a large amount of training. Each weight matrix formed by the trained weight values can be used to extract information from the input image, so that the convolutional neural network 200 makes correct predictions.

当卷积神经网络200有多个卷积层的时候,初始的卷积层(例如221)往往提取较多的一般特征,该一般特征也可以称之为低级别的特征;随着卷积神经网络200深度的加深,越往后的卷积层(例如226)提取到的特征越来越复杂,比如高级别的语义之类的特征,语义越高的特征越适用于待解决的问题。When the convolutional neural network 200 has multiple convolutional layers, the initial convolutional layer (for example, 221) often extracts more general features, which may also be called low-level features. As the depth of the convolutional neural network 200 increases, the features extracted by later convolutional layers (for example, 226) become increasingly complex, such as high-level semantic features; features with higher-level semantics are more applicable to the problem to be solved.

池化层:Pooling layer:

由于常常需要减少训练参数的数量,因此卷积层之后常常需要周期性的引入池化层,在如图2中220所示例的221-226各层,可以是一层卷积层后面跟一层池化层,也可以是多层卷积层后面接一层或多层池化层。在图像处理过程中,池化层的唯一目的就是减少图像的空间大小。池化层可以包括平均池化算子和/或最大池化算子,以用于对输入图像进行采样得到较小尺寸的图像。平均池化算子可以在特定范围内对图像中的像素值进行计算产生平均值作为平均池化的结果。最大池化算子可以在特定范围内取该范围内值最大的像素作为最大池化的结果。另外,就像卷积层中用权重矩阵的大小应该与图像尺寸相关一样,池化层中的运算符也应该与图像的大小相关。通过池化层处理后输出的图像尺寸可以小于输入池化层的图像的尺寸,池化层输出的图像中每个像素点表示输入池化层的图像的对应子区域的平均值或最大值。Since it is often necessary to reduce the number of training parameters, a pooling layer often needs to be periodically introduced after a convolutional layer. In the layers 221-226 illustrated by 220 in FIG. 2, one convolutional layer may be followed by one pooling layer, or multiple convolutional layers may be followed by one or more pooling layers. In image processing, the sole purpose of the pooling layer is to reduce the spatial size of the image. The pooling layer may include an average pooling operator and/or a maximum pooling operator for sampling the input image to obtain an image of a smaller size. The average pooling operator can compute the pixel values in the image within a specific range to produce an average value as the result of average pooling. The maximum pooling operator can take the pixel with the largest value within a specific range as the result of maximum pooling. In addition, just as the size of the weight matrix in the convolutional layer should be related to the image size, the operators in the pooling layer should also be related to the image size. The size of the image output after processing by the pooling layer can be smaller than the size of the image input to the pooling layer, and each pixel in the image output by the pooling layer represents the average value or maximum value of the corresponding sub-region of the image input to the pooling layer.
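The maximum pooling operator described above can be sketched as follows (the 4×4 input values are assumptions): each output pixel is the maximum of the corresponding 2×2 sub-region of the input, so the spatial size is halved:

```python
def max_pool2x2(image):
    # 2x2 max pooling: take the maximum of each 2x2 sub-region.
    out = []
    for i in range(0, len(image) - 1, 2):
        row = []
        for j in range(0, len(image[0]) - 1, 2):
            row.append(max(image[i][j], image[i][j + 1],
                           image[i + 1][j], image[i + 1][j + 1]))
        out.append(row)
    return out

image = [[1, 3, 2, 4],
         [5, 0, 1, 1],
         [2, 2, 8, 0],
         [1, 6, 3, 7]]
print(max_pool2x2(image))  # [[5, 4], [6, 8]] -- 4x4 input reduced to 2x2
```

Average pooling would replace `max(...)` with the mean of the four values.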

全连接层230:Fully connected layer 230:

在经过卷积层/池化层220的处理后,卷积神经网络200还不足以输出所需要的输出信息。因为如前所述,卷积层/池化层220只会提取特征,并减少输入图像带来的参数。然而为了生成最终的输出信息(所需要的类信息或其他相关信息),卷积神经网络200需要利用全连接层230来生成一个或者一组所需要的类的数量的输出。因此,在全连接层230中可以包括多层隐含层(如图2所示的231、232至23n)以及输出层240,该多层隐含层中所包含的参数可以根据具体的任务类型的相关训练数据进行预先训练得到,例如该任务类型可以包括图像识别,图像分类,图像超分辨率重建等等。After the processing of the convolutional layer/pooling layer 220, the convolutional neural network 200 is not yet able to output the required output information. As mentioned above, the convolutional layer/pooling layer 220 only extracts features and reduces the parameters brought by the input image. However, in order to generate the final output information (the required class information or other related information), the convolutional neural network 200 needs to use the fully connected layer 230 to generate one output or a group of outputs whose quantity equals the number of required classes. Therefore, the fully connected layer 230 may include multiple hidden layers (231, 232 to 23n as shown in FIG. 2) and an output layer 240. The parameters contained in the multiple hidden layers may be obtained through pre-training based on relevant training data of a specific task type; for example, the task type may include image recognition, image classification, image super-resolution reconstruction, and so on.

在全连接层230中的多层隐含层之后,也就是整个卷积神经网络200的最后层为输出层240,该输出层240具有类似分类交叉熵的损失函数,具体用于计算预测误差,一旦整个卷积神经网络200的前向传播(如图2由210至240方向的传播为前向传播)完成,反向传播(如图2由240至210方向的传播为反向传播)就会开始更新前面提到的各层的权重值以及偏差,以减少卷积神经网络200的损失,及卷积神经网络200通过输出层输出的结果和理想结果之间的误差。After the multiple hidden layers in the fully connected layer 230, the final layer of the entire convolutional neural network 200 is the output layer 240. The output layer 240 has a loss function similar to categorical cross entropy, which is specifically used to calculate the prediction error. Once the forward propagation of the entire convolutional neural network 200 (propagation from 210 to 240 in FIG. 2 is forward propagation) is completed, back propagation (propagation from 240 to 210 in FIG. 2 is back propagation) starts to update the weight values and biases of the aforementioned layers, so as to reduce the loss of the convolutional neural network 200 and the error between the result output by the convolutional neural network 200 through the output layer and the ideal result.
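As a hedged illustration of the "loss function similar to categorical cross entropy" mentioned above (the probability vectors below are assumed): the loss is the negative log-probability that the network assigns to the correct class, so a confident correct prediction yields a small loss and a wrong one a large loss:

```python
import math

def cross_entropy(probs, label):
    # Categorical cross entropy for one sample: -log(prob of the true class).
    return -math.log(probs[label])

print(round(cross_entropy([0.7, 0.2, 0.1], 0), 4))  # correct and confident: small
print(round(cross_entropy([0.1, 0.2, 0.7], 0), 4))  # wrong: large
```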

需要说明的是,如图2所示的卷积神经网络200仅作为一种卷积神经网络的示例,在具体的应用中,卷积神经网络还可以以其他网络模型的形式存在。It should be noted that the convolutional neural network 200 shown in FIG. 2 is only used as an example of a convolutional neural network. In specific applications, the convolutional neural network may also exist in the form of other network models.

应理解,可以采用图2所示的卷积神经网络(CNN)200执行本申请实施例的行人再识别方法,如图2所示,行人图像经过输入层210、卷积层/池化层220和全连接层230的处理之后可以得到待识别图像的图像特征,后续可以根据待识别图像的图像特征再获取到待识别图像的识别结果。It should be understood that the convolutional neural network (CNN) 200 shown in FIG. 2 may be used to perform the pedestrian re-identification method of the embodiments of this application. As shown in FIG. 2, after a pedestrian image is processed by the input layer 210, the convolutional layer/pooling layer 220, and the fully connected layer 230, the image features of the to-be-recognized image can be obtained, and the recognition result of the to-be-recognized image can subsequently be obtained based on these image features.

图3为本申请实施例提供的一种芯片硬件结构,该芯片包括神经网络处理器50。该芯片可以被设置在如图1所示的执行设备110中,用以完成计算模块111的计算工作。该芯片也可以被设置在如图1所示的训练设备120中,用以完成训练设备120的训练工作并输出目标模型/规则101。如图2所示的卷积神经网络中各层的算法均可在如图3所示的芯片中得以实现。FIG. 3 is a hardware structure of a chip provided by an embodiment of the application, and the chip includes a neural network processor 50. The chip can be set in the execution device 110 as shown in FIG. 1 to complete the calculation work of the calculation module 111. The chip can also be set in the training device 120 as shown in FIG. 1 to complete the training work of the training device 120 and output the target model/rule 101. The algorithms of each layer in the convolutional neural network as shown in Fig. 2 can be implemented in the chip as shown in Fig. 3.

神经网络处理器(neural-network processing unit,NPU)50作为协处理器挂载到主中央处理器(central processing unit,CPU)(host CPU)上,由主CPU分配任务。NPU的核心部分为运算电路503,控制器504控制运算电路503提取存储器(权重存储器或输入存储器)中的数据并进行运算。A neural network processor (neural-network processing unit, NPU) 50 is mounted on a main central processing unit (central processing unit, CPU) (host CPU) as a coprocessor, and the main CPU allocates tasks. The core part of the NPU is the arithmetic circuit 503. The controller 504 controls the arithmetic circuit 503 to extract data from the memory (weight memory or input memory) and perform calculations.

在一些实现中,运算电路503内部包括多个处理单元(process engine,PE)。在一些实现中,运算电路503是二维脉动阵列。运算电路503还可以是一维脉动阵列或者能够执行例如乘法和加法这样的数学运算的其它电子线路。在一些实现中,运算电路503是通用的矩阵处理器。In some implementations, the arithmetic circuit 503 includes multiple processing units (process engines, PE). In some implementations, the arithmetic circuit 503 is a two-dimensional systolic array. The arithmetic circuit 503 may also be a one-dimensional systolic array or other electronic circuit capable of performing mathematical operations such as multiplication and addition. In some implementations, the arithmetic circuit 503 is a general-purpose matrix processor.

举例来说,假设有输入矩阵A,权重矩阵B,输出矩阵C。运算电路503从权重存储器502中取矩阵B相应的数据,并缓存在运算电路503中每一个PE上。运算电路503从输入存储器501中取矩阵A数据与矩阵B进行矩阵运算,得到的矩阵的部分结果或最终结果,保存在累加器(accumulator)508中。For example, suppose there is an input matrix A, a weight matrix B, and an output matrix C. The arithmetic circuit 503 fetches the data corresponding to matrix B from the weight memory 502 and caches it on each PE in the arithmetic circuit 503. The arithmetic circuit 503 takes the matrix A data and the matrix B from the input memory 501 to perform matrix operations, and the partial result or final result of the obtained matrix is stored in an accumulator 508.
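The matrix operation described above, where each element of the output C = A × B is built up in an accumulator from multiply-add steps, can be sketched in pure Python (the matrices are illustrative examples, not data from the patent):

```python
def matmul(A, B):
    # C = A x B: each output element is accumulated from multiply-add steps,
    # mirroring how partial results are collected in the accumulator 508.
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0  # accumulator for one output element
            for p in range(k):
                acc += A[i][p] * B[p][j]
            C[i][j] = acc
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

A systolic array performs the same multiply-accumulate steps, but distributed across many processing elements in parallel.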

向量计算单元507可以对运算电路503的输出做进一步处理,如向量乘,向量加,指数运算,对数运算,大小比较等等。例如,向量计算单元507可以用于神经网络中非卷积/非FC层的网络计算,如池化(pooling),批归一化(batch normalization),局部响应归一化(local response normalization)等。The vector calculation unit 507 can perform further processing on the output of the arithmetic circuit 503, such as vector multiplication, vector addition, exponential operations, logarithmic operations, and magnitude comparison. For example, the vector calculation unit 507 can be used for network calculations in non-convolutional/non-FC layers of the neural network, such as pooling, batch normalization, and local response normalization.

在一些实现中,向量计算单元507能将经处理的输出的向量存储到统一缓存器506。例如,向量计算单元507可以将非线性函数应用到运算电路503的输出,例如累加值的向量,用以生成激活值。在一些实现中,向量计算单元507生成归一化的值、合并值,或二者均有。在一些实现中,处理过的输出的向量能够用作到运算电路503的激活输入,例如用于在神经网络中的后续层中的使用。In some implementations, the vector calculation unit 507 can store the processed output vector into the unified buffer 506. For example, the vector calculation unit 507 can apply a nonlinear function to the output of the arithmetic circuit 503, such as a vector of accumulated values, to generate activation values. In some implementations, the vector calculation unit 507 generates normalized values, combined values, or both. In some implementations, the processed output vector can be used as an activation input to the arithmetic circuit 503, for example, for use in a subsequent layer of the neural network.

统一存储器506用于存放输入数据以及输出数据。The unified memory 506 is used to store input data and output data.

存储单元访问控制器505(direct memory access controller,DMAC)直接将外部存储器中的输入数据搬运到输入存储器501和/或统一存储器506,将外部存储器中的权重数据存入权重存储器502,以及将统一存储器506中的数据存入外部存储器。The direct memory access controller (DMAC) 505 directly transfers input data in the external memory to the input memory 501 and/or the unified memory 506, stores weight data from the external memory into the weight memory 502, and stores data from the unified memory 506 into the external memory.

总线接口单元(bus interface unit,BIU)510,用于通过总线实现主CPU、DMAC和取指存储器509之间进行交互。The bus interface unit (BIU) 510 is used to implement interaction between the main CPU, the DMAC, and the instruction fetch memory 509 through the bus.

与控制器504连接的取指存储器(instruction fetch buffer)509,用于存储控制器504使用的指令;An instruction fetch buffer 509 connected to the controller 504 is used to store instructions used by the controller 504;

控制器504,用于调用取指存储器509中缓存的指令,实现控制该运算加速器的工作过程。The controller 504 is used to invoke the instructions cached in the instruction fetch buffer 509 to control the working process of the computation accelerator.

一般地,统一存储器506,输入存储器501,权重存储器502以及取指存储器509均为片上(on-chip)存储器,外部存储器为该NPU外部的存储器,该外部存储器可以为双倍数据率同步动态随机存储器(double data rate synchronous dynamic random access memory,简称DDR SDRAM)、高带宽存储器(high bandwidth memory,HBM)或其他可读可写的存储器。Generally, the unified memory 506, the input memory 501, the weight memory 502, and the instruction fetch buffer 509 are all on-chip memories, and the external memory is a memory external to the NPU. The external memory can be a double data rate synchronous dynamic random access memory (DDR SDRAM), a high bandwidth memory (HBM), or another readable and writable memory.

另外,在本申请中,图2所示的卷积神经网络中各层的运算可以由运算电路503或向量计算单元507执行。In addition, in this application, the operations of each layer in the convolutional neural network shown in FIG. 2 may be executed by the arithmetic circuit 503 or the vector calculation unit 507.

如图4所示,本申请实施例提供了一种系统架构300。该系统架构包括本地设备301、本地设备302以及执行设备210和数据存储系统250,其中,本地设备301和本地设备302通过通信网络与执行设备210连接。As shown in FIG. 4, an embodiment of the present application provides a system architecture 300. The system architecture includes a local device 301, a local device 302, an execution device 210 and a data storage system 250, where the local device 301 and the local device 302 are connected to the execution device 210 through a communication network.

执行设备210可以由一个或多个服务器实现。可选的,执行设备210可以与其它计算设备配合使用,例如:数据存储器、路由器、负载均衡器等设备。执行设备210可以布置在一个物理站点上,或者分布在多个物理站点上。执行设备210可以使用数据存储系统250中的数据,或者调用数据存储系统250中的程序代码来实现本申请实施例的行人再识别方法。The execution device 210 may be implemented by one or more servers. Optionally, the execution device 210 can be used in conjunction with other computing devices, such as data storage, routers, load balancers, and other devices. The execution device 210 may be arranged on one physical site or distributed on multiple physical sites. The execution device 210 may use the data in the data storage system 250 or call the program code in the data storage system 250 to implement the pedestrian re-identification method in the embodiment of the present application.

用户可以操作各自的用户设备(例如本地设备301和本地设备302)与执行设备210进行交互。每个本地设备可以表示任何计算设备,例如个人计算机、计算机工作站、智能手机、平板电脑、智能摄像头、智能汽车或其他类型蜂窝电话、媒体消费设备、可穿戴设备、机顶盒、游戏机等。The user can operate respective user devices (for example, the local device 301 and the local device 302) to interact with the execution device 210. Each local device can represent any computing device, such as personal computers, computer workstations, smart phones, tablets, smart cameras, smart cars or other types of cellular phones, media consumption devices, wearable devices, set-top boxes, game consoles, etc.

每个用户的本地设备可以通过任何通信机制/通信标准的通信网络与执行设备210进行交互,通信网络可以是广域网、局域网、点对点连接等方式,或它们的任意组合。The local device of each user can interact with the execution device 210 through a communication network of any communication mechanism/communication standard. The communication network can be a wide area network, a local area network, a point-to-point connection, etc., or any combination thereof.

在一种实现方式中,本地设备301、本地设备302从执行设备210获取到目标神经网络的相关参数,将目标神经网络部署在本地设备301、本地设备302上,利用该目标神经网络进行行人再识别。In an implementation manner, the local device 301 and the local device 302 obtain the relevant parameters of the target neural network from the execution device 210, deploy the target neural network on the local device 301 and the local device 302, and use the target neural network to perform pedestrian reconstruction. Recognition.

在另一种实现中,执行设备210上可以直接部署目标神经网络,执行设备210从本地设备301和本地设备302获取行人图像(本地设备301和本地设备302可以将行人图像上传给执行设备210),根据目标神经网络对行人图像进行行人再识别,并将行人再识别得到的高质量图像发送给本地设备301和本地设备302。In another implementation, the target neural network can be directly deployed on the execution device 210. The execution device 210 obtains pedestrian images from the local device 301 and the local device 302 (the local device 301 and the local device 302 can upload pedestrian images to the execution device 210), performs pedestrian re-identification on the pedestrian images according to the target neural network, and sends the high-quality images obtained by the pedestrian re-identification to the local device 301 and the local device 302.

上述执行设备210也可以称为云端设备,此时执行设备210一般部署在云端。The above-mentioned execution device 210 may also be referred to as a cloud device. At this time, the execution device 210 is generally deployed in the cloud.

图5是本申请实施例的一种可能的应用场景的示意图。Fig. 5 is a schematic diagram of a possible application scenario of an embodiment of the present application.

如图5所示,在本申请中可以通过单图像拍摄设备标注数据对行人再识别网络进行训练,得到训练好的行人再识别网络,该训练好的行人再识别网络可以对行人图像进行处理,得到行人图像的特征向量,接下来,通过将该行人图像的特征向量与图像库中的特征向量进行特征比对,就可以得到要寻找的人。具体地,通过特征比对可以寻找到与行人图像的特征向量最相似的目标行人图像,并输出目标行人图像的拍摄时间、位置等基本信息。As shown in Figure 5, in this application, the pedestrian re-recognition network can be trained through the single-image shooting device annotation data, and a trained pedestrian re-recognition network can be obtained. The trained pedestrian re-recognition network can process pedestrian images. Obtain the feature vector of the pedestrian image. Next, by comparing the feature vector of the pedestrian image with the feature vector in the image library, the person you are looking for can be obtained. Specifically, through feature comparison, the target pedestrian image that is most similar to the feature vector of the pedestrian image can be found, and basic information such as the shooting time and location of the target pedestrian image can be output.

应理解,图像库中保存的有各个行人图像的特征向量以及行人图像对应的行人的相关信息。It should be understood that the feature vector of each pedestrian image and related information of the pedestrian corresponding to the pedestrian image are stored in the image library.
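上述特征比对过程可以用如下示意代码说明(以NumPy实现;函数名与数据结构均为示意性假设,并非本申请限定的实现方式)。The feature comparison process described above can be illustrated by the following sketch (NumPy; the function names and data structures are illustrative assumptions, not a definitive implementation):

```python
import numpy as np

def retrieve_most_similar(query_feat, gallery_feats, gallery_info):
    # 计算查询图像的特征向量与图像库中每个特征向量的欧式距离
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    best = int(np.argmin(dists))  # 距离最近的即最相似的目标行人图像
    return gallery_info[best], float(dists[best])

# 示例: 图像库中3个行人图像的特征向量及其拍摄时间、位置等基本信息
gallery_feats = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
gallery_info = [{"time": "08:00", "camera": 1},
                {"time": "08:05", "camera": 2},
                {"time": "08:10", "camera": 3}]
info, d = retrieve_most_similar(np.array([0.92, 0.08]), gallery_feats, gallery_info)
print(info["camera"])  # 3
```

实际系统中特征维度通常更高(例如2048维),并可采用近似最近邻检索加速比对。In a real system the feature dimension is usually much higher (e.g. 2048), and approximate nearest-neighbor search can be used to speed up the comparison.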

应理解,这里的单图像拍摄设备标注数据可以包括多个训练图像和多个训练图像的标注数据。单图像拍摄设备标注数据是指针对每个图像拍摄设备获取到的训练图像进行单独标注,而不需要在不同的图像拍摄设备之间寻找是否出现了相同的行人,这种标注方式不用关注不同图像拍摄设备拍摄到的训练图像之间的关系,可以节省大量的标记时间,减少标注的复杂度。上述多个训练图像和多个训练图像的标注数据也可以统称为训练数据。It should be understood that the annotation data of the single-image shooting device herein may include multiple training images and annotation data of multiple training images. The single-image capturing device labeling data refers to the individual labeling of the training images obtained by each image capturing device, without the need to search for the same pedestrians between different image capturing devices. This labeling method does not need to pay attention to different images. The relationship between the training images captured by the shooting device can save a lot of labeling time and reduce the complexity of labeling. The multiple training images and the labeled data of the multiple training images may also be collectively referred to as training data.

图6是本申请实施例的行人再识别网络的训练方法的总体流程示意图。FIG. 6 is a schematic diagram of the overall flow of the training method of the pedestrian re-identification network according to an embodiment of the present application.

如图6所示,通过对每个图像拍摄设备获取到的视频图像进行单独的数据标注,可以得到单图像拍摄设备标注数据。单图像拍摄设备标注数据的最大优点就是易于标注和收集,在本申请中,单图像拍摄设备标注数据并不要求同一个行人在多个图像拍摄设备下出现。As shown in FIG. 6, by individually annotating the video images acquired by each image capturing device, the single-image capturing device annotation data can be obtained. The biggest advantage of single-image capturing device annotation data is that it is easy to annotate and collect. In this application, single-image capturing device annotation data does not require the same pedestrian to appear under multiple image capturing devices.

在单图像拍摄设备标注数据中,假设每个行人只在一个图像拍摄设备(或一个图像拍摄设备组)中出现,这样,利用行人检测跟踪在视频中得到其行人图像后,只需要很少的人力就可以将相近的帧中同一个人的几张图关联起来,形成标注。而且每个图像拍摄设备的标注是相对独立的,不同的图像拍摄设备下行人编号不会有重叠。通过为不同图像拍摄设备设置不同的采集时间段,可以使得每个图像拍摄设备采集的视频中重复出现的人数很少,从而达成单图像拍摄设备标注数据的要求。In the single-image capture device annotation data, it is assumed that each pedestrian only appears in one image capture device (or one image capture device group). In this way, after pedestrian detection and tracking are used to obtain the pedestrian image in the video, only a small amount is required. Human power can associate several pictures of the same person in similar frames to form annotations. Moreover, the labels of each image capturing device are relatively independent, and there will be no overlap in the number of pedestrians of different image capturing devices. By setting different collection time periods for different image capturing devices, the number of people recurring in the video captured by each image capturing device can be reduced, thereby achieving the requirement of labeling data for a single image capturing device.

在某些比较小的场景(例如,一个办公园区)中,大多数人本身活动范围小,相当多的人只在某一个图像拍摄设备组出现,这样的数据能够天然满足这项要求。由于一个图像拍摄设备组中的相机视野相近或有重叠,光照条件也相似,这些相机基本可以等效于一个摄像机。In some relatively small scenes (for example, an office park), most people have a small range of activities, and a considerable number of people only appear in a certain image capturing device group. Such data can naturally meet this requirement. Since the cameras in an image capturing device group have similar or overlapping fields of view and similar lighting conditions, these cameras can basically be equivalent to one camera.

在得到单图像拍摄设备标注数据之后就可以利用该单图像拍摄设备标注数据进行对行人再识别网络(模型)进行训练,训练得到的行人再识别网络就可以用于测试和部署了。具体地,训练得到的行人再识别网络就可以用于执行本申请实施例的行人再识别方法。After the single-image shooting device annotation data is obtained, the single-image shooting device annotation data can be used to train the pedestrian re-recognition network (model), and the trained pedestrian re-recognition network can be used for testing and deployment. Specifically, the pedestrian re-identification network obtained by training can be used to implement the pedestrian re-identification method of the embodiment of the present application.

图7是本申请实施例的行人再识别网络的训练方法的示意性流程图。图7所示的方法可以由本申请实施例的行人再识别网络的训练装置来执行(例如,可以由图10和图11所示的装置来执行),图7所示的方法包括步骤1001至1008,下面对这些步骤进行详细的介绍。FIG. 7 is a schematic flowchart of a training method for a pedestrian re-identification network according to an embodiment of the present application. The method shown in FIG. 7 may be executed by the training apparatus for a pedestrian re-identification network of the embodiment of the present application (for example, by the apparatus shown in FIG. 10 and FIG. 11). The method shown in FIG. 7 includes steps 1001 to 1008. These steps are described in detail below.

1001、开始。1001. Start.

步骤1001表示开始行人再识别网络的训练过程。Step 1001 represents the start of the training process of the pedestrian re-recognition network.

1002、获取训练数据。1002. Obtain training data.

上述步骤1002中的训练数据包括M(M为大于1的整数)个训练图像以及M个训练图像的标注数据,其中,在M个训练图像中,每个训练图像包括行人,每个训练图像的标注数据包括每个训练图像中的行人所在的包围框和行人标识信息,不同的行人对应不同的行人标识信息,在M个训练图像中,具有相同的行人标识信息的训练图像来自于同一图像拍摄设备。The training data in the above step 1002 includes M (M is an integer greater than 1) training images and annotation data of the M training images. Among the M training images, each training image includes a pedestrian, and the annotation data of each training image includes the bounding box of the pedestrian in the training image and pedestrian identification information, where different pedestrians correspond to different pedestrian identification information. Among the M training images, training images with the same pedestrian identification information come from the same image capturing device.
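上述标注数据的组织形式可以用如下示意的数据结构表示(字段名为假设,仅用于说明)。The organization of the above annotation data can be represented by the following illustrative data structure (the field names are assumptions, for illustration only):

```python
from dataclasses import dataclass

@dataclass
class TrainingAnnotation:
    image_path: str      # 训练图像
    bbox: tuple          # 行人所在的包围框, 例如 (x, y, w, h)
    pedestrian_id: int   # 行人标识信息, 不同的行人对应不同的标识
    camera_id: int       # 图像拍摄设备编号; 相同行人标识的训练图像来自同一设备

# 示例: 某一图像拍摄设备下的一条标注记录
ann = TrainingAnnotation("cam1/000123.jpg", (34, 10, 64, 128),
                         pedestrian_id=7, camera_id=1)
```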

上述图像拍摄设备具体可以是摄像机、照相机等能够获取行人图像的设备。The aforementioned image capturing device may specifically be a device capable of acquiring images of pedestrians, such as a video camera and a camera.

上述步骤1002中的行人标识信息也可以称为行人身份标识信息,是用于表示标识行人身份的一种信息,每个行人可以对应唯一的行人标识信息,该行人标识信息的表示方式有多种,只要能够指示行人的身份信息即可,例如,该行人标识信息具体可以是行人身份(identity,ID),也就是说,可以为每一个行人分配一个唯一的ID。The pedestrian identification information in the above step 1002 can also be referred to as pedestrian identification information, which is a kind of information used to indicate the identity of a pedestrian. Each pedestrian can correspond to unique pedestrian identification information. There are many ways to express the pedestrian identification information. , As long as the identity information of the pedestrian can be indicated, for example, the pedestrian identification information may specifically be a pedestrian identity (identity, ID), that is, a unique ID can be assigned to each pedestrian.

1003、对行人再识别网络的网络参数进行初始化处理,以得到行人再识别网络的网络参数的初始值。1003. Perform initialization processing on the network parameters of the pedestrian re-identification network to obtain initial values of the network parameters of the pedestrian re-identification network.

上述步骤1003中可以随机设置行人再识别网络的网络参数,得到行人再识别网络的网络参数的初始值。In the above step 1003, the network parameters of the pedestrian re-identification network can be randomly set, and the initial values of the network parameters of the pedestrian re-identification network are obtained.

1004、将M个训练图像中的一批训练图像输入到行人再识别网络进行特征提取,得到一批训练图像中的每个训练图像的特征向量。1004. Input a batch of training images among the M training images to the pedestrian recognition network to perform feature extraction, and obtain a feature vector of each training image in the batch of training images.

上述一批训练图像是M个训练图像中的部分训练图像,在采用M个训练图像对行人再识别网络进行训练时,可以将M个训练图像分成不同的批次对行人再识别网络进行训练,每个批次的训练图像的数目可以相同也可以不同。The above batch of training images is part of the training images in M training images. When M training images are used to train the pedestrian re-recognition network, the M training images can be divided into different batches to train the pedestrian re-recognition network. The number of training images in each batch can be the same or different.

例如,共有5000个训练图像,可以在每个批次输入100个训练图像对行人再识别网络进行训练。For example, there are 5000 training images in total, and 100 training images can be input in each batch to train the pedestrian re-recognition network.

上述一批训练图像可以包括N个锚点图像,其中,该N个锚点图像是上述一批训练图像中的任意N个训练图像,该N个锚点图像中的每个锚点图像对应一个最难正样本图像,一个第一最难负样本图像和一个第二最难负样本图像,N为正整数,并且N小于M。The foregoing batch of training images may include N anchor point images, where the N anchor point images are any N training images in the foregoing batch of training images, and each anchor point image in the N anchor point images corresponds to one The most difficult positive sample image, a first most difficult negative sample image and a second most difficult negative sample image, N is a positive integer, and N is less than M.

下面对每个锚点图像对应的最难正样本图像,第一最难负样本图像和第二最难负样本图像进行说明。The following describes the most difficult positive sample image corresponding to each anchor point image, the first most difficult negative sample image, and the second most difficult negative sample image.

每个锚点图像对应的最难正样本图像:上述一批训练图像中与每个锚点图像的行人标识信息相同,并且与每个锚点图像的特征向量之间的距离最远的训练图像;The most difficult positive sample image corresponding to each anchor point image: the training image that has the same pedestrian identification information as each anchor point image and the farthest distance from the feature vector of each anchor point image in the above batch of training images ;

每个锚点图像对应的第一最难负样本图像:上述一批训练图像中与每个锚点图像来自于同一图像拍摄设备,并与每个锚点图像的行人标识信息不同且与每个锚点图像的特征向量之间的距离最近的训练图像;The first hardest negative sample image corresponding to each anchor image: the training image in the above batch that comes from the same image capturing device as the anchor image, has pedestrian identification information different from that of the anchor image, and whose feature vector is closest to the feature vector of the anchor image;

每个锚点图像对应的第二最难负样本图像:上述一批训练图像中与每个锚点图像来自不同图像拍摄设备,并与每个锚点图像的行人标识信息不同且与每个锚点图像的特征向量之间的距离最近的训练图像。The second hardest negative sample image corresponding to each anchor image: the training image in the above batch that comes from a different image capturing device than the anchor image, has pedestrian identification information different from that of the anchor image, and whose feature vector is closest to the feature vector of the anchor image.
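对一个批次内的锚点图像,上述三类样本的选取过程可以用如下NumPy示意代码表示(函数名与变量名均为示意性假设)。For an anchor image within a batch, the selection of the above three kinds of samples can be sketched as follows (NumPy; function and variable names are illustrative assumptions):

```python
import numpy as np

def mine_hard_samples(feats, pids, camids, a):
    # feats: 批次内各训练图像的特征向量; pids: 行人标识; camids: 图像拍摄设备编号
    d = np.linalg.norm(feats - feats[a], axis=1)    # 与锚点图像特征的欧式距离
    same_pid = pids == pids[a]
    same_cam = camids == camids[a]
    pos = same_pid.copy()
    pos[a] = False                                  # 正样本: 行人标识相同(除锚点自身)
    neg_intra = ~same_pid & same_cam                # 第一最难负样本候选: 同一设备、不同行人
    neg_inter = ~same_pid & ~same_cam               # 第二最难负样本候选: 不同设备、不同行人
    hardest_pos = int(np.where(pos)[0][np.argmax(d[pos])])                # 距离最远
    hardest_intra = int(np.where(neg_intra)[0][np.argmin(d[neg_intra])])  # 距离最近
    hardest_inter = int(np.where(neg_inter)[0][np.argmin(d[neg_inter])])  # 距离最近
    return hardest_pos, hardest_intra, hardest_inter

# 示例: 5张训练图像, 锚点为第0张
feats = np.array([[0., 0.], [1., 0.], [0., 2.], [3., 0.], [0., 1.]])
pids = np.array([0, 0, 0, 1, 2])
camids = np.array([1, 1, 2, 1, 2])
print(mine_hard_samples(feats, pids, camids, a=0))  # (2, 3, 4)
```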

1005、根据上述一批训练图像的特征向量确定损失函数的函数值。1005. Determine the function value of the loss function according to the feature vector of the above batch of training images.

上述步骤1005中的损失函数的函数值是N个第一损失函数的函数值经过平均处理得到的。The function value of the loss function in the above step 1005 is obtained by averaging the function values of the N first loss functions.

其中,上述N个第一损失函数中的每个第一损失函数的函数值是根据N个锚点图像中的每个锚点图像对应的第一差值和第二差值计算得到的。Wherein, the function value of each first loss function in the above N first loss functions is calculated according to the first difference and the second difference corresponding to each of the N anchor point images.

可选地,上述每个第一损失函数的函数值是每个锚点图像对应的第一差值和第二差值的和。Optionally, the function value of each of the foregoing first loss functions is the sum of the first difference and the second difference corresponding to each anchor point image.

上述N为正整数,上述N小于M。当N=1时,只有一个第一损失函数的函数值,此时可以直接将该第一损失函数的函数值作为步骤1005中的损失函数的函数值。The above N is a positive integer, and the above N is less than M. When N=1, there is only one function value of the first loss function. At this time, the function value of the first loss function can be directly used as the function value of the loss function in step 1005.

例如,第一损失函数的函数值可以如公式(2)所示。For example, the function value of the first loss function may be as shown in formula (2).

L1=D1+D2     (2)L1=D1+D2 (2)

其中,L1表示第一损失函数的函数值,D1表示上述第一差值,D2表示上述第二差值。Wherein, L1 represents the function value of the first loss function, D1 represents the above-mentioned first difference value, and D2 represents the above-mentioned second difference value.

可选地,上述每个第一损失函数的函数值是每个锚点图像对应的第一差值的绝对值和第二差值绝对值的和。Optionally, the function value of each of the foregoing first loss functions is the sum of the absolute value of the first difference and the absolute value of the second difference corresponding to each anchor point image.

例如,第一损失函数的函数值可以如公式(3)所示。For example, the function value of the first loss function may be as shown in formula (3).

L1=|D1|+|D2|    (3)L1=|D1|+|D2| (3)

其中,L1表示第一损失函数的函数值,|D1|表示上述第一差值的绝对值,|D2|表示上述第二差值的绝对值。Wherein, L1 represents the function value of the first loss function, |D1| represents the absolute value of the above-mentioned first difference, and |D2| represents the absolute value of the above-mentioned second difference.

可选地,上述每个第一损失函数的函数值是每个锚点图像对应的第一差值、第二差值和其他常数项的和。Optionally, the function value of each of the foregoing first loss functions is the sum of the first difference, the second difference, and other constant items corresponding to each anchor point image.

例如,第一损失函数的函数值可以如公式(4)所示。For example, the function value of the first loss function may be as shown in formula (4).

L1=D1+D2+m    (4)L1=D1+D2+m (4)

其中,L1表示第一损失函数的函数值,D1表示上述第一差值,D2表示上述第二差值,m表示常数,m的大小可以根据经验来设置合适的数值。Wherein, L1 represents the function value of the first loss function, D1 represents the above-mentioned first difference value, D2 represents the above-mentioned second difference value, m represents a constant, and the size of m can be set to an appropriate value based on experience.

再如,第一损失函数的函数值可以如公式(5)所示。For another example, the function value of the first loss function may be as shown in formula (5).

L1=|m 1+D1|+|m 2+D2|      (5) L1=|m 1 +D1|+|m 2 +D2| (5)

其中,L1表示第一损失函数的函数值,D1表示上述第一差值,D2表示上述第二差值,m 1和m 2表示常数,m 1和m 2的大小可以根据经验来设置合适的数值。Wherein, L1 represents the function value of the first loss function, D1 represents the above-mentioned first difference, D2 represents the above-mentioned second difference, and m 1 and m 2 represent constants whose values can be set appropriately based on experience.

应理解,上文中在计算第一损失函数的函数值时对D1和D2求绝对值只是一种可选的实现方式,实际上在确定第一损失函数的函数值时还可以对D1和D2进行其他操作,例如,可以对D1和D2进行[X] +操作(该操作可以称为对函数值取正部的操作)。 It should be understood that calculating the absolute value of D1 and D2 when calculating the function value of the first loss function is only an optional implementation method. In fact, when determining the function value of the first loss function, D1 and D2 can also be calculated. Other operations, for example, [X] + operation can be performed on D1 and D2 (this operation can be referred to as the operation of taking the positive part of the function value).

其中,当X大于0时,[X] +=X,而当X小于0时,[X] +=0。(具体可以参见https://en.wikipedia.org/wiki/Positive_and_negative_parts) Among them, when X is greater than 0, [X] + =X, and when X is less than 0, [X] + = 0. (For details, please refer to https://en.wikipedia.org/wiki/Positive_and_negative_parts)

下面对第一差值和第二差值的含义进行说明。The meaning of the first difference and the second difference will be described below.

每个锚点图像对应的第一差值:每个锚点图像对应的最难正样本距离与每个锚点图像对应的第二最难负样本距离的差;The first difference corresponding to each anchor image: the difference between the distance of the most difficult positive sample corresponding to each anchor image and the distance of the second most difficult negative sample corresponding to each anchor image;

每个锚点图像对应的第二差值:每个锚点图像对应的第二最难负样本距离与每个锚点图像对应的第一最难负样本距离的差。The second difference value corresponding to each anchor point image: the difference between the distance of the second most difficult negative sample corresponding to each anchor point image and the distance of the first most difficult negative sample corresponding to each anchor point image.

上述第一差值和第二差值是对不同的距离进行做差得到的。下面对这些距离的含义进 行说明。The above-mentioned first difference and second difference are obtained by performing difference on different distances. The meaning of these distances is explained below.

每个锚点图像对应的最难正样本距离:每个锚点图像对应的最难正样本图像的特征向量与每个锚点图像的特征向量的距离;The distance of the most difficult positive sample corresponding to each anchor point image: the distance between the feature vector of the most difficult positive sample image corresponding to each anchor point image and the feature vector of each anchor point image;

每个锚点图像对应的第二最难负样本距离:每个锚点图像对应的第二最难负样本图像的特征向量与每个锚点图像的特征向量的距离;The second most difficult negative sample distance corresponding to each anchor point image: the distance between the feature vector of the second most difficult negative sample image corresponding to each anchor point image and the feature vector of each anchor point image;

每个锚点图像对应的第一最难负样本距离:每个锚点图像对应的第一最难负样本图像的特征向量与每个锚点图像的特征向量的距离。The distance of the first most difficult negative sample corresponding to each anchor point image: the distance between the feature vector of the first most difficult negative sample image corresponding to each anchor point image and the feature vector of each anchor point image.

具体地,假设训练过程中一个批次的图像对应的图像拍摄设备数为C,每个图像拍摄设备下的行人数为P,每个行人的图像数为K,那么,一个批次的图像数就是C×P×K。记该批次的图像中的锚点图像为x a,其行人标识为y a、其所属的图像拍摄设备为c a;记f x为网络模型对图像x输出的特征(向量),记||f 1-f 2||为两个特征f 1和f 2的欧式距离。那么,上述最难正样本距离可以如公式(6)所示。Specifically, assuming that, during training, a batch of images corresponds to C image capturing devices, with P pedestrians under each device and K images per pedestrian, the number of images in a batch is C×P×K. Let x a denote an anchor image in the batch, y a its pedestrian identification, and c a its image capturing device; let f x denote the feature (vector) output by the network model for an image x, and let ||f 1-f 2|| denote the Euclidean distance between two features f 1 and f 2. Then, the above-mentioned hardest positive sample distance can be expressed as formula (6).

d p(a)=max{||f a-f x||:y x=y a,x≠a}      (6)

上述第二最难负样本距离可以如公式(7)所示。The above-mentioned second hardest negative sample distance can be expressed as formula (7).

d inter(a)=min{||f a-f x||:y x≠y a,c x≠c a}      (7)

上述第一最难负样本距离可以如公式(8)所示。The above-mentioned first hardest negative sample distance can be expressed as formula (8).

d intra(a)=min{||f a-f x||:y x≠y a,c x=c a}      (8)

上述第一差值可以是公式(6)与公式(7)的差,上述第二差值可以是公式(7)与公式(8)的差。The above-mentioned first difference value may be the difference between formula (6) and formula (7), and the above-mentioned second difference value may be the difference between formula (7) and formula (8).

由上述第一差值和第二差值构成的损失函数可以如公式(9)所示。其中,L表示损失函数,d p(a)、d inter(a)和d intra(a)分别表示锚点图像a对应的最难正样本距离、第二最难负样本距离和第一最难负样本距离,求和取遍该批次中的N个锚点图像,m 1和m 2是两个常量,具体的取值可以根据经验来设置。例如,m 1=0.1,m 2=0.1。The loss function composed of the above-mentioned first difference and second difference can be as shown in formula (9), where L represents the loss function, d p(a), d inter(a) and d intra(a) respectively represent the hardest positive sample distance, the second hardest negative sample distance and the first hardest negative sample distance corresponding to anchor image a, the summation runs over the N anchor images in the batch, and m 1 and m 2 are two constants whose specific values can be set based on experience, for example, m 1=0.1 and m 2=0.1.

L=(1/N)Σ a([d p(a)-d inter(a)+m 1] + +[d inter(a)-d intra(a)+m 2] + )      (9)

上述公式(9)中,[X] +表示对X进行取正部的操作:对第一项[d p(a)-d inter(a)+m 1] +,当括号内的取值大于或者等于0时,该项的取值就是括号内的值,当括号内的取值小于0时,该项的取值就是0;对第二项[d inter(a)-d intra(a)+m 2] +同理。In the above formula (9), [X] + denotes taking the positive part of X: for the first term [d p(a)-d inter(a)+m 1] +, when the value inside the brackets is greater than or equal to 0, the value of the term is that value, and when the value inside the brackets is less than 0, the value of the term is 0; the same applies to the second term [d inter(a)-d intra(a)+m 2] +.
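按照上述定义,单个锚点的第一损失函数以及整批损失的计算可以示意如下(假设m 1=m 2=0.1;函数名为示意)。Under the above definitions, the first loss for a single anchor and the averaged batch loss can be sketched as follows (assuming m 1=m 2=0.1; function names are illustrative):

```python
def positive_part(x):
    return max(x, 0.0)              # [X]+ 取正部操作

def first_loss(d_pos, d_inter, d_intra, m1=0.1, m2=0.1):
    # d_pos: 最难正样本距离; d_inter: 第二最难负样本距离; d_intra: 第一最难负样本距离
    return positive_part(d_pos - d_inter + m1) + positive_part(d_inter - d_intra + m2)

def batch_loss(triples, m1=0.1, m2=0.1):
    # 对N个锚点的第一损失函数的函数值进行平均, 得到步骤1005中的损失函数的函数值
    values = [first_loss(dp, de, di, m1, m2) for dp, de, di in triples]
    return sum(values) / len(values)

# 示例: 两个锚点各自对应的(最难正样本距离, 第二最难负样本距离, 第一最难负样本距离)
loss = batch_loss([(1.0, 1.5, 1.2), (2.0, 1.8, 2.1)])  # ≈0.35
```

训练时即通过最小化该值来更新行人再识别网络的网络参数。During training, the network parameters of the pedestrian re-identification network are updated by minimizing this value.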

1006、根据损失函数的函数值对行人再识别网络的网络参数进行更新。1006. Update the network parameters of the pedestrian re-identification network according to the function value of the loss function.

具体地,可以根据上述公式(9)所示的损失函数的函数值对行人再识别网络的网络参数进行更新。并且在更新的过程中使得公式(9)所示的损失函数的函数值越来越小。Specifically, the network parameters of the pedestrian re-identification network can be updated according to the function value of the loss function shown in the above formula (9). And in the process of updating, the function value of the loss function shown in formula (9) is getting smaller and smaller.

1007、确定行人再识别网络是否满足预设要求。1007. Determine whether the pedestrian re-identification network meets the preset requirements.

可选地,行人再识别网络满足预设要求,包括:行人再识别网络满足下列条件中的至少一种:Optionally, the pedestrian re-identification network meets preset requirements, including: the pedestrian re-identification network meets at least one of the following conditions:

(1)行人再识别网络的行人识别性能满足预设性能要求;(1) The pedestrian recognition performance of the pedestrian re-identification network meets the preset performance requirements;

(2)行人再识别网络的网络参数的更新次数大于或者等于预设次数;(2) The update times of the network parameters of the pedestrian re-identification network are greater than or equal to the preset times;

(3)损失函数的函数值小于或者等于预设阈值。(3) The function value of the loss function is less than or equal to the preset threshold.

在步骤1007中,当行人再识别网络满足上述条件(1)至(3)中的至少一个时,可以确定行人再识别网络满足预设要求,执行步骤1008,行人再识别网络的训练过程结束;而当行人再识别网络不满足上述条件(1)至(3)中的任意一个时,说明行人再识别网络尚未满足预设要求,需要继续对行人再识别网络进行训练,也就是重新执行步骤1004至1007,直到得到满足预设要求的行人再识别网络。In step 1007, when the pedestrian re-identification network meets at least one of the above conditions (1) to (3), it can be determined that the network meets the preset requirements, step 1008 is executed, and the training process ends. When the pedestrian re-identification network does not meet any of the above conditions (1) to (3), it means that the network has not yet met the preset requirements and needs further training, that is, steps 1004 to 1007 are re-executed until a pedestrian re-identification network meeting the preset requirements is obtained.

上述预设阈值可以根据经验来灵活设置,当预设阈值设置的过大时,训练得到的行人再识别网络的行人识别效果可能不够好,而当预设阈值设置的过小时,训练时损失函数的函数值可能难以收敛。The above preset threshold can be flexibly set based on experience. When the preset threshold is set too large, the pedestrian recognition effect of the trained pedestrian re-identification network may not be good enough; when it is set too small, the function value of the loss function may be difficult to converge during training.

可选地,上述预设阈值的取值范围为[0,0.01]。Optionally, the value range of the foregoing preset threshold is [0, 0.01].

具体地,上述预设阈值的取值可以为0.01。Specifically, the value of the foregoing preset threshold may be 0.01.

上述损失函数的函数值小于或者等于预设阈值,具体包括:第一差值小于第一预设阈值,第二差值小于第二预设阈值。The function value of the aforementioned loss function is less than or equal to the preset threshold, which specifically includes: the first difference is less than the first preset threshold, and the second difference is less than the second preset threshold.

上述第一预设阈值和第二预设阈值也可以根据经验来确定,当第一预设阈值和第二预设阈值设置的过大时训练得到的行人再识别网络的行人识别效果可能不够好,而当第一预设阈值和第二预设阈值设置的过小时在训练时损失函数的函数值可能难以收敛。The above-mentioned first preset threshold and second preset threshold can also be determined based on experience. When the first preset threshold and the second preset threshold are set too large, the pedestrian recognition effect of the trained pedestrian re-recognition network may not be good enough , And when the first preset threshold and the second preset threshold are set too small, the function value of the loss function may be difficult to converge during training.

可选地,上述第一预设阈值的取值范围为[0,0.4]。Optionally, the value range of the foregoing first preset threshold is [0, 0.4].

可选地,上述第二预设阈值的取值范围为[0,0.4]。Optionally, the value range of the foregoing second preset threshold is [0, 0.4].

具体地,上述第一预设阈值和上述第二预设阈值均可以取0.1。Specifically, both the first preset threshold and the second preset threshold may be 0.1.

1008、训练结束。1008. The training is over.

另外,在本申请中,几个训练图像来自于同一图像拍摄设备是指这几个训练图像是通过同一个图像拍摄设备进行拍摄得到的。In addition, in this application, that several training images are from the same image capturing device means that these several training images are captured by the same image capturing device.

本申请中,在构造损失函数的过程中考虑到了来自于不同图像拍摄设备和相同图像拍摄设备的最难负样本图像,并在训练过程中使得第一差值和第二差值尽可能的减小,从而能够尽可能的消除图像拍摄设备本身信息对图像信息的干扰,使得训练出来的行人再识别网络能够更准确的从图像中进行特征的提取。In this application, the most difficult negative sample images from different image capturing devices and the same image capturing device are considered in the process of constructing the loss function, and the first difference and the second difference are reduced as much as possible during the training process. Small, which can eliminate as much as possible the interference of the image capturing device's own information on the image information, so that the trained pedestrian re-recognition network can more accurately extract features from the image.

具体地,在对行人再识别网络的训练过程中,通过优化行人再识别网络的网络参数使得第一差值和第二差值尽可能的小,从而使得最难正样本距离与第二最难负样本距离的差以及第二最难负样本距离和第一最难负样本距离的差尽可能的小,进而使得行人再识别网络能够尽可能的区分开最难正样本图像与第二最难负样本图像的特征,以及第二最难负样本图像与第一最难负样本图像的特征,从而使得训练出来的行人再识别网络能够更好更准确地对图像进行特征提取。Specifically, during the training of the pedestrian re-identification network, the network parameters are optimized so that the first difference and the second difference are as small as possible, i.e., so that the difference between the hardest positive sample distance and the second hardest negative sample distance, and the difference between the second hardest negative sample distance and the first hardest negative sample distance, are as small as possible. This enables the network to distinguish, as much as possible, the features of the hardest positive sample image from those of the second hardest negative sample image, and the features of the second hardest negative sample image from those of the first hardest negative sample image, so that the trained pedestrian re-identification network can extract features from images better and more accurately.

下面结合图8对上述步骤1004和步骤1005中根据一批训练图像确定损失函数的函数值过程进行详细说明。The process of determining the function value of the loss function according to a batch of training images in the above step 1004 and step 1005 will be described in detail below with reference to FIG. 8.

如图8所示,将一批训练图像输入到行人再识别网络之后,可以得到这一批训练图像的特征向量。接下来,可以从这一批训练图像中选择出多个锚点图像,并为该多个锚点图像中的每个锚点图像确定对应的最难正样本图像、第一最难负样本图像和第二最难负样本 图像。As shown in Figure 8, after inputting a batch of training images into the pedestrian recognition network, the feature vectors of this batch of training images can be obtained. Next, you can select multiple anchor point images from this batch of training images, and determine the corresponding most difficult positive sample image and the first most difficult negative sample image for each of the multiple anchor point images. And the second most difficult negative sample image.

这样就可以得到很多个由四个训练图像(分别是锚点图像、锚点图像对应的最难正样本图像、锚点图像对应的第一最难负样本图像和锚点图像对应的第二最难负样本图像)组成的训练图像组,然后根据每个训练图像组中的训练图像的特征向量之间的距离关系可以确定一个第一损失函数。In this way, many training image groups can be obtained, each consisting of four training images (an anchor image, the hardest positive sample image corresponding to the anchor image, the first hardest negative sample image corresponding to the anchor image, and the second hardest negative sample image corresponding to the anchor image). Then, a first loss function can be determined according to the distance relationship between the feature vectors of the training images in each training image group.

如图8所示,一共有N个训练图像组,根据该N个训练图像组一共可以确定N个第一损失函数,接下来,对这N个第一损失函数的函数值进行平均处理,就可以得到上述步骤1005中的损失函数的函数值。As shown in Figure 8, there are a total of N training image groups. According to the N training image groups, a total of N first loss functions can be determined. Next, the function values of these N first loss functions are averaged, and then The function value of the loss function in the above step 1005 can be obtained.

应理解,上述N个训练图像组共包含N个锚点图像,该N个锚点图像各不相同,也就是说,每个训练图像组对应唯一的一个锚点图像。但是,不同的训练图像组中包含的其他训练图像(除锚点图像之外的其他图像)可以相同。例如,第一个训练图像组中的最难正样本图像与第二个训练组中的最难正样本图像相同。It should be understood that the foregoing N training image groups include a total of N anchor point images, and the N anchor point images are different from each other, that is, each training image group corresponds to a unique anchor point image. However, other training images (images other than the anchor point image) contained in different training image groups may be the same. For example, the hardest positive sample image in the first training image group is the same as the hardest positive sample image in the second training group.

再如,假设上述一批训练图像的数目为100,那么,就可以从该100个训练图像中选择出10个(也可以是其他的数量,这里仅仅以10为例进行说明)锚点图像,然后从该100个训练图像中分别为每个锚点图像选择相应的最难正样本图像,第一最难负样本图像和第二最难负样本图像。从而得到10个训练图像组,根据该10个训练图像组可以得到10个第一损失函数的函数值,接下来,通过对该10个第一损失函数的函数值进行平均处理,就可以得到上述步骤1005中的损失函数的函数值。As another example, suppose the number of training images in the above batch is 100. Then, 10 anchor images can be selected from the 100 training images (other numbers are also possible; 10 is used here only as an example), and for each anchor image, the corresponding hardest positive sample image, first hardest negative sample image and second hardest negative sample image are selected from the 100 training images. Thus, 10 training image groups are obtained, from which the function values of 10 first loss functions can be computed. By averaging these 10 function values, the function value of the loss function in the above step 1005 can be obtained.

下面对行人再识别网络的设计和训练过程进行详细的介绍。The following is a detailed introduction to the design and training process of the pedestrian re-identification network.

本申请中的行人再识别网络可以采用现有的残差网络(例如，采用ResNet50)作为网络主体，并将最后的全连接层移除，在最后一层残差块(ResBlock)之后添加全局均值池化(global average pooling)层，并将获得2048维(也可以是其他的数值)的特征向量作为网络模型的输出。The pedestrian re-identification network in this application may use an existing residual network (for example, ResNet50) as the network backbone, remove the final fully connected layer, add a global average pooling layer after the last residual block (ResBlock), and take the resulting 2048-dimensional feature vector (other dimensions are also possible) as the output of the network model.
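The global average pooling step described above can be illustrated with a minimal NumPy sketch. Only the pooling operation itself and the 2048-channel count come from the text; the 8×4 spatial size is a hypothetical value chosen for a 256×128 input.

```python
import numpy as np

def global_average_pool(feature_map):
    """Average a (C, H, W) convolutional feature map over its spatial
    positions, yielding a C-dimensional descriptor."""
    return feature_map.mean(axis=(1, 2))

# The last ResNet50 stage emits 2048 channels, so the pooled result is
# the 2048-dimensional feature vector used as the network output.
fmap = np.ones((2048, 8, 4))      # hypothetical (C, H, W) activation
vec = global_average_pool(fmap)
print(vec.shape)                  # (2048,)
```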

在每个批次的训练图像中，每个摄像机可以采集4个人，每个人采集8张图，如果一个人的图像少于8张，就重复采集补满8张。In each batch of training images, 4 persons may be sampled per camera, with 8 images per person; if a person has fewer than 8 images, existing images are repeated to make up the 8.
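The repetition rule above (repeat a person's images until 8 are available) can be sketched as follows; the function name and the k=8 default are illustrative, not part of the patent.

```python
import random

def sample_k_images(image_paths, k=8, rng=random):
    """Return exactly k images for one person. If the person has fewer
    than k images, existing images are re-sampled to fill the quota,
    matching the repetition rule described above."""
    if not image_paths:
        raise ValueError("person has no images")
    picked = list(image_paths)
    rng.shuffle(picked)
    picked = picked[:k]           # at most k distinct images
    while len(picked) < k:
        picked.append(rng.choice(image_paths))  # pad by repetition
    return picked
```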

在对行人再识别网络进行训练时，可以采用上述公式(9)作为损失函数，在进行测试时，不同的数据集的摄像机数可以不同，例如，对于DukeMTMC-reID数据集，摄像机有8个，这时公式(9)中的C=8；对于Market-1501数据集，摄像机有6个，这时公式(9)中的C=6。When training the pedestrian re-identification network, the above formula (9) may be used as the loss function. During testing, the number of cameras may differ across datasets; for example, the DukeMTMC-reID dataset has 8 cameras, so C=8 in formula (9), while the Market-1501 dataset has 6 cameras, so C=6 in formula (9).

上述公式(9)所示的损失函数中的两个参数可以分别是m1=0.1，m2=0.1。输入的训练图像可以被缩放为256×128像素大小，在训练时可以使用自适应矩估计(Adam)优化器来训练网络参数，学习率可以设置为2×10^-4。在100轮训练后，学习率指数衰减，直到200轮学习后学习率可以设置为2×10^-7，这时可以停止训练。The two parameters in the loss function shown in formula (9) may be m1=0.1 and m2=0.1, respectively. The input training images may be resized to 256×128 pixels, the adaptive moment estimation (Adam) optimizer may be used to train the network parameters, and the learning rate may be set to 2×10^-4. After 100 training epochs, the learning rate decays exponentially; after 200 epochs it reaches 2×10^-7, at which point training may be stopped.
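The learning-rate schedule described above can be sketched as follows. Only the endpoints are stated in the text (2×10^-4 held for 100 epochs, 2×10^-7 reached at epoch 200), so the smooth exponential interpolation between them is an assumption.

```python
def learning_rate(epoch, base_lr=2e-4, final_lr=2e-7,
                  decay_start=100, decay_end=200):
    """Constant learning rate up to decay_start, then exponential decay
    that reaches final_lr at decay_end (where training stops)."""
    if epoch < decay_start:
        return base_lr
    if epoch >= decay_end:
        return final_lr
    frac = (epoch - decay_start) / (decay_end - decay_start)
    # geometric interpolation between base_lr and final_lr
    return base_lr * (final_lr / base_lr) ** frac

print(learning_rate(0))     # 0.0002
print(learning_rate(200))   # 2e-07
```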

根据本申请实施例的行人再识别网络的训练方法训练得到的行人再识别网络可以用于执行本申请实施例的行人再识别方法,下面结合附图对本申请实施例的行人再识别方法进行描述。The pedestrian re-identification network trained according to the pedestrian re-identification network training method of the embodiment of the present application can be used to implement the pedestrian re-identification method of the embodiment of the present application. The pedestrian re-identification method of the embodiment of the present application will be described below with reference to the accompanying drawings.

图9是本申请实施例的行人再识别方法的示意性流程图。图9所示的行人再识别方法可以由本申请实施例的行人再识别装置执行(例如，可以由图12和图13所示的装置执行)，图9所示的行人再识别方法包括步骤2001至2003，下面对步骤2001至2003进行详细的介绍。FIG. 9 is a schematic flowchart of a pedestrian re-identification method according to an embodiment of the present application. The pedestrian re-identification method shown in FIG. 9 may be executed by the pedestrian re-identification apparatus of the embodiments of the present application (for example, by the apparatus shown in FIG. 12 or FIG. 13). The method shown in FIG. 9 includes steps 2001 to 2003, which are described in detail below.

2001、获取待识别图像。2001. Obtain the image to be recognized.

2002、利用行人再识别网络对待识别图像进行处理,得到待识别图像的特征向量。2002. Use the pedestrian re-recognition network to process the image to be recognized, and obtain the feature vector of the image to be recognized.

其中，步骤2002中采用的行人再识别网络可以是根据本申请实施例的行人再识别网络的训练方法训练得到的，具体地，步骤2002中的行人再识别网络可以是通过图7所示的方法训练得到的。The pedestrian re-identification network used in step 2002 may be trained by the training method of the pedestrian re-identification network in the embodiments of the present application; specifically, it may be trained by the method shown in FIG. 7.

2003、根据待识别图像的特征向量与已有的行人图像的特征向量进行比对,得到待识别图像的识别结果。2003. Compare the feature vector of the image to be recognized with the feature vector of the existing pedestrian image to obtain the recognition result of the image to be recognized.

本申请中，采用本申请实施例的行人再识别网络的训练方法训练得到的行人再识别网络能够更好的进行特征的提取，因此，采用该行人再识别网络对待识别图像进行处理，能够取得更好的行人识别结果。In this application, the pedestrian re-identification network trained by the training method of the embodiments of this application can extract features better; therefore, processing the image to be recognized with this network can achieve better pedestrian recognition results.

可选地，上述步骤2003具体包括：根据待识别图像的特征向量与已有的行人图像的特征向量进行比对，确定输出目标行人图像；输出目标行人图像以及目标行人图像的属性信息。Optionally, the foregoing step 2003 specifically includes: comparing the feature vector of the image to be recognized with the feature vectors of existing pedestrian images to determine the target pedestrian image to be output; and outputting the target pedestrian image together with the attribute information of the target pedestrian image.

其中,上述目标行人图像可以是已有的行人图像中特征向量与待识别图像的特征向量最相似的行人图像,该目标行人图像的属性信息包括该目标行人图像的拍摄时间,拍摄位置。另外,上述目标行人图像的属性信息中还可以包括行人的身份信息等。The above-mentioned target pedestrian image may be a pedestrian image whose feature vector is most similar to the feature vector of the image to be recognized in the existing pedestrian image, and the attribute information of the target pedestrian image includes the shooting time and shooting location of the target pedestrian image. In addition, the attribute information of the target pedestrian image may also include the identity information of the pedestrian and the like.
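A minimal sketch of this comparison step follows: a Euclidean nearest-neighbour search over the gallery of existing feature vectors, returning the attribute record of the closest match. The metadata fields shown (time, location, pid) are illustrative placeholders for the shooting time, shooting location, and identity information mentioned above.

```python
import numpy as np

def retrieve_target(query_feat, gallery_feats, gallery_meta):
    """Return the attribute record of the existing pedestrian image whose
    feature vector is closest (Euclidean distance) to the query image."""
    dists = np.linalg.norm(gallery_feats - query_feat[None, :], axis=1)
    return gallery_meta[int(np.argmin(dists))]

gallery = np.array([[0.0, 1.0], [5.0, 5.0]])
meta = [{"time": "08:00", "location": "gate 1", "pid": 7},
        {"time": "09:30", "location": "lobby", "pid": 12}]
print(retrieve_target(np.array([0.1, 0.9]), gallery, meta)["pid"])  # 7
```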

下面结合具体的测试结果对本申请实施例的行人再识别网络的行人识别的效果进行说明。The following describes the pedestrian recognition effect of the pedestrian re-identification network in the embodiment of the present application in combination with specific test results.

表1Table 1

Figure PCTCN2020113041-appb-000031

表1示出了不同的方案在不同的数据集进行测试的结果，其中，测试结果包括Rank-1和平均精度均值(mean average precision,mAP)，其中，Rank-1表示已有图像中特征向量与待识别图像的特征向量距离最近的图像与待识别图像属于同一个行人的概率。Table 1 shows the results of testing different schemes on different datasets. The reported metrics include Rank-1 and mean average precision (mAP), where Rank-1 denotes the probability that the existing image whose feature vector is closest to that of the image to be recognized belongs to the same pedestrian as the image to be recognized.
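The Rank-1 metric as described above can be computed with the simplified sketch below. It ignores the same-camera filtering that standard re-ID evaluation protocols usually apply, and is meant only to make the definition concrete.

```python
import numpy as np

def rank1_accuracy(query_feats, query_pids, gallery_feats, gallery_pids):
    """Fraction of queries whose nearest gallery feature (Euclidean)
    belongs to the same pedestrian identity as the query."""
    hits = 0
    for feat, pid in zip(query_feats, query_pids):
        dists = np.linalg.norm(gallery_feats - feat[None, :], axis=1)
        hits += int(gallery_pids[int(np.argmin(dists))] == pid)
    return hits / len(query_feats)
```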

在上述表1中,数据集1为Duke-SCT,数据集2为Market-SCT。In Table 1 above, data set 1 is Duke-SCT, and data set 2 is Market-SCT.

其中，Duke-SCT是DukeMTMC-reID数据集的子集，Market-SCT是Market-1501数据集的子集。在获取Duke-SCT和Market-SCT时，我们从原有的数据集(DukeMTMC-reID和Market-1501)中，对训练数据做了如下处理：每个行人随机选某一个摄像机下的图像进行保留(不同行人可能选择到不同的摄像机)，从而形成了新的数据集Duke-SCT和Market-SCT。同时，测试集保持不变。Duke-SCT is a subset of the DukeMTMC-reID dataset, and Market-SCT is a subset of the Market-1501 dataset. To obtain Duke-SCT and Market-SCT, the training data of the original datasets (DukeMTMC-reID and Market-1501) were processed as follows: for each pedestrian, only the images from one randomly chosen camera are retained (different pedestrians may be assigned different cameras), forming the new datasets Duke-SCT and Market-SCT. The test sets remain unchanged.

现有方案1：深层次的判别特征学习方法(a discriminative feature learning approach for deep face recognition)，该方案是2016年在欧洲计算机视觉国际会议(european conference on computer vision,ECCV)发表的；Existing solution 1: a discriminative feature learning approach for deep face recognition, published at the European Conference on Computer Vision (ECCV) in 2016;

现有方案2：深层超球面嵌入人脸识别(deep hypersphere embedding for face recognition)，该方案是2017年在国际计算机视觉与模式识别会议(conference on computer vision and pattern recognition,CVPR)发表的；Existing solution 2: deep hypersphere embedding for face recognition, published at the International Conference on Computer Vision and Pattern Recognition (CVPR) in 2017;

现有方案3：深层人脸识别的附加角度边缘损失(additive angular margin loss for deep face recognition)，该方案是2019年在CVPR发表的；Existing solution 3: additive angular margin loss for deep face recognition, published at CVPR in 2019;

现有方案4：精炼部分池的人员检索(person retrieval with refined part pooling)，该方案是2018年在ECCV发表的；Existing solution 4: person retrieval with refined part pooling, published at ECCV in 2018;

现有方案5：用于人员重新识别的部分对齐双线性表示(part-aligned bilinear representations for person re-identification)，该方案是2018年在ECCV发表的；Existing solution 5: part-aligned bilinear representations for person re-identification, published at ECCV in 2018;

现有方案6：学习具有多重粒度的判别特征进行人员重新识别(learning discriminative features with multiple granularities for person re-identification)，该方案是2018年在美国计算机协会多媒体国际会议(association for computing machinery international conference on multimedia,ACMMM)发表的。Existing solution 6: learning discriminative features with multiple granularities for person re-identification, published at the ACM International Conference on Multimedia (ACMMM) in 2018.

由表1可知,本申请方案无论在数据集1还是在数据集2的Rank-1和mAP均优于现有方案,具有较好的识别效果。It can be seen from Table 1 that the Rank-1 and mAP of the scheme of the present application are superior to the existing schemes in both dataset 1 and dataset 2, and have a better recognition effect.

图10是本申请实施例的行人再识别网络的训练装置的示意性框图。图10所示的行人再识别网络的训练装置8000包括获取单元8001和训练单元8002。Fig. 10 is a schematic block diagram of a training device for a pedestrian re-identification network according to an embodiment of the present application. The training device 8000 of the pedestrian re-identification network shown in FIG. 10 includes an acquisition unit 8001 and a training unit 8002.

获取单元8001和训练单元8002可以用于执行本申请实施例的行人再识别网络的训练方法。The acquisition unit 8001 and the training unit 8002 may be used to execute the pedestrian re-identification network training method in the embodiment of the present application.

具体地,获取单元8001可以执行上述步骤1001和1002,训练单元8002可以执行上述步骤1003至1008。Specifically, the acquiring unit 8001 may perform the above steps 1001 and 1002, and the training unit 8002 may perform the above steps 1003 to 1008.

上述图10所示的装置8000中的获取单元8001可以相当于图11所示的装置9000中的通信接口9003，通过该通信接口9003可以获得相应的训练图像，或者，上述获取单元8001也可以相当于处理器9002，此时可以通过处理器9002从存储器9001中获取训练图像，或者通过通信接口9003从外部获取训练图像。另外，装置8000中的训练单元8002可以相当于装置9000中的处理器9002。The acquisition unit 8001 in the apparatus 8000 shown in FIG. 10 may correspond to the communication interface 9003 in the apparatus 9000 shown in FIG. 11, through which the corresponding training images can be obtained; alternatively, the acquisition unit 8001 may correspond to the processor 9002, in which case the training images can be obtained from the memory 9001 through the processor 9002, or obtained from the outside through the communication interface 9003. In addition, the training unit 8002 in the apparatus 8000 may correspond to the processor 9002 in the apparatus 9000.

图11是本申请实施例的行人再识别网络的训练装置的硬件结构示意图。图11所示的行人再识别网络的训练装置9000(该装置9000具体可以是一种计算机设备)包括存储器9001、处理器9002、通信接口9003以及总线9004。其中,存储器9001、处理器9002、通信接口9003通过总线9004实现彼此之间的通信连接。FIG. 11 is a schematic diagram of the hardware structure of a training device for a pedestrian re-identification network according to an embodiment of the present application. The training device 9000 of the pedestrian re-identification network shown in FIG. 11 (the device 9000 may specifically be a computer device) includes a memory 9001, a processor 9002, a communication interface 9003, and a bus 9004. Among them, the memory 9001, the processor 9002, and the communication interface 9003 implement communication connections between each other through the bus 9004.

存储器9001可以是只读存储器(read only memory,ROM),静态存储设备,动态存储设备或者随机存取存储器(random access memory,RAM)。存储器9001可以存储程序,当存储器9001中存储的程序被处理器9002执行时,处理器9002用于执行本申请实施例的行人再识别网络的训练方法的各个步骤。The memory 9001 may be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 9001 may store a program. When the program stored in the memory 9001 is executed by the processor 9002, the processor 9002 is configured to execute each step of the pedestrian re-identification network training method in the embodiment of the present application.

处理器9002可以采用通用的中央处理器(central processing unit,CPU)，微处理器，应用专用集成电路(application specific integrated circuit,ASIC)，图形处理器(graphics processing unit,GPU)或者一个或多个集成电路，用于执行相关程序，以实现本申请实施例的行人再识别网络的训练方法。The processor 9002 may be a general-purpose central processing unit (CPU), a microprocessor, an application specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits, configured to execute related programs to implement the training method of the pedestrian re-identification network in the embodiments of the present application.

处理器9002还可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,本申请的行人再识别网络的训练方法的各个步骤可以通过处理器9002中的硬件的集成逻辑电路或者软件形式的指令完成。The processor 9002 may also be an integrated circuit chip with signal processing capabilities. In the implementation process, the steps of the training method of the pedestrian re-recognition network of the present application can be completed by the integrated logic circuit of the hardware in the processor 9002 or the instructions in the form of software.

上述处理器9002还可以是通用处理器、数字信号处理器(digital signal processing,DSP)、专用集成电路(ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成，或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器，闪存、只读存储器，可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器9001，处理器9002读取存储器9001中的信息，结合其硬件完成本行人再识别网络的训练装置中包括的单元所需执行的功能，或者执行本申请实施例的行人再识别网络的训练方法。The aforementioned processor 9002 may also be a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 9001; the processor 9002 reads the information in the memory 9001 and, in combination with its hardware, completes the functions to be performed by the units included in the training apparatus of the pedestrian re-identification network, or executes the training method of the pedestrian re-identification network in the embodiments of the present application.

通信接口9003使用例如但不限于收发器一类的收发装置,来实现装置9000与其他设备或通信网络之间的通信。例如,可以通过通信接口9003获取待识别图像。The communication interface 9003 uses a transceiver device such as but not limited to a transceiver to implement communication between the device 9000 and other devices or a communication network. For example, the image to be recognized can be acquired through the communication interface 9003.

总线9004可包括在装置9000各个部件(例如,存储器9001、处理器9002、通信接口9003)之间传送信息的通路。The bus 9004 may include a path for transferring information between various components of the device 9000 (for example, the memory 9001, the processor 9002, and the communication interface 9003).

图12是本申请实施例的行人再识别装置的示意性框图。图12所示的行人再识别装置10000包括获取单元10001和识别单元10002。Fig. 12 is a schematic block diagram of a pedestrian re-identification device according to an embodiment of the present application. The pedestrian re-identification device 10000 shown in FIG. 12 includes an acquisition unit 10001 and an identification unit 10002.

获取单元10001和识别单元10002可以用于执行本申请实施例的行人再识别方法。The acquiring unit 10001 and the identifying unit 10002 may be used to execute the pedestrian re-identification method in the embodiment of the present application.

具体地,获取单元10001可以执行上述步骤6001,识别单元10002可以执行上述步骤6002。Specifically, the acquiring unit 10001 may perform the foregoing step 6001, and the identifying unit 10002 may perform the foregoing step 6002.

上述图12所示的装置10000中的获取单元10001可以相当于图13所示的装置11000中的通信接口11003，通过该通信接口11003可以获得待识别图像，或者，上述获取单元10001也可以相当于处理器11002，此时可以通过处理器11002从存储器11001中获取待识别图像，或者通过通信接口11003从外部获取待识别图像。The acquisition unit 10001 in the apparatus 10000 shown in FIG. 12 may correspond to the communication interface 11003 in the apparatus 11000 shown in FIG. 13, through which the image to be recognized can be obtained; alternatively, the acquisition unit 10001 may correspond to the processor 11002, in which case the image to be recognized can be obtained from the memory 11001 through the processor 11002, or obtained from the outside through the communication interface 11003.

另外,上述图12所示的装置10000中的识别单元10002相当于图13所示的装置11000中处理器11002。In addition, the recognition unit 10002 in the device 10000 shown in FIG. 12 is equivalent to the processor 11002 in the device 11000 shown in FIG. 13.

图13是本申请实施例的行人再识别装置的硬件结构示意图。与上述装置10000类似，图13所示的行人再识别装置11000包括存储器11001、处理器11002、通信接口11003以及总线11004。其中，存储器11001、处理器11002、通信接口11003通过总线11004实现彼此之间的通信连接。FIG. 13 is a schematic diagram of the hardware structure of a pedestrian re-identification apparatus according to an embodiment of the present application. Similar to the apparatus 10000 described above, the pedestrian re-identification apparatus 11000 shown in FIG. 13 includes a memory 11001, a processor 11002, a communication interface 11003, and a bus 11004, where the memory 11001, the processor 11002, and the communication interface 11003 communicate with each other through the bus 11004.

存储器11001可以是ROM,静态存储设备和RAM。存储器11001可以存储程序,当存储器11001中存储的程序被处理器11002执行时,处理器11002和通信接口11003用于执行本申请实施例的行人再识别方法的各个步骤。The memory 11001 may be ROM, static storage device and RAM. The memory 11001 may store a program. When the program stored in the memory 11001 is executed by the processor 11002, the processor 11002 and the communication interface 11003 are used to execute the steps of the pedestrian re-identification method in the embodiment of the present application.

处理器11002可以采用通用的CPU，微处理器，ASIC，GPU或者一个或多个集成电路，用于执行相关程序，以实现本申请实施例的行人再识别装置中的单元所需执行的功能，或者执行本申请实施例的行人再识别方法。The processor 11002 may be a general-purpose CPU, a microprocessor, an ASIC, a GPU, or one or more integrated circuits, configured to execute related programs to implement the functions to be performed by the units in the pedestrian re-identification apparatus of the embodiments of the present application, or to execute the pedestrian re-identification method of the embodiments of the present application.

处理器11002还可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,本申请实施例的行人再识别方法的各个步骤可以通过处理器11002中的硬件的集成逻辑电路或者软件形式的指令完成。The processor 11002 may also be an integrated circuit chip with signal processing capabilities. In the implementation process, the steps of the pedestrian re-identification method in the embodiment of the present application can be completed by the integrated logic circuit of the hardware in the processor 11002 or the instructions in the form of software.

上述处理器11002还可以是通用处理器、DSP、ASIC、FPGA或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成，或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器，闪存、只读存储器，可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器11001，处理器11002读取存储器11001中的信息，结合其硬件完成本申请实施例的行人再识别装置中包括的单元所需执行的功能，或者执行本申请实施例的行人再识别方法。The aforementioned processor 11002 may also be a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 11001; the processor 11002 reads the information in the memory 11001 and, in combination with its hardware, completes the functions to be performed by the units included in the pedestrian re-identification apparatus of the embodiments of the present application, or executes the pedestrian re-identification method of the embodiments of the present application.

通信接口11003使用例如但不限于收发器一类的收发装置,来实现装置11000与其他设备或通信网络之间的通信。例如,可以通过通信接口11003获取待识别图像。The communication interface 11003 uses a transceiver device such as but not limited to a transceiver to implement communication between the device 11000 and other devices or a communication network. For example, the image to be recognized can be acquired through the communication interface 11003.

总线11004可包括在装置11000各个部件(例如,存储器11001、处理器11002、通信接口11003)之间传送信息的通路。The bus 11004 may include a path for transferring information between various components of the device 11000 (for example, the memory 11001, the processor 11002, and the communication interface 11003).

应注意，尽管上述装置9000和装置11000仅仅示出了存储器、处理器、通信接口，但是在具体实现过程中，本领域的技术人员应当理解，装置9000和装置11000还可以包括实现正常运行所必须的其他器件。同时，根据具体需要，本领域的技术人员应当理解，装置9000和装置11000还可包括实现其他附加功能的硬件器件。此外，本领域的技术人员应当理解，装置9000和装置11000也可仅仅包括实现本申请实施例所必须的器件，而不必包括图11和图13中所示的全部器件。It should be noted that although the apparatus 9000 and the apparatus 11000 described above only show a memory, a processor, and a communication interface, in a specific implementation process, those skilled in the art should understand that the apparatus 9000 and the apparatus 11000 may also include other devices necessary for normal operation. Likewise, according to specific needs, the apparatus 9000 and the apparatus 11000 may also include hardware devices that implement other additional functions. In addition, those skilled in the art should understand that the apparatus 9000 and the apparatus 11000 may include only the devices necessary to implement the embodiments of the present application, and need not include all the devices shown in FIG. 11 and FIG. 13.

本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in combination with the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraint conditions of the technical solution. Professionals and technicians can use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of this application.

所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and conciseness of description, the specific working process of the system, device and unit described above can refer to the corresponding process in the foregoing method embodiment, which will not be repeated here.

在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative, for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or It can be integrated into another system, or some features can be ignored or not implemented. In addition, the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.

所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.

所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。基于这样的理解，本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备(可以是个人计算机，服务器，或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。If the functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到变化或替换，都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以所述权利要求的保护范围为准。The foregoing descriptions are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

一种行人再识别网络的训练方法，其特征在于，包括：A training method for a pedestrian re-identification network, characterized in that the method includes: 步骤1：获取M个训练图像以及所述M个训练图像的标注数据，所述M个训练图像中的每个训练图像包括行人，所述每个训练图像的标注数据包括所述每个训练图像中的行人所在的包围框和行人标识信息，其中，不同的行人对应不同的行人标识信息，在所述M个训练图像中，具有相同的行人标识信息的训练图像来自于同一图像拍摄设备，M为大于1的整数；Step 1: obtain M training images and annotation data of the M training images, where each of the M training images includes a pedestrian, the annotation data of each training image includes a bounding box of the pedestrian in the training image and pedestrian identification information, different pedestrians correspond to different pedestrian identification information, training images with the same pedestrian identification information among the M training images come from the same image capturing device, and M is an integer greater than 1; 步骤2：对所述行人再识别网络的网络参数进行初始化处理，以得到所述行人再识别网络的网络参数的初始值；Step 2: initialize the network parameters of the pedestrian re-identification network to obtain initial values of the network parameters of the pedestrian re-identification network; 步骤3：将所述M个训练图像中的一批训练图像输入到所述行人再识别网络进行特征提取，得到所述一批训练图像中的每个训练图像的特征向量；Step 3: input a batch of training images among the M training images into the pedestrian re-identification network for feature extraction to obtain a feature vector of each training image in the batch; 其中，所述一批训练图像包括N个锚点图像，所述N个锚点图像是所述一批训练图像中的任意N个训练图像，所述N个锚点图像中的每个锚点图像对应一个最难正样本图像，一个第一最难负样本图像和一个第二最难负样本图像，N为正整数；Here, the batch of training images includes N anchor images, the N anchor images are any N training images in the batch, and each of the N anchor images corresponds to one hardest positive sample image, one first hardest negative sample image, and one second hardest negative sample image, where N is a positive integer;
the hardest positive sample image corresponding to each anchor image is the training image, among the batch of training images, that has the same pedestrian identification information as the anchor image and whose feature vector is farthest from the feature vector of the anchor image; the first hardest negative sample image corresponding to each anchor image is the training image, among the batch of training images, that comes from the same image capture device as the anchor image, has pedestrian identification information different from that of the anchor image, and whose feature vector is closest to the feature vector of the anchor image; the second hardest negative sample image corresponding to each anchor image is the training image, among the batch of training images, that comes from a different image capture device than the anchor image, has pedestrian identification information different from that of the anchor image, and whose feature vector is closest to the feature vector of the anchor image;

step 4: determining a function value of a loss function according to the feature vectors of the batch of training images, wherein the function value of the loss function is obtained by averaging function values of N first loss functions;

wherein the function value of each of the N first loss functions is calculated from a first difference and a second difference corresponding to each of the N anchor images; the first difference corresponding to each anchor image is the difference between the hardest positive sample distance and the second hardest negative sample distance corresponding to the anchor image; the second difference corresponding to each anchor image is the difference between the second hardest negative sample distance and the first hardest negative sample distance corresponding to the anchor image; the hardest positive sample distance corresponding to each anchor image is the distance between the feature vector of the hardest positive sample image corresponding to the anchor image and the feature vector of the anchor image; the second hardest negative sample distance corresponding to each anchor image is the distance between the feature vector of the second hardest negative sample image corresponding to the anchor image and the feature vector of the anchor image; and the first hardest negative sample distance corresponding to each anchor image is the distance between the feature vector of the first hardest negative sample image corresponding to the anchor image and the feature vector of the anchor image;

step 5: updating the network parameters of the pedestrian re-identification network according to the function value of the loss function;

and repeating steps 3 to 5 until the pedestrian re-identification network meets a preset requirement.

2. The training method according to claim 1, wherein the pedestrian re-identification network meets the preset requirement when at least one of the following conditions is satisfied:

the number of training iterations of the pedestrian re-identification network is greater than or equal to a preset number;

the function value of the loss function is less than or equal to a preset threshold;

the recognition performance of the pedestrian re-identification network reaches a preset level.

3. The training method according to claim 2, wherein the function value of the loss function being less than or equal to the preset threshold comprises: the first difference is less than a first preset threshold, and the second difference is less than a second preset threshold.
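The batch-hard mining and loss computation recited in steps 3 and 4 can be sketched as follows. This is a minimal NumPy illustration, not the claimed implementation: the Euclidean metric, the margins `m1` and `m2`, and the hinge combination of the two differences are assumptions, since the claims only state that each per-anchor loss is computed from the first and second differences and that the batch loss is their average.

```python
import numpy as np

def batch_hard_loss(feats, pids, cams, m1=0.3, m2=0.1):
    """Sketch of the per-batch loss of steps 3-4 (margins/hinge assumed).

    feats: (N, D) feature vectors of one batch of training images,
    pids:  (N,)   pedestrian identification labels,
    cams:  (N,)   image-capture-device (camera) labels.
    """
    # Pairwise Euclidean distances between all feature vectors in the batch.
    dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    losses = []
    for a in range(len(feats)):
        same_pid = pids == pids[a]
        same_cam = cams == cams[a]
        pos = same_pid.copy()
        pos[a] = False                       # same identity, excluding the anchor itself
        neg_same = ~same_pid & same_cam      # other identity, same camera
        neg_cross = ~same_pid & ~same_cam    # other identity, different camera
        if not (pos.any() and neg_same.any() and neg_cross.any()):
            continue  # anchor lacks one of the required sample types
        d_ap = dist[a][pos].max()          # hardest positive: farthest same-ID image
        d_an1 = dist[a][neg_same].min()    # first hardest negative: closest, same camera
        d_an2 = dist[a][neg_cross].min()   # second hardest negative: closest, cross camera
        diff1 = d_ap - d_an2               # first difference
        diff2 = d_an2 - d_an1              # second difference
        losses.append(max(diff1 + m1, 0.0) + max(diff2 + m2, 0.0))
    return float(np.mean(losses)) if losses else 0.0
```

In step 5 the resulting scalar would be backpropagated through the re-identification network; here the mining logic alone is shown.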
4. The training method according to any one of claims 1 to 3, wherein the M training images come from a plurality of image capture devices, and the annotation data of training images from different image capture devices are annotated separately.

5. A pedestrian re-identification method, comprising:

obtaining an image to be recognized;

processing the image to be recognized by using a pedestrian re-identification network to obtain a feature vector of the image to be recognized, wherein the pedestrian re-identification network is trained by the training method according to any one of claims 1 to 4; and

comparing the feature vector of the image to be recognized with feature vectors of existing pedestrian images to obtain a recognition result of the image to be recognized.

6. An apparatus for training a pedestrian re-identification network, comprising:

an obtaining unit configured to perform step 1;

step 1: obtaining M training images and annotation data of the M training images, wherein each of the M training images includes a pedestrian, the annotation data of each training image includes a bounding box of the pedestrian in the training image and pedestrian identification information, and different pedestrians correspond to different pedestrian identification information; among the M training images, training images with the same pedestrian identification information come from the same image capture device, and M is an integer greater than 1;

a training unit configured to perform step 2;

step 2: initializing network parameters of the pedestrian re-identification network to obtain initial values of the network parameters;

the training unit being further configured to repeat steps 3 to 5 until the pedestrian re-identification network meets a preset requirement;

step 3: inputting a batch of training images among the M training images into the pedestrian re-identification network for feature extraction to obtain a feature vector of each training image in the batch;

wherein the batch of training images includes N anchor images, the N anchor images being any N training images in the batch; each of the N anchor images corresponds to one hardest positive sample image, one first hardest negative sample image, and one second hardest negative sample image, and N is a positive integer;

the hardest positive sample image corresponding to each anchor image is the training image, among the batch of training images, that has the same pedestrian identification information as the anchor image and whose feature vector is farthest from the feature vector of the anchor image; the first hardest negative sample image corresponding to each anchor image is the training image, among the batch of training images, that comes from the same image capture device as the anchor image, has pedestrian identification information different from that of the anchor image, and whose feature vector is closest to the feature vector of the anchor image; the second hardest negative sample image corresponding to each anchor image is the training image, among the batch of training images, that comes from a different image capture device than the anchor image, has pedestrian identification information different from that of the anchor image, and whose feature vector is closest to the feature vector of the anchor image;

step 4: determining a function value of a loss function according to the feature vectors of the batch of training images, wherein the function value of the loss function is obtained by averaging function values of N first loss functions;

wherein the function value of each of the N first loss functions is calculated from a first difference and a second difference corresponding to each of the N anchor images; the first difference corresponding to each anchor image is the difference between the hardest positive sample distance and the second hardest negative sample distance corresponding to the anchor image; the second difference corresponding to each anchor image is the difference between the second hardest negative sample distance and the first hardest negative sample distance corresponding to the anchor image; the hardest positive sample distance corresponding to each anchor image is the distance between the feature vector of the hardest positive sample image corresponding to the anchor image and the feature vector of the anchor image; the second hardest negative sample distance corresponding to each anchor image is the distance between the feature vector of the second hardest negative sample image corresponding to the anchor image and the feature vector of the anchor image; and the first hardest negative sample distance corresponding to each anchor image is the distance between the feature vector of the first hardest negative sample image corresponding to the anchor image and the feature vector of the anchor image;

step 5: updating the network parameters of the pedestrian re-identification network according to the function value of the loss function.

7. The training apparatus according to claim 6, wherein the pedestrian re-identification network meets the preset requirement when at least one of the following conditions is satisfied:

the number of training iterations of the pedestrian re-identification network is greater than or equal to a preset number;

the function value of the loss function is less than or equal to a preset threshold;

the recognition performance of the pedestrian re-identification network reaches a preset level.
8. The training apparatus according to claim 7, wherein the function value of the loss function being less than or equal to the preset threshold comprises: the first difference is less than a first preset threshold, and the second difference is less than a second preset threshold.

9. The training apparatus according to any one of claims 6 to 8, wherein the M training images come from a plurality of image capture devices, and the annotation data of training images from different image capture devices are annotated separately.

10. A pedestrian re-identification apparatus, comprising:

an obtaining unit configured to obtain an image to be recognized; and

a recognition unit configured to process the image to be recognized by using a pedestrian re-identification network to obtain a feature vector of the image to be recognized, wherein the pedestrian re-identification network is trained by the training method according to any one of claims 1 to 4;

the recognition unit being further configured to compare the feature vector of the image to be recognized with feature vectors of existing pedestrian images to obtain a recognition result of the image to be recognized.

11. A computer-readable storage medium storing program code for execution by a device, the program code comprising instructions for performing the training method according to any one of claims 1 to 4.

12. A computer-readable storage medium storing program code for execution by a device, the program code comprising instructions for performing the pedestrian re-identification method according to claim 5.

13. A chip, comprising a processor and a data interface, wherein the processor reads, through the data interface, instructions stored in a memory to perform the training method according to any one of claims 1 to 4.

14. A chip, comprising a processor and a data interface, wherein the processor reads, through the data interface, instructions stored in a memory to perform the pedestrian re-identification method according to claim 5.
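The recognition step recited in claims 5 and 10 (comparing the feature vector of the image to be recognized against feature vectors of existing pedestrian images) amounts to a nearest-neighbor search over a gallery. The sketch below is illustrative only: the Euclidean metric and the single-best-match decision rule are assumptions, as the claims do not fix either.

```python
import numpy as np

def re_identify(query_feat, gallery_feats, gallery_ids):
    """Return the identity label of the gallery image closest to the query.

    query_feat:    (D,)   feature of the image to be recognized, produced by
                          the trained re-identification network.
    gallery_feats: (G, D) features of existing pedestrian images.
    gallery_ids:   (G,)   their pedestrian identification labels.
    """
    # Euclidean distance from the query to every gallery feature vector.
    dists = np.linalg.norm(gallery_feats - query_feat[None, :], axis=1)
    best = int(np.argmin(dists))
    return gallery_ids[best], float(dists[best])
```

In practice the gallery features would be extracted once by the same trained network and cached, so recognition reduces to this distance comparison.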
PCT/CN2020/113041 2019-09-05 2020-09-02 Person re-identification network training method and person re-identification method and apparatus Ceased WO2021043168A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910839017.9A CN112446270B (en) 2019-09-05 2019-09-05 Training method of pedestrian re-recognition network, pedestrian re-recognition method and device
CN201910839017.9 2019-09-05

Publications (1)

Publication Number Publication Date
WO2021043168A1 true WO2021043168A1 (en) 2021-03-11

Family

ID=74733092


Country Status (2)

Country Link
CN (1) CN112446270B (en)
WO (1) WO2021043168A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114943909B (en) * 2021-03-31 2023-04-18 华为技术有限公司 Method, device, equipment and system for identifying motion area
CN115546583B (en) * 2022-10-10 2025-06-24 广州大学 Data augmentation and training method and training device for person re-identification network model

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108108754A (en) * 2017-12-15 2018-06-01 北京迈格威科技有限公司 The training of identification network, again recognition methods, device and system again
US20180374233A1 (en) * 2017-06-27 2018-12-27 Qualcomm Incorporated Using object re-identification in video surveillance
CN109344787A (en) * 2018-10-15 2019-02-15 浙江工业大学 A specific target tracking method based on face recognition and pedestrian re-identification

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784166A (en) * 2018-12-13 2019-05-21 北京飞搜科技有限公司 The method and device that pedestrian identifies again
CN109800794B (en) * 2018-12-27 2021-10-22 上海交通大学 A cross-camera re-identification fusion method and system for similar-looking targets
CN109977798B (en) * 2019-03-06 2021-06-04 中山大学 Mask pooling model training and pedestrian re-identification method for pedestrian re-identification
CN110046579B (en) * 2019-04-18 2023-04-07 重庆大学 Deep Hash pedestrian re-identification method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FUQING ZHU, XIANGWEI KONG, HAIYAN FU, QI TIAN: "Two-stream complementary symmetrical CNN architecture for person re-identification", JOURNAL OF IMAGE AND GRAPHICS, vol. 23, no. 7, 1 January 2018 (2018-01-01), pages 1052 - 1060, XP055787514, DOI: 10.11834/jig.170557 *
TIANYU ZHANG; LINGXI XIE; LONGHUI WEI; YONGFEI ZHANG; BO LI; QI TIAN: "Single Camera Training for Person Re-identification", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 24 September 2019 (2019-09-24), 201 Olin Library Cornell University Ithaca, NY 14853, XP081481046 *

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949534A (en) * 2021-03-15 2021-06-11 鹏城实验室 Pedestrian re-identification method, intelligent terminal and computer readable storage medium
CN113095174A (en) * 2021-03-29 2021-07-09 深圳力维智联技术有限公司 Re-recognition model training method, device, equipment and readable storage medium
CN113096080B (en) * 2021-03-30 2024-01-16 四川大学华西第二医院 Image analysis method and system
CN113096080A (en) * 2021-03-30 2021-07-09 四川大学华西第二医院 Image analysis method and system
CN112861825A (en) * 2021-04-07 2021-05-28 北京百度网讯科技有限公司 Model training method, pedestrian re-identification method, device and electronic equipment
CN112861825B (en) * 2021-04-07 2023-07-04 北京百度网讯科技有限公司 Model training method, pedestrian re-recognition method, device and electronic equipment
CN113177469B (en) * 2021-04-27 2024-04-12 北京百度网讯科技有限公司 Training method, device, electronic equipment and medium for human attribute detection model
CN113177469A (en) * 2021-04-27 2021-07-27 北京百度网讯科技有限公司 Training method and device for human body attribute detection model, electronic equipment and medium
CN113536891A (en) * 2021-05-10 2021-10-22 新疆爱华盈通信息技术有限公司 Pedestrian traffic statistical method, storage medium and electronic equipment
CN113536891B (en) * 2021-05-10 2023-01-03 新疆爱华盈通信息技术有限公司 Pedestrian traffic statistical method, storage medium and electronic equipment
CN113449601B (en) * 2021-05-28 2023-05-16 国家计算机网络与信息安全管理中心 Pedestrian re-recognition model training and recognition method and device based on progressive smooth loss
CN113449601A (en) * 2021-05-28 2021-09-28 国家计算机网络与信息安全管理中心 Pedestrian re-recognition model training and recognition method and device based on progressive smooth loss
CN113449966B (en) * 2021-06-03 2023-04-07 湖北北新建材有限公司 Gypsum board equipment inspection method and system
CN113591545A (en) * 2021-06-11 2021-11-02 北京师范大学珠海校区 Deep learning-based multistage feature extraction network pedestrian re-identification method
CN113591545B (en) * 2021-06-11 2024-05-24 北京师范大学珠海校区 Deep learning-based multi-level feature extraction network pedestrian re-identification method
US11810388B1 (en) 2021-06-29 2023-11-07 Inspur Suzhou Intelligent Technology Co., Ltd. Person re-identification method and apparatus based on deep learning network, device, and medium
WO2023272994A1 (en) * 2021-06-29 2023-01-05 苏州浪潮智能科技有限公司 Person re-identification method and apparatus based on deep learning network, device, and medium
CN113408492B (en) * 2021-07-23 2022-06-14 四川大学 A pedestrian re-identification method based on global-local feature dynamic alignment
CN113408492A (en) * 2021-07-23 2021-09-17 四川大学 Pedestrian re-identification method based on global-local feature dynamic alignment
CN114298961A (en) * 2021-08-04 2022-04-08 腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium
CN113762153A (en) * 2021-09-07 2021-12-07 北京工商大学 Novel tailing pond detection method and system based on remote sensing data
CN113762153B (en) * 2021-09-07 2024-04-02 北京工商大学 Novel tailing pond detection method and system based on remote sensing data
CN114494930B (en) * 2021-09-09 2023-09-22 马上消费金融股份有限公司 Training method and device for voice and image synchronism measurement model
CN114494930A (en) * 2021-09-09 2022-05-13 马上消费金融股份有限公司 Training method and device for voice and image synchronism measurement model
CN114240997B (en) * 2021-11-16 2023-07-28 南京云牛智能科技有限公司 Intelligent building online trans-camera multi-target tracking method
CN114240997A (en) * 2021-11-16 2022-03-25 南京云牛智能科技有限公司 Intelligent building online cross-camera multi-target tracking method
CN114359665B (en) * 2021-12-27 2024-03-26 北京奕斯伟计算技术股份有限公司 Full-task face recognition model training method and device, face recognition method
CN114359665A (en) * 2021-12-27 2022-04-15 北京奕斯伟计算技术有限公司 Training method and device of full-task face recognition model and face recognition method
CN114863488A (en) * 2022-06-08 2022-08-05 电子科技大学成都学院 Public place multi-state pedestrian target identification and tracking method based on pedestrian re-identification, electronic equipment and storage medium
CN115147871A (en) * 2022-07-19 2022-10-04 北京龙智数科科技服务有限公司 Pedestrian re-identification method under shielding environment
CN115147871B (en) * 2022-07-19 2024-06-11 北京龙智数科科技服务有限公司 Pedestrian re-identification method in shielding environment
CN115641490A (en) * 2022-10-11 2023-01-24 华为技术有限公司 Data processing method and device
CN115952731B (en) * 2022-12-20 2024-01-16 哈尔滨工业大学 Active vibration control method, device and equipment for wind turbine blade
CN115952731A (en) * 2022-12-20 2023-04-11 哈尔滨工业大学 A wind turbine blade active vibration control method, device and equipment
CN115641559A (en) * 2022-12-23 2023-01-24 深圳佑驾创新科技有限公司 Target matching method and device for panoramic camera group and storage medium
CN116824695A (en) * 2023-06-07 2023-09-29 南通大学 A non-local defense method for pedestrian re-identification based on feature denoising
CN116824695B (en) * 2023-06-07 2024-07-19 南通大学 Pedestrian re-identification non-local defense method based on feature denoising

Also Published As

Publication number Publication date
CN112446270B (en) 2024-05-14
CN112446270A (en) 2021-03-05

Similar Documents

Publication Publication Date Title
WO2021043168A1 (en) Person re-identification network training method and person re-identification method and apparatus
US12314343B2 (en) Image classification method, neural network training method, and apparatus
US12039440B2 (en) Image classification method and apparatus, and image classification model training method and apparatus
US12380687B2 (en) Object detection method and apparatus, and computer storage medium
CN112446398B (en) Image classification method and device
US20220375213A1 (en) Processing Apparatus and Method and Storage Medium
US12131521B2 (en) Image classification method and apparatus
CN111797882B (en) Image classification method and device
CN111914997B (en) Method for training neural network, image processing method and device
WO2021073311A1 (en) Image recognition method and apparatus, computer-readable storage medium and chip
WO2021022521A1 (en) Method for processing data, and method and device for training neural network model
CN113065645B (en) Twin attention network, image processing method and device
WO2020177607A1 (en) Image denoising method and apparatus
CN110222718B (en) Image processing methods and devices
CN113011562A (en) Model training method and device
WO2021047587A1 (en) Gesture recognition method, electronic device, computer-readable storage medium, and chip
WO2021175278A1 (en) Model updating method and related device
WO2021227787A1 (en) Neural network predictor training method and apparatus, and image processing method and apparatus
WO2022179606A1 (en) Image processing method and related apparatus
Grigorev et al. Depth estimation from single monocular images using deep hybrid network
CN114693986A (en) Training method of active learning model, image processing method and device
CN113256556B (en) Image selection method and device
Aleksei et al. Depth estimation from single monocular images using deep hybrid network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20859742

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20859742

Country of ref document: EP

Kind code of ref document: A1