
WO2023024787A1 - Image processing method and apparatus, electronic device, computer-readable storage medium, and computer program product - Google Patents


Info

Publication number
WO2023024787A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
camera
image
target vehicle
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2022/107671
Other languages
English (en)
Chinese (zh)
Inventor
熊鹏
朱斌
周俊竹
钱能胜
阮佳彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd
Publication of WO2023024787A1
Anticipated expiration
Current legal status: Ceased

Classifications

    • G PHYSICS > G06 COMPUTING OR CALCULATING; COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING / G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06F16/783: Information retrieval of video data; retrieval characterised by using metadata automatically derived from the content
    • G06F16/787: Information retrieval of video data; retrieval characterised by using geographical or spatial information, e.g. location
    • G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06T7/246: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T2207/10016: Image acquisition modality; video; image sequence
    • G06T2207/10024: Image acquisition modality; color image
    • G06T2207/30232: Subject of image; surveillance
    • G06T2207/30236: Subject of image; traffic on road, railway or crossing
    • G06T2207/30241: Subject of image; trajectory

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular to an image processing method and device, electronic equipment, computer-readable storage media, and computer program products.
  • In the related art, tracking vehicles entering a specific area is one of the typical application scenarios.
  • While a vehicle is within the shooting area of a camera, the video captured by that camera is displayed. However, after the vehicle drives out of the camera's shooting area, the video captured by that camera is still displayed instead of a real-time video of the vehicle.
  • the disclosure provides an image processing method and device, electronic equipment, a computer-readable storage medium, and a computer program product.
  • In a first aspect, an image processing method is provided, comprising: when it is determined that a first target vehicle is within the shooting area of a first camera, determining that the first video source device of the first target vehicle is the first camera; and when it is determined that the first target vehicle has entered the shooting area of a second camera from the shooting area of the first camera, switching the first video source device of the first target vehicle from the first camera to the second camera, the first camera being different from the second camera.
  • In a second aspect, an image processing device is provided, comprising: a first processing unit configured to determine that the first video source device of a first target vehicle is a first camera when it is determined that the first target vehicle is within the shooting area of the first camera; and a second processing unit configured to switch the first video source device of the first target vehicle from the first camera to a second camera when it is determined that the first target vehicle has entered the shooting area of the second camera from the shooting area of the first camera, the first camera being different from the second camera.
  • In a third aspect, an electronic device is provided, including a processor and a memory, where the memory is used to store computer program code, the computer program code including computer instructions; when the processor executes the computer instructions, the electronic device executes the method according to the foregoing first aspect and any possible implementation manner thereof.
  • In a fourth aspect, a computer-readable storage medium is provided. A computer program is stored in the computer-readable storage medium, and the computer program includes program instructions. When the program instructions are executed by a processor, the processor executes the method according to the first aspect and any possible implementation manner thereof.
  • In a fifth aspect, a computer program product is provided, which includes a computer program or instructions; when the computer program or instructions run on a computer, the computer executes the method according to the first aspect and any possible implementation manner thereof.
  • By switching the first video source device of the first target vehicle from the first camera to the second camera when the first target vehicle enters the shooting area of the second camera from the shooting area of the first camera, the accuracy of the determined real-time first video source device can be improved, thereby improving the effect of the video relay.
  • FIG. 1 is a schematic diagram of the architecture of an image processing system provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a display page provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a hardware structure of an image processing device provided by an embodiment of the present disclosure.
  • In the present disclosure, "at least one (item)" means one or more, and "multiple" means two or more.
  • "At least two (items)" means two, three, or more.
  • "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural.
  • The character "/" generally indicates an "or" relationship between the contextual objects. "At least one of the following items (pieces)" or a similar expression refers to any combination of these items, including any combination of single items (pieces) or plural items (pieces).
  • For example, at least one item (piece) of a, b, or c may mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may each be single or multiple.
  • As mentioned above, tracking vehicles entering a specific area is one of the typical application scenarios. To display the real-time video of a tracked vehicle, the embodiments of the present disclosure provide the following technical solution.
  • the execution subject of the embodiments of the present disclosure is an image processing apparatus, where the image processing apparatus may be any electronic device capable of executing the technical solutions disclosed in the method embodiments of the present disclosure.
  • the image processing device may be one of the following: a mobile phone, a computer, a tablet computer, and a wearable smart device.
  • FIG. 1 is a schematic diagram of an image processing system 1000 provided by an embodiment of the present disclosure.
  • The image processing apparatus 1001 also has a communication connection with the display device 1003 and, based on this communication connection, can display content through the display device 1003.
  • the image processing apparatus 1001 is a server.
  • the image processing device 1001 can be deployed in the control center of the management area.
  • At least two cameras 1002 are deployed in the management area for collecting images and/or videos in the management area.
  • The image processing device 1001 processes the images and/or videos collected by the at least two cameras 1002 based on the technical solutions provided below, determines the first video source device of the first target vehicle, and can then display the capture screen of the first video source device through the display device 1003.
  • the at least two cameras 1002 include a first camera and a second camera.
  • the image processing device 1001 determines that the first target vehicle appears in the shooting area of the first camera, it determines that the first video source device is the first camera, and then displays the picture taken by the first camera through the display device 1003 .
  • When the image processing device 1001 determines that the first target vehicle has entered the shooting area of the second camera from the shooting area of the first camera, it switches the first video source device from the first camera to the second camera, and the display screen of the display device 1003 is switched from the picture shot by the first camera to the picture shot by the second camera.
  • the image processing device 1001 and the display device 1003 may be the same terminal device, and the terminal device includes both image processing and analysis capabilities and display capabilities, and can simultaneously implement the method steps performed by the image processing device 1001 and the display device 1003 .
  • the image processing apparatus 1001 and the display device 1003 are computers.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure, the method is executed by an electronic device, and the method includes the following steps 201 and 202, wherein:
  • Step 201 When it is determined that the first target vehicle is within the shooting area of the first camera, determine that the first video source device of the first target vehicle is the first camera.
  • the vehicle (including the above-mentioned first target vehicle, and the second target vehicle and the vehicle to be identified which will be mentioned below) may be any vehicle.
  • the first target vehicle is a sedan.
  • the first target vehicle is a dump truck.
  • the first target vehicle is a bus.
  • The camera (including the above-mentioned first camera and the second camera mentioned below) may be a video camera, or may be a capture (snapshot) camera.
  • The video source device of a vehicle (including the first video source device of the above-mentioned first target vehicle and the second video source device of the second target vehicle mentioned below) is the source of the video shown when displaying the real-time video of that vehicle.
  • For example, if the first video source device of the first target vehicle is camera a, the real-time video of the first target vehicle is the real-time video collected by camera a.
  • The first target vehicle being in the shooting area of the first camera indicates that, when displaying the real-time video of the first target vehicle, the real-time video collected by the first camera should be displayed. Therefore, in this case, the image processing device determines that the first video source device of the first target vehicle is the first camera.
  • Step 202 When it is determined that the first target vehicle has entered the shooting area of the second camera from the shooting area of the first camera, switch the first video source device of the first target vehicle from the first camera to the second camera, where the first camera is different from the second camera.
  • When the first target vehicle drives from the shooting area of the first camera into the shooting area of the second camera, the displayed video should be switched from the real-time video collected by the first camera to the real-time video collected by the second camera. Therefore, when it is determined that the first target vehicle has driven from the shooting area of the first camera into the shooting area of the second camera, the first video source device of the first target vehicle is switched from the first camera to the second camera.
  • In this way, the image processing device switches the first video source device of the first target vehicle from the first camera to the second camera when the first target vehicle enters the shooting area of the second camera from the shooting area of the first camera, so that the first video source device of the first target vehicle can be determined in real time.
  • The real-time video of the first target vehicle can then be displayed by displaying the real-time video collected by the first video source device.
  • Switching the first video source device of the first target vehicle from the first camera to the second camera in this way is called video relay, as sketched below.
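As a minimal illustration of this bookkeeping, the following Python sketch maintains the mapping from each tracked vehicle to its current video source device; the class and method names are illustrative and are not taken from the disclosure.

```python
# A minimal sketch of the video-relay bookkeeping, assuming string identifiers
# for vehicles and cameras; names are illustrative, not from the disclosure.
class VideoRelay:
    def __init__(self):
        self.video_source = {}  # vehicle_id -> camera currently used as its video source device

    def set_source(self, vehicle_id, camera_id):
        # Step 201: the vehicle is determined to be in this camera's shooting area.
        self.video_source[vehicle_id] = camera_id

    def switch_source(self, vehicle_id, new_camera_id):
        # Step 202: the vehicle drove from the current camera's shooting area
        # into another camera's shooting area, so the video source is relayed.
        old_camera_id = self.video_source.get(vehicle_id)
        if old_camera_id != new_camera_id:
            self.video_source[vehicle_id] = new_camera_id
        return old_camera_id, new_camera_id

relay = VideoRelay()
relay.set_source("first_target_vehicle", "first_camera")
relay.switch_source("first_target_vehicle", "second_camera")
print(relay.video_source)  # {'first_target_vehicle': 'second_camera'}
```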
  • In some embodiments, before determining that the above-mentioned first target vehicle enters the shooting area of the second camera from the shooting area of the above-mentioned first camera, the image processing device also performs the following steps:
  • Step 1 Acquire a time difference threshold and at least two first images.
  • the time difference threshold is a positive number.
  • The at least two first images may be exactly two first images, or may be more than two first images.
  • Each of the at least two first images includes a first target vehicle.
  • The at least two first images include at least one image captured by the first camera and at least one image captured by the second camera.
  • the at least two first images include image a and image b.
  • image a is captured by the first camera
  • image b is captured by the second camera.
  • both image a and image b include the first target vehicle.
  • the image processing apparatus receives a time difference threshold input by a user through an input component.
  • the above-mentioned input components include: a keyboard, a mouse, a touch screen, a touch pad, an audio input device, and the like.
  • the image processing apparatus receives the time difference threshold sent by the terminal.
  • the above-mentioned terminals include: mobile phones, computers, tablet computers, and servers.
  • the image processing device receives at least two first images input by a user through an input component.
  • the image processing apparatus receives at least two first images sent by the terminal.
  • the step of acquiring at least two first images and the step of acquiring the time difference threshold may be performed separately, or may be performed simultaneously.
  • the image processing device may first acquire at least two first images, and then acquire the time difference threshold.
  • the image processing device may acquire the time difference threshold first, and then acquire at least two first images.
  • the image processing apparatus acquires a time difference threshold during the process of acquiring at least two first images, or acquires at least two first images during the process of acquiring the time difference threshold.
  • Step 2 Determine the difference between the time stamp of the second image and the time stamp of the third image.
  • The second image is the image with the smallest time stamp among the images collected by the second camera in the at least two first images, and the third image is the image with the smallest time stamp among the images collected by the first camera in the at least two first images.
  • For example, the at least two first images include image a, image b, image c, and image d, where image a and image d are captured by the first camera, and image b and image c are captured by the second camera. If the acquisition time of image b is earlier than the acquisition time of image c, that is, the time stamp of image b is smaller than the time stamp of image c, then the second image is image b. If the acquisition time of image a is earlier than the acquisition time of image d, that is, the time stamp of image a is smaller than the time stamp of image d, then the third image is image a.
  • the difference between the time stamp of the second image and the time stamp of the third image is the difference obtained by subtracting the time stamp of the third image from the time stamp of the second image.
  • the difference characterizes the time difference between the acquisition time of the second image and the acquisition time of the third image.
  • the image processing device determines that the first target vehicle enters the shooting area of the second camera from the shooting area of the first camera by performing the following steps:
  • Step 3 If the difference is greater than the time difference threshold, determine that the first target vehicle has entered the shooting area of the second camera from the shooting area of the first camera.
  • When the first target vehicle drives from the shooting area of the first camera into the shooting area of the second camera, the first target vehicle first appears in images of the shooting area of the first camera, and then appears in images of the shooting area of the second camera. That is to say, the first camera captures the first target vehicle first, and the second camera captures the first target vehicle afterwards. In addition, it takes a certain amount of time for the vehicle to drive through a camera's shooting area from entering it to leaving it; that is, there is a time difference between the moment the first camera captures the first target vehicle and the moment the second camera captures the first target vehicle. Therefore, whether the first target vehicle has driven from the shooting area of the first camera into the shooting area of the second camera can be judged by the time difference between the collection time of the second image and the collection time of the third image.
  • The time difference threshold may be understood as the time a vehicle needs from entering a camera's shooting area to driving out of it. It should be understood that the time difference threshold is an empirical value, and its value may be set according to actual conditions.
  • If the difference obtained in step 2 is greater than the time difference threshold, then on the one hand, because the time difference threshold is a positive number, the first camera captured the first target vehicle first and the second camera captured it later. On the other hand, a difference greater than the time difference threshold means that the first target vehicle has driven out of the shooting area of the first camera. That is to say, the first target vehicle has driven from the shooting area of the first camera into the shooting area of the second camera.
  • the image processing device determines that the first target vehicle enters the shooting area of the second camera from the shooting area of the first camera when the difference value is greater than the time difference threshold.
  • If the difference obtained in step 2 is less than or equal to the time difference threshold, one of the following two situations applies:
  • The second camera captured the first target vehicle first, and the first camera captured the first target vehicle afterwards; in this case, the first target vehicle is in the shooting area of the first camera.
  • The first camera captured the first target vehicle first, and the second camera captured the first target vehicle afterwards, but the first target vehicle has not yet driven out of the shooting area of the first camera; that is, the first target vehicle is still in the shooting area of the first camera.
  • Therefore, the image processing device determines that the target vehicle is within the shooting area of the first camera when the difference is less than or equal to the time difference threshold, as sketched below.
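Under the definitions above (the second image is the earliest first image captured by the second camera, the third image is the earliest captured by the first camera), the whole timestamp test can be sketched as follows; the (camera, timestamp) tuple representation of the first images is an assumption made for illustration.

```python
# Sketch of steps 1-3: decide from timestamps whether the first target vehicle
# has driven from the first camera's shooting area into the second camera's.
def entered_second_area(first_images, time_diff_threshold):
    # first_images: (camera_id, timestamp) pairs for images containing the vehicle
    cam1_ts = [t for cam, t in first_images if cam == "first_camera"]
    cam2_ts = [t for cam, t in first_images if cam == "second_camera"]
    if not cam1_ts or not cam2_ts:
        return False  # images from both cameras are needed
    # timestamp of the second image minus timestamp of the third image
    difference = min(cam2_ts) - min(cam1_ts)
    return difference > time_diff_threshold

images = [("first_camera", 10.0), ("first_camera", 12.5), ("second_camera", 18.0)]
print(entered_second_area(images, time_diff_threshold=5.0))  # True: 18.0 - 10.0 > 5.0
```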
  • Through step 1 and step 2, the image processing device judges whether the first target vehicle has driven from the shooting area of the first camera into the shooting area of the second camera by the difference between the time stamp of the second image and the time stamp of the third image, which can improve the accuracy of the judgment and thereby improve the effect of the video relay.
  • In some embodiments, the at least two first images are the n images with the largest time stamps in a target vehicle image set, where every image in the target vehicle image set includes the first target vehicle, the target vehicle image set includes both images collected by the first camera and images collected by the second camera, and n is an integer greater than 1.
  • The images collected by the cameras are time-sensitive, and the video relay needs to determine the first video source device in real time. Therefore, when the at least two first images are the n images with the largest time stamps in the target vehicle image set, the accuracy of determining the real-time first video source device can be improved, thereby improving the effect of the video relay.
  • In some embodiments, after determining that the above-mentioned difference is greater than the above-mentioned time difference threshold and before determining that the first target vehicle has entered the shooting area of the second camera from the shooting area of the above-mentioned first camera, the image processing device also performs the following steps:
  • Step 4. Determine the first size of the first target vehicle in the second image and the second size of the first target vehicle in the third image.
  • the size of the first target vehicle in the image may be the size of the pixel area covered by the first target vehicle.
  • In some embodiments, the size of the first target vehicle in an image is the area of a vehicle detection frame containing the first target vehicle, where the vehicle detection frame is obtained by performing vehicle detection processing on the image.
  • the image processing device obtains the first vehicle detection frame including the first target vehicle by performing vehicle detection processing on the second image, and obtains the second vehicle detection frame including the first target vehicle by performing vehicle detection processing on the third image.
  • the area of the first vehicle detection frame is the first size
  • the area of the second vehicle detection frame is the second size.
  • After step 4, the image processing device determines that the first target vehicle enters the shooting area of the second camera from the shooting area of the first camera by performing the following steps:
  • Step 5 If the first size is larger than the second size, determine that the first target vehicle has entered the shooting area of the second camera from the shooting area of the first camera.
  • If the first size is larger than the second size, it means that the distance between the first target vehicle and the second camera is smaller than the distance between the first target vehicle and the first camera; at this time, the second camera should be used as the first video source device of the first target vehicle. Therefore, when the first size is larger than the second size, the image processing device determines that the first target vehicle has driven into the shooting area of the second camera from the shooting area of the first camera.
  • If the first size is smaller than the second size, it means that the distance between the first target vehicle and the second camera is greater than the distance between the first target vehicle and the first camera; at this time, the first camera should be used as the first video source device of the first target vehicle. In some embodiments, the image processing device determines that the first target vehicle is still within the shooting area of the first camera when the first size is smaller than the second size.
  • If the first size is equal to the second size, it means that the distance between the first target vehicle and the second camera is the same as the distance between the first target vehicle and the first camera. However, since the current first video source device of the first target vehicle is the first camera, in order to make the display effect of the video relay smoother, the first camera can still be used as the first video source device of the first target vehicle at this time.
  • the image processing device determines that the first target vehicle is still within the shooting area of the first camera when the first size is equal to the second size.
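The size comparison of steps 4 and 5 can be sketched as below; representing a vehicle detection frame as (x1, y1, x2, y2) corner coordinates is an assumption, since the disclosure only specifies that the size is the area of the detection frame.

```python
# Sketch of steps 4-5: compare the vehicle's detection-frame areas in the
# second image (second camera) and the third image (first camera).
def box_area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def size_says_entered_second_area(frame_in_second_image, frame_in_third_image):
    first_size = box_area(frame_in_second_image)   # size in the second camera's image
    second_size = box_area(frame_in_third_image)   # size in the first camera's image
    # A larger frame in the second camera's image means the vehicle is closer
    # to the second camera, so the relay to the second camera is confirmed.
    return first_size > second_size

print(size_says_entered_second_area((0, 0, 200, 100), (0, 0, 100, 80)))  # True: 20000 > 8000
```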
  • Through step 4 and step 5, if the difference is greater than the time difference threshold, the image processing device further judges, based on the first size and the second size, whether the first target vehicle has entered the shooting area of the second camera from the shooting area of the first camera, which can improve the accuracy of the judgment and thereby improve the effect of the video relay.
  • the image processing device also displays the shooting picture of the first video source device, so that the real-time video of the first target vehicle can be displayed, achieving the effect of visually tracking the first target vehicle.
  • both the first camera and the second camera are deployed in the target area.
  • The target area may be any area; for example, the target area may be a school or an industrial park.
  • In some embodiments, the target area includes a management area. Both the first camera and the second camera are deployed in the target area; that is, both the shooting area of the first camera and the shooting area of the second camera are located in the target area.
  • In some embodiments, the image processing device stops displaying the shooting picture of the first video source device when it determines that the first target vehicle has left the target area. This can release the hardware resources used for tracking and displaying the first target vehicle and improve the utilization rate of the hardware resources of the image processing device, where the hardware resources include the hardware resources used for determining the first video source device of the first target vehicle and the hardware resources used for displaying the captured picture of the first video source device of the first target vehicle.
  • the image processing device tracks the track of the first target vehicle in the target area through the first camera and the second camera, and displays the video of the first target vehicle in the target area in real time.
  • When the image processing device determines that the first target vehicle has left the target area, it determines that there is no need to continue tracking the track of the first target vehicle in the target area, and therefore stops displaying the captured picture of the first video source device, so as to improve the utilization rate of the hardware resources of the image processing device.
  • In some embodiments, stopping displaying the shooting picture of the first video source device means stopping displaying the pictures newly collected by the first video source device, while taking the last frame image of the first target vehicle leaving the target area as the display picture of a first tracking display area, where the first tracking display area is used to display the video of the first target vehicle in the target area.
  • For example, the image processing device determines that the first target vehicle leaves the target area after driving out of the shooting area of the second camera, and the last frame captured by the second camera that includes the first target vehicle is image a. The image processing device then uses image a as the display picture of the first tracking display area; that is, after the first target vehicle leaves the target area, the display picture of the first tracking display area is always image a.
  • the instruction may be input by the user to the image processing device through the input component, or the instruction may be sent by the terminal to the image processing device.
  • In some embodiments, the image processing device determines that the first target vehicle has left the target area when it determines that the first target vehicle has disappeared from the shooting area of a checkpoint (bayonet) camera and the first target vehicle had a track record in the target area before disappearing from the shooting area of the checkpoint camera, where the checkpoint camera is a camera deployed at an entrance/exit of the target area.
  • Since the checkpoint camera is deployed at the entrance/exit of the target area, when the first target vehicle disappears from the shooting area of the checkpoint camera, it means that the first target vehicle has either just entered the target area or just left the target area. Since the first target vehicle already had a track record in the target area before it disappeared from the shooting area of the checkpoint camera, it must have left the target area. Therefore, the image processing device determines in this case that the first target vehicle has left the target area.
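This decision rule can be written compactly as below; the function name and boolean inputs are illustrative.

```python
# Sketch of the checkpoint-camera rule: disappearing from a checkpoint camera
# means the vehicle either entered or left the target area; a prior track
# record inside the area disambiguates the two cases.
def checkpoint_event(disappeared_from_checkpoint, had_track_record_before):
    if not disappeared_from_checkpoint:
        return "still inside or unknown"
    return "left target area" if had_track_record_before else "entered target area"

print(checkpoint_event(True, True))   # left target area
print(checkpoint_event(True, False))  # entered target area
```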
  • In some embodiments, the shooting picture of the first video source device is displayed in a first display area of a display page, and the display page also includes a second display area, where the second display area is used to display the shooting picture of the second video source device of a second target vehicle, and the first target vehicle is different from the second target vehicle.
  • the display page can simultaneously display the real-time video of the first target vehicle and the real-time video of the second target vehicle.
  • The first display area and the second display area in the embodiments of the present disclosure are only examples, and should not be understood to mean that the display page can simultaneously display the real-time videos of only two vehicles.
  • In practical applications, the image processing device may simultaneously display the real-time videos of m vehicles on the display page, where m is an integer greater than 1.
  • The display page shown in FIG. 3 includes four different display areas, namely the first display area 301, the second display area 302, the third display area 303, and the fourth display area 304, where different display areas are used to display the real-time videos of different vehicles.
  • The first display area 301 is used to display the real-time video of vehicle a, the second display area 302 is used to display the real-time video of vehicle b, the third display area 303 is used to display the real-time video of vehicle c, and the fourth display area 304 is used to display the real-time video of vehicle d.
  • the following steps are further performed:
  • Step 6 Obtain the second image captured by the above-mentioned first camera.
  • the image processing device acquires the second image captured by the first camera through the communication connection.
  • the image processing apparatus receives the second image input by the user through the input component.
  • the image processing apparatus receives the second image sent by the terminal.
  • After step 6, the image processing device determines that the first target vehicle is within the shooting area of the first camera when it determines that the second image includes the first target vehicle.
  • In some embodiments, the second image includes a vehicle to be identified, and the image processing device further performs the following steps before determining that the second image includes the first target vehicle:
  • Step 7 Obtain the target characteristic data of the first target vehicle.
  • the target characteristic data of the first target vehicle carries the identity information of the first target vehicle, that is, by comparing the characteristic data of any vehicle with the target characteristic data, it can be determined whether the vehicle is the same as the first target vehicle .
  • the target feature data includes a feature vector carrying identity information of the first target vehicle.
  • the target feature data carries local feature information of the first target vehicle.
  • the target characteristic data carries at least one of the following information: characteristic information of vehicle lights, characteristic information of vehicle logos, and characteristic information of vehicle windows.
  • the target feature data carries global feature information of the first target vehicle, where the global feature information includes overall appearance feature information of the first target vehicle.
  • the target feature data carries not only local feature information of the first target vehicle, but also global feature information of the first target vehicle.
  • the image processing device receives target feature data input by a user through an input component.
  • the image processing apparatus receives target feature data sent by the terminal.
  • the image processing device determines that the second image includes the first target vehicle by performing the following steps:
  • Step 8 Perform feature extraction processing on the second image to obtain the first feature data of the vehicle to be identified.
  • the first feature data carries the identity information of the vehicle to be identified, that is, by comparing the feature data of any vehicle with the first feature data, it can be determined whether the vehicle is the same as the vehicle to be identified.
  • the first feature data includes a feature vector carrying identity information of the vehicle to be identified.
  • the first feature data carries local feature information of the vehicle to be identified.
  • the first characteristic data carries at least one of the following information: characteristic information of vehicle lights, characteristic information of vehicle logos, and characteristic information of vehicle windows.
  • the first feature data carries global feature information of the vehicle to be identified, where the global feature information includes overall appearance feature information of the vehicle to be identified.
  • the first feature data carries not only local feature information of the vehicle to be identified, but also global feature information of the vehicle to be identified.
  • the feature extraction process is used to extract feature data of the vehicle in the image.
  • the image processing device implements feature extraction processing on the image by performing convolution processing on the image.
  • the feature extraction processing of the image is implemented through a convolutional neural network.
  • the convolutional neural network is trained, so that the trained convolutional neural network can complete the feature extraction process of the image.
  • the annotation information of the training data includes at least one of the following: local feature information of the vehicle in the image, and global feature information of the vehicle in the image.
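As one possible realization of such a feature extractor, the sketch below uses a generic torchvision ResNet-50 backbone with its classifier removed; the disclosure only requires a trained convolutional neural network, so this particular backbone, its weights, and the preprocessing are assumptions.

```python
# Sketch of a CNN feature extractor producing a feature vector per image;
# in practice the network would be trained on vehicle data annotated with
# local and/or global feature information, as described above.
import torch
import torchvision.models as models
import torchvision.transforms as T

backbone = models.resnet50(weights=None)  # assumed backbone; load trained weights in practice
backbone.fc = torch.nn.Identity()         # drop the classifier, keep the 2048-d embedding
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def extract_feature(pil_image):
    # Returns the feature data of the vehicle in the image as a vector.
    with torch.no_grad():
        x = preprocess(pil_image).unsqueeze(0)  # (1, 3, 224, 224)
        return backbone(x).squeeze(0)           # (2048,)
```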
  • Step 9 Obtain a comparison result between the first target vehicle and the vehicle to be identified according to the first similarity between the target feature data and the first feature data.
  • the first similarity is the similarity between the target feature data and the first feature data.
  • The comparison result may include that the first target vehicle is the same as the vehicle to be identified, or that the first target vehicle is different from the vehicle to be identified.
  • In some embodiments, the image processing device determines that the comparison result includes that the first target vehicle is the same as the vehicle to be identified when the first similarity is greater than a feature comparison threshold; when the first similarity is less than or equal to the feature comparison threshold, the image processing device determines that the comparison result includes that the first target vehicle is different from the vehicle to be identified.
  • In some embodiments, the image processing device computes the square of the first similarity to obtain a squared similarity. When the squared similarity is greater than the feature comparison threshold, it determines that the comparison result includes that the first target vehicle is the same as the vehicle to be identified; when the squared similarity is less than or equal to the feature comparison threshold, it determines that the comparison result includes that the first target vehicle is different from the vehicle to be identified.
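The threshold comparison of step 9 can be sketched as follows; cosine similarity is an assumption here, as the disclosure does not fix the similarity measure.

```python
# Sketch of the feature comparison, including the optional squared-similarity variant.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_vehicle(target_feature, first_feature, feature_threshold, use_square=False):
    s1 = cosine_similarity(target_feature, first_feature)  # first similarity
    score = s1 * s1 if use_square else s1                   # squared-similarity variant
    return score > feature_threshold

target = np.array([0.2, 0.9, 0.1])
candidate = np.array([0.25, 0.85, 0.12])
print(same_vehicle(target, candidate, feature_threshold=0.9))  # True for near-identical features
```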
  • Step 10 When the comparison result includes that the first target vehicle is the same as the vehicle to be identified, determine that the second image includes the first target vehicle.
  • That is, the image processing device compares the first feature data with the target feature data to obtain the first similarity, and then determines, according to the first similarity, whether the vehicle to be identified is the first target vehicle, thereby determining whether the first target vehicle is included in the second image.
  • When the comparison result includes that the first target vehicle is different from the vehicle to be identified, it is determined that the second image does not include the first target vehicle.
  • In some embodiments, before performing step 9, the image processing device further performs the following steps:
  • Step 11 Obtain at least one first target vehicle attribute of the above-mentioned first target vehicle.
  • The vehicle attributes (including the above-mentioned first target vehicle attribute and the first vehicle attribute mentioned below) include at least one of the following: vehicle type, body color, vehicle brand, vehicle number, and vehicle model.
  • In some embodiments, the vehicle types include: hatchback, sedan, sport utility vehicle (SUV), pickup, truck, bus, dump truck, and muck truck.
  • the body colors include: black, blue, gray, white, navy blue, yellow, red, green, and purple.
  • the vehicle brands include: Mercedes-Benz, BMW, Audi, Volkswagen, Hyundai, Toyota, Nissan, Great Wall, and BYD.
  • the first target vehicle attribute is the vehicle attribute of the first target vehicle.
  • the at least one first target vehicle attribute may be one first target vehicle attribute, and the at least one first target vehicle attribute may also be two or more first target vehicle attributes.
  • the at least one first target vehicle attribute includes a body color of the first target vehicle.
  • the at least one attribute of the first target vehicle includes the body color of the first target vehicle and the vehicle type of the first target vehicle.
  • In some embodiments, the at least one first target vehicle attribute of the first target vehicle is stored in a storage medium of the image processing device.
  • the image processing device acquires at least one first target vehicle attribute of the first target vehicle by reading at least one first target vehicle attribute of the first target vehicle from the storage medium.
  • the image processing device receives at least one first target vehicle attribute of the first target vehicle input by a user through an input component.
  • the image processing device receives the at least one first target vehicle attribute of the first target vehicle sent by the terminal.
  • Step 12. Perform vehicle attribute extraction processing on the second image to obtain at least one first vehicle attribute of the vehicle to be identified.
  • a vehicle attribute extraction process is used to extract vehicle attributes in the image.
  • the vehicle attribute extraction process can be realized by a vehicle attribute extraction model, wherein the vehicle attribute extraction model is a computer vision model for extracting vehicle attributes.
  • The first vehicle attribute is a vehicle attribute of the vehicle to be identified obtained by performing vehicle attribute extraction processing on the second image.
  • the at least one first vehicle attribute may be one first vehicle attribute, and the at least one first vehicle attribute may also be two or more first vehicle attributes.
  • the at least one first vehicle attribute includes the body color of the vehicle to be identified.
  • the at least one first vehicle attribute includes the body color of the vehicle to be identified and the vehicle type of the vehicle to be identified.
  • After step 11 and step 12, the image processing device performs the following steps in the process of performing step 9:
  • Step 13 Obtain a comparison result between the first target vehicle and the vehicle to be identified according to the first similarity and the second similarity.
  • The second similarity is the similarity between the at least one first target vehicle attribute and the at least one first vehicle attribute.
  • In some embodiments, the image processing device determines that the comparison result includes that the first target vehicle is the same as the vehicle to be identified when the first similarity is greater than the feature comparison threshold and the second similarity is greater than an attribute comparison threshold. Otherwise, the image processing device determines that the comparison result includes that the first target vehicle is different from the vehicle to be identified.
  • The first similarity being greater than the feature comparison threshold indicates that the feature data of the first target vehicle is similar to the feature data of the vehicle to be identified.
  • The second similarity being greater than the attribute comparison threshold indicates that the vehicle attributes of the first target vehicle are similar to the vehicle attributes of the vehicle to be identified. Therefore, judging in this case that the first target vehicle is the same as the vehicle to be identified can improve the judgment accuracy.
  • the image processing apparatus obtains the third similarity according to the first similarity and the second similarity. In a case where the third similarity is greater than the similarity threshold, it is determined that the comparison result includes that the first target vehicle is the same as the vehicle to be identified. In a case where the third similarity is less than or equal to the similarity threshold, it is determined that the comparison result includes that the first target vehicle is different from the vehicle to be identified.
  • the image processing apparatus obtains the third similarity by weighting and summing the first similarity and the second similarity.
  • Denote the first similarity as s1, the second similarity as s2, and the third similarity as s3. For example, s1, s2, and s3 satisfy the following formula: s3 = w1 × s1 + w2 × s2, where w1 and w2 are the weights used in the weighted summation.
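A sketch of the weighted fusion follows. The weight values, and computing the second similarity as the fraction of matching vehicle attributes, are assumptions; the disclosure only states that the third similarity is obtained by weighting and summing the first and second similarities.

```python
# Sketch of fusing the feature similarity s1 with the attribute similarity s2.
def attribute_similarity(target_attrs, candidate_attrs):
    # Assumed form of s2: the fraction of shared attribute fields that match,
    # e.g. {"body_color": "gray", "vehicle_type": "SUV"}.
    keys = set(target_attrs) & set(candidate_attrs)
    if not keys:
        return 0.0
    return sum(target_attrs[k] == candidate_attrs[k] for k in keys) / len(keys)

def fused_same_vehicle(s1, s2, w1=0.7, w2=0.3, similarity_threshold=0.6):
    s3 = w1 * s1 + w2 * s2  # third similarity as a weighted sum
    return s3 > similarity_threshold

s2 = attribute_similarity({"body_color": "gray", "vehicle_type": "SUV"},
                          {"body_color": "gray", "vehicle_type": "sedan"})
print(fused_same_vehicle(s1=0.92, s2=s2))  # 0.7*0.92 + 0.3*0.5 = 0.794 > 0.6 -> True
```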
  • In this way, the image processing device obtains the comparison result according to the first similarity and the second similarity, which utilizes not only the information carried by the feature data but also the information carried by the vehicle attributes, and can therefore improve the accuracy of the comparison result.
  • In some embodiments, when the first camera is deployed in the target area, the image processing device performs the following steps in the process of executing step 7:
  • Step 14 Acquire a database of vehicles to be tracked.
  • the database of vehicles to be tracked includes characteristic data of at least one vehicle to be tracked.
  • the database of vehicles to be tracked includes feature data of vehicle a to be tracked and feature data of vehicle b to be tracked.
  • the at least one vehicle to be tracked includes a vehicle whose trajectory is to be tracked within the target area.
  • the target area is the management area.
  • any vehicle entering the target area needs to be tracked.
  • the vehicle entering the target area is the vehicle to be tracked.
  • The target vehicle may be any one of the at least one vehicle to be tracked; that is, the target vehicle is a vehicle whose trajectory needs to be tracked within the target area.
  • the image processing device receives the database of vehicles to be tracked input by a user through an input component.
  • the image processing device receives the database of vehicles to be tracked sent by the terminal.
  • Step 15 In the case of acquiring the second image captured by the first camera, acquire one piece of feature data from the database of vehicles to be tracked as the target feature data.
  • Here the first camera is deployed in the target area, the second image is collected by the first camera, and the second image includes the vehicle to be identified. The image processing device obtaining the second image collected by the first camera indicates that the vehicle to be identified has entered the target area, so it is necessary to determine the identity of the vehicle to be identified, that is, to determine which of the at least one vehicle to be tracked the vehicle to be identified is.
  • Therefore, when the image processing device acquires the second image captured by the first camera, it acquires one piece of feature data from the database of vehicles to be tracked and uses this feature data as the target feature data. In this way, it can be determined whether the vehicle to be identified is the target vehicle by comparing the first feature data with the target feature data.
  • The target feature data in step 15 is only an example; it should not be understood that the image processing device acquires the feature data of only one vehicle to be tracked from the database of vehicles to be tracked and determines only whether the vehicle to be identified is the same as that one vehicle. In practical applications, the image processing device may compare the first feature data with the feature data of each vehicle to be tracked in the database of vehicles to be tracked, so as to determine the identity of the vehicle to be identified.
  • In some embodiments, the following steps are also performed: acquiring a fifth image of the target area and a first position of the first camera in the fifth image, and determining a second position of the first target vehicle in the fifth image according to the first position.
  • The comparison result including that the first target vehicle is the same as the vehicle to be identified indicates that the first target vehicle has appeared in the shooting area of the first camera. Therefore, the position of the first target vehicle in the fifth image, i.e., the second position, can be determined according to the first position.
  • the image processing apparatus uses the first location as the second location.
  • In some embodiments, the target area is an area with missing map data.
  • The map-missing area includes an area for which detailed information is missing on the map, where the detailed information includes at least one of the following: building information and road information. In this way, by determining the position of the first target vehicle in the fifth image, the accuracy of positioning in the target area can be improved.
  • the second position is displayed in the fifth image, that is, the position of the first target vehicle within the target area is displayed.
  • In some embodiments, when the image processing device obtains at least two positions of the first target vehicle in the fifth image, it determines the track of the first target vehicle in the fifth image according to the at least two positions and displays the track.
  • the image processing device performs the following steps in the process of executing step 14:
  • Step 16 Acquire at least one fourth image, where the at least one fourth image includes the at least one vehicle to be tracked.
  • the image processing apparatus receives at least one fourth image input by a user through an input component.
  • the image processing apparatus receives at least one fourth image sent by the terminal.
  • In some embodiments, the image processing device acquires, through the communication connection, at least one to-be-confirmed image captured by at least one checkpoint camera as the at least one fourth image.
  • If the image processing device determines that the vehicle in a to-be-confirmed image does not have a track record in the target area, it uses the to-be-confirmed image as a fourth image, thereby obtaining the at least one fourth image.
  • At least one image to be confirmed includes image a and image b, wherein image a includes vehicle A, and image b includes vehicle B. If there is no track record of vehicle A in the target area, the image processing device uses image a as the fourth image. If the vehicle B has a track record in the target area, the image processing device does not use the image b as the fourth image. At this time, at least one fourth image includes image a.
  • Step 17 Obtain the vehicle database to be tracked according to the at least one fourth image.
  • In some embodiments, the image processing device obtains the database of vehicles to be tracked by performing structured processing on the at least one fourth image.
  • For example, the image processing device obtains the feature data of the at least one vehicle to be tracked by performing feature extraction processing on the at least one fourth image, and thereby obtains the database of vehicles to be tracked.
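A minimal sketch of building this database follows; the (vehicle_id, image) pairing, the dictionary layout, and the injected extract_feature function (for example, the CNN sketch above) are illustrative assumptions.

```python
# Sketch of structuring the fourth images into a to-be-tracked vehicle database.
def build_tracking_database(fourth_images, extract_feature):
    # fourth_images: iterable of (vehicle_id, image) pairs, one per vehicle to be tracked;
    # extract_feature: the assumed feature-extraction function.
    database = {}
    for vehicle_id, image in fourth_images:
        database[vehicle_id] = {
            "feature": extract_feature(image),  # feature data of the vehicle to be tracked
            # vehicle attributes may also be stored here, per the later steps
        }
    return database
```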
  • the image processing device performs the following steps during the execution of step 16:
  • Step 18 Acquire at least one sixth image.
  • each of the at least one sixth image includes a vehicle.
  • The vehicles in the at least one sixth image are vehicles to be confirmed; that is, for each vehicle to be confirmed, it still needs to be confirmed whether its trajectory should be tracked within the target area.
  • In some embodiments, the image processing device acquires, through the communication connection, at least one to-be-confirmed image collected by at least one checkpoint camera. After acquiring a to-be-confirmed image, if the image processing device determines that the vehicle in the to-be-confirmed image does not have a track record in the target area, it uses the to-be-confirmed image as a sixth image, thereby obtaining the at least one sixth image.
  • At least one image to be confirmed includes image a and image b, wherein image a includes vehicle A, and image b includes vehicle B. If there is no track record of vehicle A in the target area, the image processing device uses image a as the sixth image. If the vehicle B has a track record in the target area, the image processing device does not use the image b as the sixth image. At this time, at least one sixth image includes image a.
  • Step 19 When a deletion instruction for the at least one sixth image is detected, obtain the at least one fourth image according to the at least one sixth image and the deletion instruction.
  • the deletion instruction may be input by the user to the image processing apparatus.
  • The user can delete at least one image in the at least one sixth image, thereby determining the at least one vehicle to be tracked from the at least one vehicle to be confirmed included in the at least one sixth image.
  • At least one sixth image includes image a and image b, wherein image a includes vehicle A to be confirmed, and image b includes vehicle B to be confirmed.
  • For example, the user confirms that image a is blurred; if the database of vehicles to be tracked were obtained based on image a, the accuracy of the data in the database might be affected. Therefore, the user inputs to the image processing device a deletion instruction for deleting image a. When the image processing device detects the deletion instruction, it deletes image a from the at least one sixth image to obtain the at least one fourth image. At this time, the at least one fourth image includes image b, and the at least one vehicle to be tracked includes vehicle B.
  • the image processing device performs the following steps in the process of executing step 19:
  • Step 20 Delete at least one image from the at least one sixth image according to the deletion instruction to obtain at least one seventh image.
  • The at least one seventh image in this step corresponds to the at least one fourth image in step 19; that is, in this step, what the image processing device obtains by deleting images from the at least one sixth image according to the deletion instruction is not the at least one fourth image but the at least one seventh image.
  • the vehicles in at least one seventh image are all vehicles to be confirmed, that is, at least one seventh image includes at least one vehicle to be confirmed.
  • Step 21 In a case where a confirmation tracking instruction for at least one seventh image is detected, use the at least one seventh image indicated by the confirmation tracking instruction as at least one fourth image.
  • the confirmation tracking instruction may be input by a user to the image processing device.
  • By inputting a confirmation tracking instruction to the image processing device, the user can determine the at least one vehicle to be tracked from the at least one vehicle to be confirmed included in the at least one seventh image, and thereby determine the at least one fourth image from the at least one seventh image.
  • At least one seventh image includes image a and image b, wherein image a includes vehicle A to be confirmed, and image b includes vehicle B to be confirmed.
  • For example, the user confirms that the vehicle B to be confirmed is a staff vehicle in the target area, so there is no need to track the vehicle B to be confirmed. Therefore, the user inputs a confirmation tracking instruction to the image processing device, so that the image processing device takes vehicle A as a vehicle to be tracked.
  • When the image processing device detects the confirmation tracking instruction, image a in the at least one seventh image is used as the at least one fourth image. At this time, the at least one fourth image includes image a, and the at least one vehicle to be tracked includes vehicle A.
  • the database of vehicles to be tracked further includes at least one second vehicle attribute of at least one vehicle to be tracked, and the at least one second vehicle attribute is obtained by performing vehicle attribute extraction processing on at least one fourth image.
  • the image processing device performs the following steps during the execution of step 17:
  • Step 22 According to the at least one fourth image, obtain the feature data of the at least one vehicle to be tracked and at least one third vehicle attribute of the at least one vehicle to be tracked.
  • the image processing device can obtain feature data of at least one vehicle to be tracked according to at least one fourth image.
  • the image processing device can obtain at least one third vehicle attribute of at least one vehicle to be tracked by performing vehicle attribute extraction processing on at least one fourth image.
  • Step 23 When an editing instruction for the at least one third vehicle attribute is detected, edit the at least one third vehicle attribute according to the editing instruction to obtain the at least one second vehicle attribute.
  • the editing instruction may be input by the user to the image processing device.
  • the user can edit the at least one third vehicle attribute by inputting an editing instruction to the image processing device.
  • for example, by performing step 22 the image processing device determines that the body color of the vehicle A to be tracked is white, while the body color of vehicle A is actually gray. In this case, the user can modify the body color of vehicle A to gray by inputting an editing instruction to the image processing device; the at least one second vehicle attribute then includes that the body color of the vehicle A to be tracked is gray.
  • Step 24 Obtain the vehicle to be tracked database according to the feature data of the at least one vehicle to be tracked and the at least one second vehicle attribute.
  • the image processing device stores the feature data of the at least one vehicle to be tracked and the at least one second vehicle attribute in the vehicle to be tracked database.
  • the user can modify the vehicle attributes of the vehicle to be tracked by inputting editing instructions to the image processing device, thereby improving the accuracy of the data in the vehicle to be tracked database.
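  • a minimal sketch of steps 22 to 24, assuming a dictionary-based record layout (the field names and functions are illustrative, not part of the embodiments):

```python
# Sketch of steps 22-24: extracted (third) vehicle attributes are corrected
# by the user's editing instruction to become the second vehicle attributes
# stored with the feature data. The record layout is an assumption.

def build_tracked_vehicle_db(vehicles, edits):
    """vehicles: {vehicle_id: {"features": [...], "attributes": {...}}}
    edits: {vehicle_id: {attribute_name: corrected_value}}."""
    db = {}
    for vid, record in vehicles.items():
        attributes = dict(record["attributes"])  # third vehicle attributes
        attributes.update(edits.get(vid, {}))    # apply the editing instruction
        db[vid] = {"features": record["features"],   # feature data
                   "attributes": attributes}         # second vehicle attributes
    return db


vehicles = {"A": {"features": [0.12, 0.87, 0.44],
                  "attributes": {"body_color": "white", "type": "sedan"}}}
edits = {"A": {"body_color": "gray"}}  # the correction from the example above
db = build_tracked_vehicle_db(vehicles, edits)
print(db["A"]["attributes"]["body_color"])  # gray
```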
  • before performing step 24, the image processing device further performs the following steps:
  • Step 25 Perform vehicle detection processing on at least one seventh image, and determine at least one third position of at least one vehicle to be tracked in the at least one seventh image.
  • a vehicle detection process is used to detect the position of the vehicle in the image.
  • the image processing device determines the position of each vehicle to be tracked in the image by performing vehicle detection processing on each seventh image, and obtains at least one third position.
  • At least one seventh image includes image a and image b.
  • the image processing device determines the position of vehicle A in image a as a third position by performing vehicle detection processing on image a.
  • the image processing device determines the position of the vehicle B in the image b as a third position by performing vehicle detection processing on the image b.
  • at least one third position includes the position of vehicle A in image a and the position of vehicle B in image b.
  • Step 26 Crop at least one pixel region determined according to the at least one third position out of the at least one seventh image to obtain at least one eighth image.
  • the image processing device can determine, according to the third position, the position in the image of the vehicle detection frame containing the vehicle to be tracked, and can then use the pixel area inside that vehicle detection frame as the eighth image.
  • the at least one third position includes the position of vehicle A in image a and the position of vehicle B in image b.
  • the image processing device determines the pixel area included in the vehicle detection frame of vehicle A according to the position of vehicle A in image a, and crops that pixel area out as image c.
  • the image processing device determines the pixel area included in the vehicle detection frame of vehicle B according to the position of vehicle B in image b, and crops that pixel area out as image d.
  • at least one eighth image includes image c and image d.
  • after performing step 26, the image processing device performs the following steps in the process of performing step 24:
  • Step 27 Store at least one eighth image, feature data of at least one vehicle to be tracked, at least one second vehicle attribute and at least one seventh image in the vehicle to be tracked database.
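  • a sketch of steps 25 to 27, under the assumption that a third position is an axis-aligned detection box (x1, y1, x2, y2); the record layout is illustrative:

```python
# Sketch of steps 25-27: crop the pixel area of each vehicle detection frame
# (the third position) out of a seventh image to obtain the eighth image,
# then store both images with the feature data and second vehicle attributes.
import numpy as np


def crop_eighth_image(seventh_image, box):
    """box = (x1, y1, x2, y2): the vehicle detection frame."""
    x1, y1, x2, y2 = box
    return seventh_image[y1:y2, x1:x2]  # pixel area inside the frame


def store_record(db, vehicle_id, seventh_image, box, features, attributes):
    """Step 27: one database record per vehicle to be tracked."""
    db[vehicle_id] = {
        "seventh_image": seventh_image,
        "eighth_image": crop_eighth_image(seventh_image, box),
        "features": features,
        "attributes": attributes,
    }


db = {}
image_a = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in for image a
store_record(db, "A", image_a, box=(400, 300, 900, 700),
             features=[0.12, 0.87, 0.44], attributes={"body_color": "gray"})
print(db["A"]["eighth_image"].shape)  # (400, 500, 3)
```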
  • the image processing device also performs the following steps:
  • Step 28 obtaining retrieval conditions.
  • the retrieval condition is used to retrieve the record of the vehicle to be tracked entering the target area from the vehicle to be tracked database.
  • the retrieval condition includes at least one of the following information: time requirement, body color, vehicle type, vehicle model, license plate number.
  • the image processing apparatus receives retrieval conditions input by a user through an input component.
  • the image processing apparatus receives the retrieval conditions sent by the terminal.
  • after performing step 28, the image processing device performs one of the following steps:
  • Step 29 Search the vehicle to be tracked database using the retrieval conditions to obtain image data matching the retrieval conditions, where the image data includes at least one of the following: at least one seventh image, at least one eighth image.
  • if the retrieval condition includes a time requirement, the image processing device takes the images whose acquisition time meets the time requirement as the image data matching the retrieval condition.
  • the retrieval condition includes vehicles to be tracked that have entered the target area within the last 5 days.
  • the image processing device takes the seventh images whose acquisition time is within the last 5 days and the eighth images whose acquisition time is within the last 5 days as the image data matching the retrieval condition.
  • the acquisition time of an eighth image is the same as that of its corresponding seventh image; for example, if eighth image a is obtained by cropping a pixel area from seventh image b, the acquisition time of eighth image a is the same as that of seventh image b.
  • if the retrieval condition includes a vehicle attribute, the image processing device takes the images containing vehicles to be tracked whose vehicle attributes match the vehicle attribute in the retrieval condition as the image data matching the retrieval condition.
  • the retrieval condition includes a vehicle to be tracked whose body color is red.
  • the image processing device takes the seventh image containing the red vehicle to be tracked and the eighth image containing the red vehicle to be tracked as image data matching the retrieval condition.
  • Step 30 Search the database of vehicles to be tracked by using the retrieval conditions to obtain at least one second vehicle attribute matching the retrieval conditions.
  • if the retrieval condition includes a time requirement, the image processing device takes the images whose acquisition time meets the time requirement as the image data matching the retrieval condition, and takes at least one second vehicle attribute of the vehicles to be tracked in that image data as the at least one second vehicle attribute matching the retrieval condition.
  • the retrieval condition includes vehicles to be tracked that have entered the target area within the last 5 days.
  • the image processing device takes the seventh images whose acquisition time is within the last 5 days as the image data matching the retrieval condition, and takes at least one second vehicle attribute of the vehicles to be tracked in those seventh images as the at least one second vehicle attribute matching the retrieval condition.
  • the image processing device searches the database of vehicles to be tracked using the retrieval conditions, and can obtain not only image data matching the retrieval conditions, but also at least one second vehicle attribute matching the retrieval conditions.
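  • continuing the sketch above, the retrieval of steps 28 to 30 can be expressed as a filter over the stored records; acquired_at and the attribute keys are illustrative field names, not fields prescribed by the embodiments:

```python
# Sketch of steps 28-30: filter database records by a time requirement and/or
# vehicle attributes; each hit exposes both the image data and the second
# vehicle attributes.
from datetime import datetime, timedelta


def search(db, days=None, **attribute_conditions):
    hits = {}
    cutoff = datetime.now() - timedelta(days=days) if days is not None else None
    for vid, rec in db.items():
        if cutoff is not None and rec["acquired_at"] < cutoff:
            continue  # fails the time requirement
        if any(rec["attributes"].get(k) != v
               for k, v in attribute_conditions.items()):
            continue  # fails a vehicle-attribute condition
        hits[vid] = rec
    return hits


db = {"A": {"acquired_at": datetime.now() - timedelta(days=2),
            "attributes": {"body_color": "red"},
            "seventh_image": "image a", "eighth_image": "image c"}}
print(list(search(db, days=5, body_color="red")))  # ['A']
```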
  • the image processing device may display the trajectory of the corresponding vehicle to be tracked within the target area.
  • before performing step 6, the image processing device further performs the following steps:
  • Step 31 Obtain the working time period.
  • the first camera can collect images within a specified time period.
  • the working time period is the specified time period during which the first camera collects images.
  • the image processing apparatus receives the working time period input by the user through the input component.
  • the image processing apparatus receives the working time period sent by the terminal.
  • Step 32 Send a vehicle tracking instruction to the first camera, where the vehicle tracking instruction is used to instruct the first camera to collect images within the working time period.
  • in this way, the image processing device can cause the first camera to collect images only during the working time period, improving the working efficiency of the first camera and reducing its energy consumption.
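  • a small sketch of step 32, assuming a JSON message over some camera control channel; the message format is an assumption, as the embodiments do not fix one:

```python
# Sketch of step 32: build a vehicle tracking instruction that tells the
# first camera to collect images only within the working time period.
import json


def make_tracking_instruction(camera_id, start="08:00", end="20:00"):
    return json.dumps({"cmd": "vehicle_tracking",
                       "camera_id": camera_id,
                       "working_period": {"start": start, "end": end}})


print(make_tracking_instruction("camera-1"))
# {"cmd": "vehicle_tracking", "camera_id": "camera-1", "working_period": {"start": "08:00", "end": "20:00"}}
```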
  • FIG. 4 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
  • the image processing device 1 includes: a first processing unit 11 and a second processing unit 12 .
  • the image processing device 1 further includes: an acquisition unit 13, a determination unit 14, a display unit 15, an extraction unit 16, and a third processing unit 17, wherein:
  • the first processing unit 11 is configured to determine that the first video source device of the first target vehicle is the first camera when it is determined that the first target vehicle is within the shooting area of the first camera;
  • the second processing unit 12 is configured to, when it is determined that the first target vehicle enters the shooting area of the second camera from the shooting area of the first camera, switch the first video source device of the first target vehicle from the first camera to the second camera, where the first camera is different from the second camera.
  • the image processing device further includes: an acquisition unit 13 configured to, before it is determined that the first target vehicle enters the shooting area of the second camera from the shooting area of the first camera, acquire a time difference threshold and at least two first images, where the time difference threshold is a positive number, each of the at least two first images includes the first target vehicle, and the at least two first images include images collected by the first camera and images collected by the second camera; the determination unit 14 is configured to determine the difference between the timestamp of a second image and the timestamp of a third image, where the second image is the image with the smallest timestamp collected by the second camera among the at least two first images, and the third image is the image with the smallest timestamp collected by the first camera among the at least two first images;
  • the second processing unit 12 is configured to determine that the first target vehicle enters the shooting area of the second camera from the shooting area of the first camera when the difference is greater than the time difference threshold .
  • the at least two first images include the n images with the largest timestamps in a target vehicle image set; the images in the target vehicle image set all include the first target vehicle, the images in the target vehicle image set include images collected by the first camera and images collected by the second camera, and n is an integer greater than 1.
  • the determining unit 14 is further configured to, after the difference is greater than the time difference threshold and before it is determined that the first target vehicle enters the shooting area of the second camera from the shooting area of the first camera, determine a first size of the first target vehicle in the second image and a second size of the first target vehicle in the third image;
  • the second processing unit 12 is configured to determine that the first target vehicle enters the shooting area of the second camera from the shooting area of the first camera when the first size is larger than the second size. area.
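  • the handoff test performed by the acquisition unit 13, the determination unit 14 and the second processing unit 12 can be sketched as follows; the Observation record is an assumption, with size standing in for the area of the vehicle's detection box:

```python
# Sketch of the camera handoff test: take the smallest-timestamp image of the
# target vehicle from each camera among the most recent first images, then
# require both the time-difference test and the size test to pass.
from dataclasses import dataclass


@dataclass
class Observation:
    camera: str       # "cam1" (first camera) or "cam2" (second camera)
    timestamp: float  # acquisition time in seconds
    size: float       # area of the vehicle's detection box in the image


def entered_second_camera(observations, time_diff_threshold):
    cam1 = [o for o in observations if o.camera == "cam1"]
    cam2 = [o for o in observations if o.camera == "cam2"]
    if not cam1 or not cam2:
        return False
    second = min(cam2, key=lambda o: o.timestamp)  # the "second image"
    third = min(cam1, key=lambda o: o.timestamp)   # the "third image"
    time_ok = (second.timestamp - third.timestamp) > time_diff_threshold
    size_ok = second.size > third.size  # first size larger than second size
    return time_ok and size_ok


obs = [Observation("cam1", 10.0, 5000.0), Observation("cam2", 12.5, 7200.0)]
print(entered_second_camera(obs, time_diff_threshold=1.0))  # True
```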
  • the device further includes: a display unit 15 configured to display a captured image of the first video source device.
  • both the first camera and the second camera are deployed in the target area; the display unit 15 is further configured to, when it is determined that the first target vehicle has left the target area, stop displaying the shooting picture of the first video source device.
  • the shooting picture of the first video source device is displayed in a first display area of a display page, and the display page further includes a second display area, where the second display area is used to display a shooting picture of a second video source device of a second target vehicle, and the first target vehicle is different from the second target vehicle.
  • the acquiring unit 13 is configured to acquire the second image captured by the first camera before determining that the first target vehicle is within the shooting area of the first camera;
  • the first processing unit 11 is configured to determine that the first target vehicle is within the shooting area of the first camera when it is determined that the second image includes the first target vehicle.
  • the first processing unit 11 is configured to perform feature extraction processing on the second image to obtain first feature data of a vehicle to be identified; obtain a comparison result of the first target vehicle and the vehicle to be identified according to a first similarity between the target feature data and the first feature data; and, when the comparison result includes that the first target vehicle is the same as the vehicle to be identified, determine that the second image includes the first target vehicle.
  • the acquisition unit 13 is further configured to: before the comparison result of the first target vehicle and the vehicle to be identified is obtained according to the first similarity between the target feature data and the first feature data, obtain at least one first target vehicle attribute of the first target vehicle;
  • the image processing device further includes: an extraction unit 16 configured to: perform vehicle attribute extraction processing on the second image to obtain at least one first vehicle attribute of the vehicle to be identified;
  • the first processing unit 11 is configured to: obtain a comparison result between the first target vehicle and the vehicle to be identified according to the first similarity and the second similarity, the second similarity being the The similarity between the at least one first target vehicle attribute and the at least one first vehicle attribute.
  • the first processing unit 11 is configured to: perform a weighted summation of the first similarity and the second similarity to obtain a third similarity; when the third similarity is greater than a similarity threshold, determine that the comparison result includes that the vehicle to be identified is the same as the target vehicle; and when the third similarity is less than or equal to the similarity threshold, determine that the comparison result includes that the vehicle to be identified is different from the target vehicle.
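  • this fusion rule reduces to a weighted sum and a threshold test; in the sketch below, the weights and the threshold are illustrative values, not values fixed by the embodiments:

```python
# Sketch of the comparison in unit 11: the third similarity is a weighted sum
# of the feature similarity (first) and the attribute similarity (second).
def same_vehicle(first_similarity, second_similarity,
                 w_feature=0.7, w_attribute=0.3, threshold=0.8):
    third_similarity = (w_feature * first_similarity
                        + w_attribute * second_similarity)
    return third_similarity > threshold  # same vehicle iff above threshold


print(same_vehicle(0.92, 0.75))  # True  -> vehicle to be identified == target
print(same_vehicle(0.60, 0.50))  # False -> different vehicles
```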
  • the first camera is deployed in the target area
  • the acquisition unit 13 is configured to: acquire a vehicle to be tracked database, where the vehicle to be tracked database includes feature data of at least one vehicle to be tracked, the at least one vehicle to be tracked includes vehicles that need to be tracked in the target area, and the target vehicle is any one of the at least one vehicle to be tracked; and, in the case of acquiring the second image collected by the first camera, obtain one piece of feature data from the vehicle to be tracked database as the target feature data.
  • the acquisition unit 13 is configured to: acquire at least one fourth image, where the at least one fourth image includes the at least one vehicle to be tracked; and obtain the vehicle to be tracked database according to the at least one fourth image.
  • the acquisition unit 13 is configured to: acquire the at least one fourth image captured by at least one checkpoint camera, and the at least one checkpoint camera is deployed at the entrance and exit of the target area .
  • in a case where the comparison result includes that the first target vehicle is the same as the vehicle to be identified, the acquiring unit 13 is further configured to: acquire a fifth image of the target area and a first position of the first camera in the fifth image; the image processing device further includes: a third processing unit 17, configured to determine, according to the first position, a second position of the first target vehicle in the fifth image.
  • the functions of the device provided by the embodiments of the present disclosure or the modules included therein can be used to execute the methods described in the above method embodiments, and the implementation can refer to the descriptions of the above method embodiments.
  • FIG. 5 is a schematic diagram of a hardware structure of an image processing device provided by an embodiment of the present disclosure.
  • the image processing device 2 includes a processor 21 , a memory 22 , an input device 23 and an output device 24 .
  • the processor 21 , the memory 22 , the input device 23 and the output device 24 are coupled through connectors, and the connectors include various interfaces, transmission lines or buses, etc., which are not limited in this embodiment of the present disclosure.
  • coupling refers to interconnection in a specific manner, including direct connection or indirect connection through other devices, for example, connection through various interfaces, transmission lines, and buses.
  • the processor 21 may be one or more graphics processing units (graphics processing unit, GPU), and in the case where the processor 21 is a GPU, the GPU may be a single-core GPU or a multi-core GPU.
  • the processor 21 may be a processor group composed of multiple GPUs, and the multiple processors are coupled to each other through one or more buses.
  • the processor may also be other types of processors, etc., which are not limited in this embodiment of the present disclosure.
  • the memory 22 can be used to store computer program instructions and various computer program codes including program codes for implementing the solutions of the present disclosure.
  • the memory includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM), and is used for related instructions and data.
  • the input device 23 is used for inputting data and/or signals and the output device 24 is used for outputting data and/or signals.
  • the input device 23 and the output device 24 can be independent devices, or an integrated device.
  • the memory 22 may not only be used to store related instructions, but also be used to store related data.
  • FIG. 5 only shows a simplified design of an image processing device.
  • the image processing device may also include other necessary components, including but not limited to any number of input/output devices, processors, and memories; all image processing devices that can implement the embodiments of the present disclosure fall within the protection scope of the present disclosure.
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be other division methods. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, all or part of the processes or functions according to the embodiments of the present disclosure will be generated.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in or transmitted via a computer-readable storage medium.
  • the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device including a server, a data center, and the like integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
  • the processes can be implemented by a computer program instructing related hardware; the program may be stored in a computer-readable storage medium, and when the program is executed, the processes of the foregoing method embodiments may be performed.
  • the aforementioned storage medium includes: various media capable of storing program codes such as read-only memory (ROM) or random access memory (RAM), magnetic disk or optical disk.
  • the image processing method includes: when it is determined that the first target vehicle is within the shooting area of the first camera, determining that the first video source device of the first target vehicle is the first camera; and when it is determined that the first target vehicle enters the shooting area of the second camera from the shooting area of the first camera, switching the first video source device of the first target vehicle from the first camera to the second camera, where the first camera is different from the second camera.
  • switching the first video source device of the first target vehicle from the first camera to the second camera when the first target vehicle enters the shooting area of the second camera can improve the accuracy of the determined real-time first video source device, thereby improving the effect of video relay.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present application relates to an image processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product. The method comprises: when it is determined that a first target vehicle is within a shooting area of a first camera, determining that a first video source device of the first target vehicle is the first camera; and when it is determined that the first target vehicle enters a shooting area of a second camera from the shooting area of the first camera, switching the first video source device of the first target vehicle from the first camera to the second camera, the first camera being different from the second camera.
PCT/CN2022/107671 2021-08-23 2022-07-25 Image processing method and apparatus, electronic device, computer-readable storage medium, and computer program product Ceased WO2023024787A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110971089.6 2021-08-23
CN202110971089.6A CN113705417B (zh) 2021-08-23 2021-08-23 Image processing method and apparatus, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2023024787A1 true WO2023024787A1 (fr) 2023-03-02

Family

ID=78654284

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/107671 Ceased WO2023024787A1 (fr) 2021-08-23 2022-07-25 Image processing method and apparatus, electronic device, computer-readable storage medium, and computer program product

Country Status (2)

Country Link
CN (1) CN113705417B (fr)
WO (1) WO2023024787A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117935173A (zh) * 2024-03-21 2024-04-26 安徽蔚来智驾科技有限公司 Target vehicle identification method, field-side server, and readable storage medium
WO2024244792A1 (fr) * 2023-05-31 2024-12-05 京东方科技集团股份有限公司 Area marking method, information processing method, system, apparatus, device, and medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705417B (zh) 2021-08-23 2022-06-28 深圳市商汤科技有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN114333016A (zh) 2021-12-29 2022-04-12 上海商汤智能科技有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN115103110B (zh) 2022-06-10 2023-07-04 慧之安信息技术股份有限公司 Home intelligent monitoring method based on edge computing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107730898A (zh) * 2016-11-08 2018-02-23 北京奥斯达兴业科技有限公司 Parking lot illegal vehicle identification method and system
CN108198200A (zh) * 2018-01-26 2018-06-22 福州大学 Online tracking method for a specified pedestrian in cross-camera scenarios
CN109325967A (zh) * 2018-09-14 2019-02-12 腾讯科技(深圳)有限公司 Target tracking method, apparatus, medium, and device
CN111818313A (zh) * 2020-08-28 2020-10-23 深圳市城市交通规划设计研究中心股份有限公司 Real-time vehicle tracking method and apparatus based on surveillance video
CN113705417A (zh) * 2021-08-23 2021-11-26 深圳市商汤科技有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107635188A (zh) * 2017-09-08 2018-01-26 安徽四创电子股份有限公司 Video vehicle tracking and analysis method based on the Docker platform
WO2019238142A2 (fr) * 2019-08-05 2019-12-19 深圳市锐明技术股份有限公司 Vehicle monitoring system and method


Also Published As

Publication number Publication date
CN113705417A (zh) 2021-11-26
CN113705417B (zh) 2022-06-28

Similar Documents

Publication Publication Date Title
WO2023024787A1 Image processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN111581423B Target retrieval method and apparatus
CN107292240B Person-finding method and system based on face and human body recognition
US20220301317A1 Method and device for constructing object motion trajectory, and computer storage medium
CN111950368B Freight vehicle monitoring method and apparatus, electronic device, and medium
WO2020155790A1 Claim settlement information extraction method and apparatus, and electronic device
KR102297217B1 Method and apparatus for identifying identity of objects and object positions between images
CN106060470A Video surveillance method and system
WO2016013885A1 Image extraction method and electronic device therefor
RU2710308C1 System and method for processing video data from an archive
CN109615904A Parking management method and apparatus, computer device, and storage medium
CN111598124A Image processing method and apparatus, processor, electronic device, and storage medium
CN116206363A Behavior recognition method, apparatus, device, storage medium, and program product
WO2023024790A1 Vehicle identification method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN205883437U Video surveillance system
CN115730097A Face archiving method, apparatus, device, and medium based on person re-identification
CN114549882A Image clustering and archiving method and apparatus, electronic device, and storage medium
CN112800878A Target detection method and apparatus, electronic device, and readable storage medium
CN114220087A License plate detection method, license plate detector, and related device
CN113470013A Method and apparatus for detecting moved articles
JP2022534314A Picture-based multi-dimensional information integration method and related device
HK40058658A (en) Image processing method and apparatus, electronic device and computer readable storage medium
Xiong et al. Moving object detection based on ViBe long-term background modeling
EP3907650B1 Method for identifying affiliates in video data
CN118644795B Camouflaged object detection system and detection method for UAV scenarios

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22860133

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22860133

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.08.2024)
