WO2021082006A1 - Monitoring device and control method - Google Patents
Monitoring device and control method
- Publication number
- WO2021082006A1 (application PCT/CN2019/115138; priority CN2019115138W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- camera module
- processed
- target object
- controller
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Images
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
Definitions
- This application relates to the field of computer image recognition, and in particular to a monitoring device and a control method.
- Image recognition technology is one of the classic problems in computer vision.
- Pedestrian recognition technology uses computer vision to recognize the identity of a specific pedestrian in an image or video sequence. For example, a captured face image can be compared with face images in an image database; image recognition technology can then identify the pedestrian corresponding to the face image and generate the pedestrian's behavior trajectory. Recognition technology can therefore be widely used in fields such as smart video surveillance and smart security.
- a camera module deployed in a monitoring device usually captures images and transmits the acquired images to a back-end server for identification and analysis through the network.
- the transmission delay of the image is relatively large, resulting in low image processing efficiency. Therefore, how to improve image processing efficiency has become an urgent problem to be solved.
- the present application provides a monitoring device and a control method, so that after the monitoring device acquires an image, the image processing and analysis can be performed in the monitoring device, so that the processing efficiency of the image can be improved.
- a monitoring device is provided, including a camera module, a controller, and a processor, wherein the camera module is used to obtain images to be processed; the controller is used to send sample images to the processor; and the processor is configured to perform image feature comparison between the to-be-processed image and the sample image to obtain a comparison result.
- the above-mentioned image to be processed may include a target object.
- the above-mentioned target object may refer to a human figure, the face of a human figure, or a vehicle (for example, a vehicle that has obvious characteristics but whose license plate number cannot be recognized, such as a vehicle with obvious recesses, protrusions, or scratches on its appearance).
- the above-mentioned sample image may be an image in a database; for example, the sample image may be a blacklisted face image obtained through a security system, or an image of a suspicious vehicle, which is not limited in this application.
- the processor deployed in the monitoring device can perform image feature comparison between the acquired image to be processed and the sample image to obtain a comparison result. Acquisition, processing, and analysis of the image to be processed are thus all completed within the monitoring device, without the need to transmit the acquired image to a server deployed in the background for processing and analysis, thereby avoiding the delay introduced by the communication network and improving the processing efficiency of the image to be processed.
- the processor and the controller may refer to the same device.
- the foregoing processor may be a neural network processor.
- a neural network processor can be deployed in the monitoring device, and the acquired image to be processed can be compared with the sample image through the neural network processor. Since the neural network processor includes dedicated computing units, the calculation time of image feature comparison can be reduced and its calculation efficiency improved.
- the above-mentioned controller and processor may be deployed in an integrated chip, and the integrated chip may be called a system on a chip (system on a chip, SOC), or a part of the system on a chip.
- alternatively, the controller and the processor may be deployed in two physically independent chips.
- the controller can be deployed in the main control chip to execute logic programs;
- the processor can refer to a neural network processor, and the neural network processor can be deployed in an artificial intelligence (AI) chip to perform image processing operations.
- the processor is further configured to send the comparison result to the controller; the controller is configured to control, according to the comparison result, whether the dome camera module in the camera module tracks and monitors the target object in the image to be processed.
- the processor can send the comparison result to the controller, so that the controller can determine, according to the comparison result, whether to control the dome camera module in the camera module to continuously track the target object in the image to be processed. This avoids the problem that real-time monitoring cannot be achieved when the target object moves beyond the monitoring range of the camera module, and improves the safety performance of the monitoring device.
- the comparison result is a similarity value between the image to be processed and the sample image.
- the comparison result of the image to be processed and the sample image may be measured by the similarity value between the image feature of the image to be processed and the image feature of the sample image.
- the similarity value of the image feature can be measured by the cosine distance between the image feature of the image to be processed and the image feature of the sample image.
- the similarity value between the image to be processed and the sample image can be determined by the following equation:
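- the equation itself does not appear in this text; given that the comparison is based on cosine distance, it is presumably the standard cosine similarity between the feature vector of the image to be processed and that of the sample image:

```latex
\mathrm{sim}(\mathbf{x}, \mathbf{y})
  = \frac{\mathbf{x}\cdot\mathbf{y}}{\lVert\mathbf{x}\rVert\,\lVert\mathbf{y}\rVert}
  = \frac{\sum_{i=1}^{n} x_i y_i}
         {\sqrt{\sum_{i=1}^{n} x_i^{2}}\,\sqrt{\sum_{i=1}^{n} y_i^{2}}}
```

where x and y denote the n-dimensional image feature vectors of the image to be processed and the sample image, respectively.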
- the controller is further configured to: when the similarity value is greater than a preset threshold, control the dome camera module in the camera module to track and monitor the target object in the image to be processed in a rotating manner.
- when the similarity value is greater than or equal to the preset threshold, it can indicate that the image to be processed and the sample image include the same target object; for example, when the target object is a portrait or a face image, it can indicate that the monitored pedestrian is a pedestrian in the blacklist database.
- when it is determined that the similarity value between the image to be processed and the sample image is greater than the preset threshold, the controller can control the dome camera module to continuously track the target object, so as to avoid losing the target object when it moves beyond the monitoring range of the camera module.
- through the continuous tracking mechanism of the dome camera module proposed in the embodiment of the present application, more information about the target object can be obtained, thereby improving the safety performance of the monitoring device.
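- as a rough illustration of the controller logic described above, the following sketch starts tracking only when the similarity value exceeds a preset threshold; the class, function names, and threshold value are hypothetical, not taken from the application:

```python
# Illustrative sketch only: names and the threshold value are hypothetical.
PRESET_THRESHOLD = 0.8  # assumed value; the application does not specify one


class DomeCamera:
    """Minimal stand-in for the dome camera module."""

    def __init__(self):
        self.tracking = False

    def track(self, target_id):
        # In the real device this would rotate the dome camera
        # toward the target's mapped coordinates.
        self.tracking = True
        return f"tracking target {target_id}"


def on_comparison_result(similarity, target_id, dome_camera):
    """Controller logic: start tracking when similarity reaches the threshold."""
    if similarity >= PRESET_THRESHOLD:
        return dome_camera.track(target_id)
    return "no tracking"


print(on_comparison_result(0.91, "pedestrian-A", DomeCamera()))  # tracking target pedestrian-A
print(on_comparison_result(0.42, "pedestrian-B", DomeCamera()))  # no tracking
```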
- the box camera module in the camera module is used to obtain the image to be processed.
- the aforementioned camera module may be a box camera module, and the box camera module may be used to obtain images to be processed.
- the box camera module can be a wide-angle box camera, which may also be called a wide-angle bullet camera or gun camera.
- the above-mentioned monitoring device may be a monitoring device in which a box camera and a dome camera cooperate.
- the camera module may include a box camera module and a dome camera module, and the box camera module in the camera module may be used to obtain the image to be processed.
- the box camera in the camera module can obtain the above-mentioned to-be-processed image.
- the dome camera module in the camera module is used to obtain the image to be processed.
- the image to be processed includes a target object
- the dome camera module is configured to obtain the image to be processed according to the target coordinate information of the target object
- the target coordinate information is obtained by performing coordinate mapping on the coordinate information of the target object in the box camera module in the camera module.
- the controller can obtain, according to the coordinate information of the target object in the box camera module in the camera module, the target coordinate information of the target object in the dome camera module; the dome camera module in the camera module can then obtain the aforementioned image to be processed according to the target coordinate information.
- the controller can be used to obtain the coordinate information of the target object in the box camera module, perform coordinate mapping through a preset algorithm to obtain the target coordinate information of the target object in the dome camera module, and then control the magnification or zoom parameters of the dome camera module, so that the dome camera module can monitor the target object and obtain a to-be-processed image that meets the recognition requirements.
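- the application does not specify the preset mapping algorithm; one common choice for mapping pixel coordinates between two rigidly mounted cameras is a planar homography, sketched below in plain Python with a placeholder matrix (the values are illustrative, not calibration data from the application):

```python
# Hedged sketch: the application only says a "preset algorithm" performs the
# box-to-dome coordinate mapping. A planar homography is one common choice;
# the 3x3 matrix below is a placeholder, not real calibration data.
H = [
    [1.2, 0.0, 30.0],
    [0.0, 1.2, -15.0],
    [0.0, 0.0, 1.0],
]


def map_box_to_dome(x, y, h=H):
    """Map a pixel (x, y) in the box camera image to dome camera coordinates."""
    xh = h[0][0] * x + h[0][1] * y + h[0][2]
    yh = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return xh / w, yh / w  # homogeneous divide


print(map_box_to_dome(100, 200))  # (150.0, 225.0)
```

In the real device the mapped coordinates would drive the dome camera's pan, tilt, and zoom so that the target stays within its field of view.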
- the target object in the image to be processed is a portrait.
- a control method is provided, the control method is applied to a monitoring device, the monitoring device includes a camera module, a controller, and a processor, and the control method includes:
- the monitoring device includes a camera module, a controller, and a processor.
- the control method includes: the camera module obtains an image to be processed; the controller sends a sample image to the processor; and the processor performs image feature comparison between the to-be-processed image and the sample image to obtain a comparison result.
- the above-mentioned image to be processed may include a target object.
- the above-mentioned target object may refer to a human figure, the face of a human figure, or a vehicle (for example, a vehicle that has obvious characteristics but whose license plate number cannot be recognized, such as a vehicle with obvious recesses, protrusions, or scratches on its appearance).
- the processor can perform image feature comparison between the acquired image to be processed and the sample image to obtain a comparison result. The acquisition, processing, and analysis of the image to be processed can thus be completed within the monitoring device, without the need to transmit the acquired image to a server deployed in the background for processing and analysis, thereby avoiding the delay introduced by the communication network and improving the processing efficiency of the image to be processed.
- the processor and the controller may refer to the same device.
- the foregoing processor may be a neural network processor.
- a neural network processor can be deployed in the monitoring device, and the acquired image to be processed can be compared with the sample image through the neural network processor. Since the neural network processor includes dedicated computing units, the calculation time of image feature comparison can be reduced and its calculation efficiency improved.
- the above-mentioned controller and processor may be deployed in an integrated chip, which may also be called a system on a chip (system on a chip, SOC), or a part of the system on a chip.
- the above-mentioned controller and processor may be deployed in two physically independent chips.
- the controller can be deployed in the main control chip to execute logic programs;
- the processor can refer to a neural network processor, and the neural network processor can be deployed in an artificial intelligence chip to perform operations in image processing.
- the control method further includes: the processor sends the comparison result to the controller; and the controller controls, according to the comparison result, whether the dome camera module in the camera module tracks and monitors the target object in the image to be processed.
- the processor may send the comparison result to the controller, so that the controller can determine whether to control the dome camera module in the camera module to continuously track the target object in the image to be processed according to the comparison result. This can avoid the problem that real-time monitoring of the target object cannot be achieved when the target object is far away from the monitoring range of the camera module, and the safety performance of the monitoring device can be improved.
- the comparison result is a similarity value between the image to be processed and the sample image.
- the comparison result of the image to be processed and the sample image may be measured by the similarity value between the image feature of the image to be processed and the image feature of the sample image.
- the similarity value of the image feature can be measured by the cosine distance between the image feature of the image to be processed and the image feature of the sample image.
- the similarity value between the image to be processed and the sample image can be determined by the following equation:
- the controller controlling, according to the comparison result, whether the dome camera module in the camera module tracks and monitors the target object in the image to be processed includes: when the similarity value is greater than a preset threshold, the controller controls the dome camera module in the camera module to track and monitor the target object in the image to be processed in a rotating manner.
- when the similarity value is greater than or equal to the preset threshold, it can indicate that the image to be processed and the sample image include the same target object; for example, when the target object is a portrait or a face image, it can indicate that the monitored pedestrian is a pedestrian in the blacklist database.
- when the controller determines that the similarity value between the image to be processed and the sample image is greater than the preset threshold, it can control the dome camera module to continuously track the target object, so as to avoid the situation in which the target object cannot be monitored once it moves beyond the monitoring range of the box camera module.
- the continuous tracking mechanism of the dome camera module proposed in the embodiment of the present application can obtain more information about the target object, thereby improving the security performance of the monitoring device.
- the acquiring the image to be processed by the camera module includes: acquiring the image to be processed by the box camera module in the camera module.
- the aforementioned camera module may be a box camera module, and the box camera module may be used to obtain images to be processed.
- the box camera module can be a wide-angle box camera, which may also be called a wide-angle bullet camera or gun camera.
- the above-mentioned monitoring device may be a monitoring device in which a box camera and a dome camera cooperate.
- the camera module may include a box camera module and a dome camera module, and the box camera module in the camera module may be used to obtain the image to be processed.
- the box camera module in the camera module can obtain the above-mentioned to-be-processed image.
- the dome camera module in the camera module is used to obtain the image to be processed.
- the image to be processed includes a target object
- the dome camera module is configured to obtain the image to be processed according to the target coordinate information of the target object
- the target coordinate information is obtained by performing coordinate mapping on the coordinate information of the target object in the box camera module in the camera module.
- the controller can obtain, according to the coordinate information of the target object in the box camera module in the camera module, the target coordinate information of the target object in the dome camera module; the dome camera module in the camera module can then obtain the aforementioned image to be processed according to the target coordinate information.
- the controller can be used to obtain the coordinate information of the target object in the box camera module, perform coordinate mapping through a preset algorithm to obtain the target coordinate information of the target object in the dome camera module, and then control the magnification or zoom parameters of the dome camera module, so that the dome camera module can monitor the target object and obtain a to-be-processed image that meets the recognition requirements.
- the target object in the image to be processed is a portrait.
- a computer program product is provided, including computer program code which, when run on a computer, causes the computer to execute the control method in the second aspect.
- the above-mentioned computer program code may be stored, in whole or in part, on a first storage medium, where the first storage medium may be packaged together with the processor or packaged separately from the processor.
- a computer-readable medium is provided, storing program code which, when run on a computer, causes the computer to execute the control method in the second aspect.
- FIG. 1 is a schematic diagram of an image shooting system provided by an embodiment of the present application.
- FIG. 2 is a schematic structural diagram of a monitoring device provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of a box-dome coordinated monitoring device provided by an embodiment of the present application.
- FIG. 4 is a schematic structural diagram of an integrated chip provided by an embodiment of the present application.
- FIG. 5 is a schematic structural diagram of a neural network processor provided by an embodiment of the present application.
- FIG. 6 is a schematic structural diagram of a processor provided by an embodiment of the present application.
- FIG. 7 is a schematic flowchart of a control method provided by an embodiment of the present application.
- FIG. 8 is a schematic flowchart of another control method provided by an embodiment of the present application.
- FIG. 9 is a schematic flowchart of another control method provided by an embodiment of the present application.
- FIG. 1 is a schematic diagram of an image capturing system provided by an embodiment of the present application.
- the image capturing system 100 may include a monitoring device 110 and a server 120, where the monitoring device 110 is used to obtain a monitoring image (for example, a face image), and the server 120 is used to receive the monitoring image sent by the monitoring device 110 and to compare image features between the monitoring image and the sample images in a database (for example, a blacklisted face image library) to obtain a comparison result.
- in this image shooting system, the surveillance image acquired by the surveillance device must be transmitted through a communication network to a server deployed on the back end, and the server performs further processing and analysis on the surveillance image.
- the communication network is usually susceptible to factors such as the environment, which results in a large transmission delay of the surveillance image and thus a lower efficiency of the overall image capture system in processing it.
- the embodiment of the present application proposes a monitoring device and a control method.
- the processor deployed in the monitoring device can perform image feature comparison on the acquired image to be processed and the sample image, so as to obtain the comparison result of the image to be processed.
- the process of acquiring, processing, and analyzing the image to be processed is completed within the monitoring device, without the need to transmit the acquired image to a server deployed in the background for processing and analysis, thereby avoiding the delay introduced by the communication network and improving the processing efficiency of the image to be processed.
- the monitoring device provided in the embodiment of the present application can be applied to a scene of an intelligent tracing system.
- a pedestrian photo (for example, a photo of a missing child) can be sent to the monitoring device of the embodiment of the present application, and the monitoring image acquired in real time within the monitoring range of the monitoring device is matched with the child's photo, so that the location information of the missing child can be quickly determined within the monitoring range.
- the monitoring device provided in the embodiment of the present application can be applied to a safe city scene.
- a pedestrian photo (for example, a blacklisted face photo) can be sent to the monitoring device of the embodiment of the present application, and the monitoring image acquired in real time within the monitoring range of the monitoring device is matched with the blacklisted face photo, so that the location information of suspicious persons can be quickly determined within the monitoring range.
- a vehicle photo (for example, of a vehicle that has obvious characteristics but whose license plate number cannot be recognized, such as a vehicle with obvious concave parts, convex parts, or scratches on its appearance) can be sent to the monitoring device of the embodiment of the present application.
- the real-time monitoring image acquired by the monitoring device in the monitoring range is matched with the vehicle photo, so as to quickly determine the location information of the suspicious vehicle in the monitoring range.
- the monitoring device of the embodiment of the present application will be described in detail below with reference to FIG. 2 to FIG. 5.
- FIG. 2 is a schematic structural diagram of a monitoring device provided by an embodiment of the present application.
- the monitoring device 200 can include a camera module 210, a controller 220, and a processor 230.
- the camera module 210 can be used to obtain images to be processed; the controller 220 can be used to send a sample image to the processor 230; and the processor 230 is used to perform image feature comparison between the image to be processed and the sample image to obtain a comparison result.
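- the interaction among the three components can be sketched as follows; all names are illustrative, and the feature extractor is a trivial stand-in for the neural network feature extraction described later:

```python
# Conceptual sketch of the camera-module / controller / processor interaction.
# All class and function names are illustrative, not from the application.


def extract_features(image):
    # Stand-in for the neural-network feature extractor: a trivial
    # L1 normalization so that the comparison below is well defined.
    total = sum(image) or 1
    return [v / total for v in image]


def compare(features_a, features_b):
    """Processor step: return a similarity value (dot product of features)."""
    return sum(a * b for a, b in zip(features_a, features_b))


class MonitoringDevice:
    def __init__(self, sample_image):
        self.sample_image = sample_image  # held by the controller

    def capture(self):
        # Camera module: return a fake "image to be processed".
        return [3, 1, 2]

    def run_once(self):
        # Controller sends the sample image to the processor; the processor
        # compares it with the captured image, all inside the device
        # (no round trip to a back-end server).
        to_process = self.capture()
        return compare(extract_features(to_process),
                       extract_features(self.sample_image))


device = MonitoringDevice(sample_image=[3, 1, 2])
print(round(device.run_once(), 3))  # 0.389
```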
- the monitoring device can be deployed at subway entrances, buildings, or road intersections, etc.
- the monitoring device can shoot real-time road images near the monitoring device, and the road images can include pedestrian images or vehicle images.
- the aforementioned camera module 210 may refer to a camera used to obtain images to be processed in the monitoring device 200, and the images to be processed may include a target object, where the target object may refer to a portrait, the face of a portrait, or a vehicle (for example, a vehicle that has obvious characteristics but whose license plate number cannot be recognized).
- the processor may be a general-purpose processor, such as a central processing unit (CPU), or a dedicated processor, such as a graphics processing unit (GPU) or a neural network processing unit (NPU).
- the processor and the controller may refer to the same device.
- the monitoring device 200 may be a box camera monitoring device, that is, the camera module 210 may refer to a box camera module, where the box camera module may be used to monitor a global picture of a road.
- the box camera module can be a wide-angle box camera, which may also be called a wide-angle bullet camera or gun camera.
- the image to be processed may be the image to be processed obtained by the box camera module.
- the monitoring device 200 may be a monitoring device in which a box camera module and a dome camera module cooperate, which may be called a box-dome coordinated monitoring device; that is, the camera module 210 may include a box camera module and a dome camera module.
- FIG. 3 is a schematic diagram of a box-dome coordinated monitoring device provided by an embodiment of the present application.
- the monitoring device 300 may include a box camera module 310, a dome camera module 320, and an integrated chip 330.
- the dome camera module 320 may refer to a dome camera, which may also be called a long-focus ball machine or ball machine; the integrated chip 330 may include the controller 220 and the processor 230 shown in FIG. 2.
- the controller 220 may be used to control the box camera module and the dome camera module for monitoring.
- the controller 220 may be used to detect the first coordinate information of the object to be photographed in the box camera module, map the first coordinate information to obtain the second coordinate information of the object to be photographed in the dome camera module, and control the dome camera module according to the second coordinate information to monitor the object to be photographed.
- for example, when pedestrian A is far away, the box camera module may not be able to obtain a usable monitoring image of pedestrian A. The controller 220 may, according to pedestrian A's coordinate position information in the box camera module, obtain the coordinate information of pedestrian A in the dome camera module, thereby adjusting the zoom parameters of the dome camera module and controlling the dome camera module to monitor pedestrian A farther away.
- the image to be processed may be an image obtained by the box camera module, or it may be an image captured by the dome camera module.
- the box camera module in the camera module is used to obtain the above-mentioned image to be processed.
- the box camera module in the camera module can be used to obtain images to be processed.
- the dome camera module in the camera module is used to obtain the above-mentioned image to be processed.
- the target object can be included in the image to be processed, and the dome camera module in the camera module can be used to obtain the image to be processed according to the target coordinate information of the target object.
- the target coordinate information is obtained by performing coordinate mapping on the coordinate information of the target object in the box camera module. The controller can obtain the coordinate information of the target object in the box camera module, perform coordinate mapping through a preset algorithm to obtain the target coordinate information of the target object in the dome camera module, and then control the magnification or zoom parameters of the dome camera module, so that the dome camera module can monitor the target object and obtain a to-be-processed image that meets the recognition requirements.
- the image to be processed can be obtained through the box camera module or the dome camera module in the camera module and sent to the processor; the processor can be used to perform image feature comparison between the image to be processed and the sample image to obtain a comparison result.
- with the monitoring device provided by the embodiments of the present application, the process of acquiring, processing, and analyzing the image to be processed is completed within the monitoring device, without the need to transmit the acquired image to a server deployed in the background for processing and analysis; the time delay introduced by the communication network is thus avoided, and the processing efficiency of the image to be processed is improved.
- a neural network may be used to perform image feature extraction and image feature comparison between the image to be processed and the sample image to obtain a comparison result.
- the processor 230 may be further configured to send the comparison result of the image feature comparison between the image to be processed and the sample image to the controller 220; the controller 220 may be configured to control, according to the comparison result, whether the dome camera module in the camera module 210 tracks and monitors the target object in the image to be processed.
- the so-called tracking monitoring means that the dome camera module continuously monitors the target object in the image to be processed for a period of time. Specifically, the dome camera module can track and monitor the target object in a rotating manner until the dome camera module rotates to its maximum angle and cannot rotate further. The tracking duration, that is, the above period of time, is therefore related to the angle through which the dome camera module can rotate.
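- the rotation-limited tracking described above can be sketched as follows; the step size and maximum pan angle are assumed values, not taken from the application:

```python
# Hedged sketch of rotation-limited tracking: the dome camera follows the
# target until it reaches its mechanical pan limit. Angles are illustrative.
MAX_PAN_ANGLE = 180.0  # assumed mechanical limit, in degrees


def track_step(current_angle, target_angle, step=5.0, max_angle=MAX_PAN_ANGLE):
    """Rotate one step toward the target angle, clamped to the pan limit.

    Returns the new pan angle; tracking effectively stops once the
    limit is reached, which is why tracking duration depends on the
    angle the dome camera can rotate.
    """
    if target_angle > current_angle:
        current_angle = min(current_angle + step, target_angle)
    else:
        current_angle = max(current_angle - step, target_angle)
    return max(-max_angle, min(max_angle, current_angle))


angle = 175.0
for target in (179.0, 190.0, 250.0):
    angle = track_step(angle, target)
print(angle)  # 180.0 (clamped at the pan limit)
```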
- the aforementioned comparison result may be the similarity value between the image to be processed and the sample image.
- the processor 230 may refer to a neural network processor.
- the neural network processor may obtain the image to be processed (for example, a pedestrian image) and the sample image (for example, blacklisted face image data); perform feature extraction to obtain the image features of the image to be processed and the image features of the sample image; and determine the similarity value of the image features according to the two sets of image features.
- the similarity value of the image feature can be measured by the cosine distance between the image feature of the image to be processed and the image feature of the sample image.
- the similarity value between the image to be processed and the sample image can be determined by the following equation:
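- the equation is again omitted from this text; assuming it is the standard cosine similarity implied by the cosine-distance measure above, a minimal implementation is:

```python
import math

# Minimal cosine-similarity computation between two feature vectors,
# assuming the omitted equation is the standard cosine similarity.


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


features_query = [0.2, 0.8, 0.1]   # features of the image to be processed
features_sample = [0.2, 0.8, 0.1]  # features of the sample image

print(round(cosine_similarity(features_query, features_sample), 3))  # 1.0
```

Identical feature vectors yield a similarity of 1.0; the controller would compare this value against the preset threshold to decide whether to track.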
- when the similarity value is less than the preset threshold, it means that the image to be processed and the sample image include different target objects; for example, when the target object is a portrait or a face image, it can indicate that the monitored pedestrian is not a pedestrian in the blacklist database.
- when the similarity value is greater than or equal to the preset threshold, it means that the image to be processed and the sample image include the same target object; for example, when the target object is a portrait or a face image, it can indicate that the monitored pedestrian is a pedestrian in the blacklist database.
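As an illustration of the comparison logic above, the following minimal Python sketch computes the cosine-based similarity value and applies the preset threshold (the threshold value 0.8 is an assumed example, not a value specified by the application):

```python
import math

def cosine_similarity(feat_a, feat_b):
    """Cosine similarity between two image feature vectors (higher = more alike)."""
    dot = sum(a * b for a, b in zip(feat_a, feat_b))
    norm_a = math.sqrt(sum(a * a for a in feat_a))
    norm_b = math.sqrt(sum(b * b for b in feat_b))
    return dot / (norm_a * norm_b)

def same_target(feat_probe, feat_sample, threshold=0.8):
    """True when the two images are judged to contain the same target object."""
    return cosine_similarity(feat_probe, feat_sample) >= threshold
```

When `same_target` returns true, the controller would trigger the dome camera to track the target; otherwise monitoring continues normally.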
- the controller 220 may control the dome camera module in the camera module to track and monitor the target object in the image to be processed in a rotating manner.
- when it is determined that the similarity value between the image to be processed and the sample image is greater than the preset threshold, the dome camera module can be controlled by the controller to continuously track the target object, so as to avoid losing the target object when it moves beyond the monitoring range of the box camera module.
- through the continuous tracking mechanism of the dome camera module proposed in the embodiment of this application, more information about the target object can be obtained, thereby improving the security performance of the monitoring device and improving the efficiency of solving crimes.
- FIG. 4 is a schematic structural diagram of an integrated chip provided by an embodiment of the present application.
- the integrated chip 400 may include a controller 410 and a processor 420, where the controller 410 may be used to execute logic programs in the monitoring device 200, and the processor 420 may be used to execute operations in image processing.
- the integrated chip 400 may be the integrated chip 330 as shown in FIG. 3.
- the integrated chip shown in FIG. 4 may be called a system on a chip (system on a chip, SOC), or a part of a system on a chip.
- the controller 220 and the processor 230 may also be deployed in two physically independent chips, which is not limited in this application.
- the controller 220 may be deployed in a main control chip to execute logic programs; the processor 230 may refer to a neural network processor, and the neural network processor may be deployed in an artificial intelligence (AI) chip to perform calculations in image processing; the main control chip and the AI chip can be two physically independent chips.
- Fig. 5 is a schematic structural diagram of a neural network processor provided by an embodiment of the present application.
- the core part of the neural network processor 500 is the arithmetic circuit 503; the controller 504 controls the arithmetic circuit 503 to extract data from memory (the weight memory or the input memory) and perform calculations.
- the arithmetic circuit 503 includes multiple processing units (process engines, PE). In some implementations, the arithmetic circuit 503 is a two-dimensional systolic array. The arithmetic circuit 503 may also be a one-dimensional systolic array or other electronic circuit capable of performing mathematical operations such as multiplication and addition. In some implementations, the arithmetic circuit 503 is a general-purpose matrix processor.
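The multiply-accumulate behavior of such a PE array can be modeled functionally in a few lines of Python; this is an output-stationary sketch of a systolic matrix multiply, not a cycle-accurate description of the arithmetic circuit 503:

```python
def systolic_matmul(A, B):
    """Output-stationary model of a 2D systolic array computing C = A @ B.

    Each PE at position (i, j) accumulates one output element; on each "cycle" t
    (one step of the shared dimension), every PE performs one multiply-accumulate.
    """
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0] * m for _ in range(n)]
    for t in range(k):              # time steps flowing data through the array
        for i in range(n):          # PE rows
            for j in range(m):      # PE columns
                C[i][j] += A[i][t] * B[t][j]
    return C
```

In real hardware the three loops run in parallel across the PE grid; the sketch only shows the arithmetic each PE performs.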
- the arithmetic circuit 503 fetches the data corresponding to matrix B from the weight memory 502 and caches it on each PE in the arithmetic circuit 503.
- the arithmetic circuit 503 fetches the matrix A data and matrix B from the input memory 501 to perform matrix operations, and the partial result or final result of the obtained matrix is stored in an accumulator 508 (accumulator).
- the vector calculation unit 507 can perform further processing on the output of the arithmetic circuit 503, such as vector multiplication, vector addition, exponential operation, logarithmic operation, size comparison, and so on.
- the vector calculation unit 507 can be used for network calculations in the non-convolutional/non-FC layers of the neural network, such as pooling, batch normalization, local response normalization, etc.
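As an example of such a non-convolutional operation, pooling reduces a vector window by window; a minimal Python sketch (the window size of 2 is an assumed example):

```python
def max_pool(vec, window=2):
    """Max pooling over non-overlapping windows, of the kind the
    vector calculation unit might run for a pooling layer."""
    return [max(vec[i:i + window]) for i in range(0, len(vec), window)]
```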
- the vector calculation unit 507 can store the processed output vector to the unified memory 506.
- the vector calculation unit 507 may apply a nonlinear function to the output of the arithmetic circuit 503, such as a vector of accumulated values, to generate the activation value.
- the vector calculation unit 507 generates a normalized value, a combined value, or both.
- the processed output vector can be used as an activated input to the arithmetic circuit 503, for example, for use in a subsequent layer in a neural network.
- the unified memory 506 is used to store input data and output data.
- the storage unit access controller 505 (direct memory access controller, DMAC) is used to transfer input data in the external memory into the input memory 501 and/or the unified memory 506, transfer weight data in the external memory into the weight memory 502, and store data in the unified memory 506 into the external memory.
- the bus interface unit 510 (bus interface unit, BIU) is used to implement interaction with the instruction fetch memory 509 through a bus.
- the instruction fetch buffer 509 (instruction fetch buffer) connected to the controller 504 is used to store instructions used by the controller 504.
- the controller 504 is used to call the instructions cached in the instruction fetch memory 509 to control the working process of the computing accelerator.
- the unified memory 506, the input memory 501, the weight memory 502, and the instruction fetch memory 509 are all on-chip (On-Chip) memories.
- the external memory is a memory external to the NPU.
- the external memory can be a double data rate synchronous dynamic random access memory (double data rate synchronous dynamic random access memory, DDR SDRAM), a high bandwidth memory (high bandwidth memory, HBM), or other readable and writable memory.
- the neural network processor may include the above-mentioned arithmetic circuit 503 or vector calculation unit 507, so that when the neural network processor performs matrix operations in image processing, for example, image feature extraction and image feature comparison on the image to be processed and the sample image, the calculation efficiency is higher.
- the processor shown in FIG. 6 may be a neural network processor, and the neural network processor and the controller may be respectively deployed in the main control chip and the NPU chip.
- the NPU chip can include a CPU and a matrix calculation unit, where the CPU can be used to execute the processing logic during image feature comparison, and the matrix calculation unit can be used to perform the matrix operations during image feature comparison;
- the main control chip can be used for image management.
- the main control chip can be used to obtain images in the database, such as blacklisted face images obtained through the security system, or blacklisted vehicle images;
- the NPU chip can receive the database image sent by the main control chip and the image to be processed taken by the camera module; the CPU in the NPU chip can be used to control the matrix calculation unit to start performing image comparison;
- the matrix calculation unit can be used to load the image features of the image to be processed taken by the camera module into the left matrix of the matrix calculation unit, load the blacklist image features into the right matrix of the matrix calculation unit, and obtain the face similarity through matrix multiplication and addition operations.
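The left-matrix/right-matrix comparison described above amounts to one matrix multiplication between probe features and transposed blacklist features; a plain-Python sketch, assuming the feature vectors are already L2-normalized so that each dot product equals a cosine similarity:

```python
def matmul(A, B):
    """Plain matrix product, standing in for the matrix calculation unit."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def batch_similarity(probe_feats, blacklist_feats):
    """Left matrix: probe image features (one row per face, L2-normalized).
    Right matrix: transposed blacklist features. A single multiply yields
    every probe-vs-blacklist similarity at once."""
    right = [list(col) for col in zip(*blacklist_feats)]  # transpose
    return matmul(probe_feats, right)
```

The CPU-side threshold comparison described next would then scan the resulting similarity matrix.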
- FIG. 6 takes the NPU chip including the CPU and the matrix calculation unit as an example for illustration.
- the NPU may not include the CPU.
- the NPU chip may be connected to a CPU deployed outside the NPU chip.
- the similarity determination may be performed in the NPU chip including the CPU.
- the matrix calculation unit in the NPU chip can be used to perform matrix operations on the image features of the image to be processed and the image features of the blacklist image to obtain the similarity value (for example, the cosine distance) between the image features;
- the CPU included in the NPU chip can be used to compare the magnitude of the similarity value (for example, the cosine distance) with a preset threshold; when the similarity value is less than the preset threshold, it can indicate that the image to be processed and the blacklist image include different target objects; when the similarity value is greater than or equal to the preset threshold, it can indicate that the image to be processed and the blacklist image include the same target object.
- the NPU chip can send the similarity value to the main control chip, and the main control chip can control the dome camera module in the camera module to track and monitor the target object in the image to be processed in a rotating manner.
- the similarity determination may also be performed in the main control chip.
- the matrix calculation unit in the NPU chip can be used to perform matrix operations on the image features of the image to be processed and the image features of the blacklisted image to obtain the similarity value (for example, the cosine distance) between the image features, and send the similarity value to the main control chip; the main control chip executes the comparison between the similarity value and the preset threshold.
- the main control chip can control, according to the similarity value, whether the dome camera module in the camera module tracks and monitors the target object in the image to be processed. For example, when the similarity value is greater than the preset threshold, the main control chip can control the dome camera module in the camera module to track and monitor the target object in the image to be processed in a rotating manner.
- the matrix calculation unit shown in FIG. 6 may be the arithmetic circuit 503 or the vector calculation unit 507 shown in FIG. 5.
- the monitoring device in the embodiment of the present application can execute the following control method of the embodiment of the present application, that is, for the specific working process of the above various monitoring devices, refer to the corresponding process in the following method embodiment.
- Fig. 7 is a schematic diagram of a control method provided by an embodiment of the present application.
- the control method shown in FIG. 7 may be applied to a monitoring device, and the monitoring device may include a camera module, a controller, and a processor.
- control method 600 shown in FIG. 7 may be applied to the monitoring device shown in FIG. 2 or FIG. 3 described above.
- the control method shown in FIG. 7 includes steps 610 to 630, and these steps are respectively described in detail below.
- Step 610 The camera module obtains an image to be processed.
- the monitoring device may be deployed at a subway entrance, a building, or a road intersection, and the image to be processed may be a picture within a monitoring range captured by the monitoring device through a camera module.
- the image to be processed may include a target object; where the target object may refer to an object whose recognition result obtained by a recognition algorithm is unstructured data.
- the target object may refer to a portrait, or the target object may refer to the face of a portrait, or the target object may also refer to a vehicle; for example, a vehicle that has obvious characteristics but whose license plate number cannot be recognized.
- the above-mentioned vehicle with obvious characteristics whose license plate number cannot be identified may refer to a vehicle with obvious recessed parts, raised parts, or scratches in appearance.
- Step 620 The controller may send the sample image to the processor.
- the sample image may be an image in the database; for example, the sample image may be a blacklisted face image obtained through a security system, or the sample image may be an image of a suspicious vehicle obtained, which is not limited in this application.
- the sample image may refer to a blacklisted face image in the database or a sample image of a pedestrian to be identified.
- the sample images may be vehicle images in a database or sample images of vehicles that need to be identified.
- Step 630 The processor may perform image feature comparison between the image to be processed and the sample image to obtain a comparison result.
- the processor may receive the sample image sent by the controller and the image to be processed taken by the camera module, and compare the features of the image to be processed and the sample image to obtain the comparison result of the image to be processed and the sample image, thereby Identify the target object included in the image to be processed.
- the aforementioned monitoring device may be a bullet camera monitoring device, that is, the camera module may refer to a bullet camera module; the bullet camera module may be a wide-angle bullet camera, which may also be called a wide-angle box camera or a box camera.
- the image to be processed in step 610 may be an image obtained by the box camera module.
- the above-mentioned monitoring device may be a monitoring device in which the bullet camera module and the dome camera module cooperate, which may be called a bullet-dome cooperation monitoring device; that is, the camera module can include a box camera module and a dome camera module.
- the image to be processed in the above step 610 may be an image obtained by the box camera module, or may also be an image obtained by the dome camera module .
- the box camera module in the camera module obtains the above-mentioned image to be processed.
- the box camera module in the camera module can obtain the above-mentioned to-be-processed image.
- the dome camera module in the camera module is used to obtain the above-mentioned image to be processed.
- the target object can be included in the image to be processed, and the dome camera module in the camera module can be used to obtain the image to be processed according to the target coordinate information of the target object.
- the target coordinate information of the target object in the dome camera module is obtained by coordinate mapping from the coordinate information of the target object in the box camera module.
- the controller can obtain the coordinate information of the target object in the box camera module, map it through a preset algorithm to obtain the target coordinate information of the target object in the dome camera module, and then control the magnification or zoom parameters of the dome camera module, so that the dome camera module can monitor the target object and obtain a to-be-processed image that meets the recognition requirements.
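The coordinate mapping step can be illustrated with a projective (homography) mapping between the two image planes; the 3x3 matrix H below stands in for the preset algorithm's calibration data, which the application does not specify:

```python
def map_box_to_dome(x, y, H):
    """Map a pixel (x, y) in the box-camera image into dome-camera coordinates
    through a 3x3 homography H (calibration values are assumptions)."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w  # perspective divide yields dome-camera pixel coords
```

In practice H would be estimated once during the coordinate-system synchronization between the two camera modules.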
- the image to be processed can be obtained through the box camera module or the dome camera module in the camera module, and the image to be processed is sent to the processor; the processor can be used to connect the image to be processed with The sample image is compared with image features to obtain the comparison result.
- a neural network may be used to perform image feature extraction and image feature comparison between the image to be processed and the sample image to obtain a comparison result.
- control method may further include: the processor may send a comparison result to the controller; the controller may control whether the dome camera module in the camera module tracks and monitors the target object in the image to be processed according to the comparison result.
- the aforementioned comparison result may refer to the similarity value between the image to be processed and the sample image.
- the neural network processor can first obtain the image to be processed (for example, a pedestrian image) and the sample image (for example, blacklisted face image data), and then perform image feature extraction on the image to be processed and the sample image to obtain the image features of the image to be processed and the image features of the sample image; according to the image features of the image to be processed and the image features of the sample image, the similarity value of the image features is determined.
- the similarity value of the image feature can be measured by the cosine distance between the image feature of the image to be processed and the image feature of the sample image.
- the similarity value between the image to be processed and the sample image can be determined by the following equation: similarity = (x·y)/(‖x‖·‖y‖), where x denotes the image feature vector of the image to be processed, y denotes the image feature vector of the sample image, and ‖·‖ denotes the vector norm.
- the controller may compare the received similarity value with a preset threshold.
- when the similarity value is less than the preset threshold, it means that the image to be processed and the sample image include different target objects; for example, when the target object is a portrait or a face image, it can indicate that the monitored pedestrian is not a pedestrian in the blacklist database.
- when the similarity value is greater than or equal to the preset threshold, it means that the image to be processed and the sample image include the same target object; for example, when the target object is a portrait or a face image, it can indicate that the monitored pedestrian is a pedestrian in the blacklist database.
- the controller may control the dome camera module in the camera module to track and monitor the target object in the image to be processed in a rotating manner.
- the processor may compare the similarity value with a preset threshold value, and send the comparison result to the controller.
- when it is determined that the similarity value between the image to be processed and the sample image is greater than the preset threshold, the dome camera module can be controlled by the controller to continuously track the target object, so as to avoid losing the target object when it moves beyond the monitoring range of the box camera module.
- through the continuous tracking mechanism of the dome camera module proposed in the embodiment of this application, more information about the target object can be obtained, thereby improving the security performance of the monitoring device and improving the efficiency of solving crimes.
- the image acquired by the camera module can be the image acquired by the box camera module, or it can be the image acquired by the dome camera module; the different possible situations are explained in detail below.
- Fig. 8 is a schematic diagram of a control method provided by an embodiment of the present application. This control method can be applied to a monitoring device for gun-and-ball cooperation.
- the control method 700 shown in FIG. 8 includes steps 701 to 710, and these steps are respectively described in detail below.
- the control method shown in FIG. 8 is illustrated by taking the target object in the acquired image to be processed as a portrait.
- the control method shown in FIG. 8 can also be applied to a to-be-processed image that includes other target objects, where the target object may refer to an object whose recognition result obtained by a recognition algorithm is unstructured data.
- Step 701 The controller may send a coordinate system synchronization request to the box camera module.
- Step 702 The controller may send a coordinate system synchronization request to the dome camera module.
- through steps 701 and 702, the box camera module and the dome camera module can be requested to synchronize their coordinate systems.
- the above steps 701 and 702 can be performed at the same time, or step 702 can be performed first and step 701 executed afterwards; the execution sequence of step 701 and step 702 is not limited in this application.
- the box camera module and the dome camera module can be integrated into the body of an all-in-one camera, and can communicate with the controller and the NPU; the above controller and NPU can be integrated in the same chip, or the controller and NPU can also be deployed in two physically independent chips.
- Step 703 The box camera module may send the coordinate system synchronization success to the controller.
- Step 704 The dome camera module may send the coordinate system synchronization success to the controller.
- steps 703 and 704 may be performed at the same time, or step 704 may be performed first, and then step 703 may be performed.
- the present application does not limit the execution order of step 703 and step 704 in any way.
- Step 705 The controller sends sample data to the NPU.
- sample data may refer to sample images.
- the sample data may refer to a blacklisted image library; for example, it may be a blacklisted face image, or it may also be a blacklisted vehicle image.
- the above-mentioned sample data may refer to blacklist image features obtained after feature extraction of the sample image through a neural network algorithm.
- the sample data sent by the controller to the NPU may be the original blacklist image, or may be image features obtained after feature extraction on the blacklist database, and this application does not make any limitation on this.
- Step 706 The box camera module may send an image (for example, an image to be processed) to the NPU.
- the above-mentioned image may be an image taken by the box camera module within the monitoring range.
- the target object may be included in the image.
- the target object may be a human face, or the target object may be a vehicle with obvious characteristics and an unrecognizable license plate number.
- the image can be processed in the NPU through the detection network, the tracking network, and the image selection network.
- the detection network is used to obtain the coordinates of the portrait in the image
- the tracking network is used to mark the portrait in the image
- the image selection network is used to evaluate the image quality and determine the image with better image quality.
- the detection network, the tracking network, and the image selection network may be different algorithms executed in the NPU.
- the NPU receives multi-frame images sent by the box camera module; the detection network can detect the coordinates of the portrait in each frame of the multi-frame images; the tracking network can mark the portrait of the same pedestrian across the multi-frame images; the image selection network can evaluate the image quality of the multi-frame images and select the optimal frame image, that is, the image that needs to be recognized can be determined from the multi-frame images.
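The selection step can be sketched as keeping, per tracked pedestrian, the frame with the highest quality score; the dict layout and the scalar quality score below are illustrative assumptions for the outputs of the detection, tracking, and selection networks:

```python
def select_optimal_frames(frames):
    """frames: list of dicts like {"id": track_id, "quality": score, "coords": (x, y)},
    i.e. the per-frame outputs of the detection network (coords), the tracking
    network (id), and the selection network's quality score (quality).
    Returns the best frame per tracked pedestrian."""
    best = {}
    for f in frames:
        cur = best.get(f["id"])
        if cur is None or f["quality"] > cur["quality"]:
            best[f["id"]] = f
    return best
```

The optimal frame for each pedestrian is then passed on to the face comparison step.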
- after the optimal frame image is determined, step 707 is executed.
- Step 707 The NPU chip performs face comparison to determine the similarity value.
- when the sample data sent by the controller to the NPU is a sample image, the NPU may perform image feature extraction on the portrait in the image (for example, the above-mentioned optimal frame image) and on the sample image to obtain the image features of the portrait and the image features of the sample image; according to the image features of the portrait and the image features of the sample image, the similarity value of the image features is determined.
- when the sample data sent by the controller to the NPU is sample image features obtained through a neural network algorithm, the NPU may use the same neural network algorithm to perform image feature extraction on the portrait in the image (for example, the above-mentioned optimal frame image) to obtain the image features of the portrait; according to the image features of the portrait and the image features of the sample image, the similarity value of the image features is determined.
- the aforementioned portrait may also refer to a face image.
- the similarity value of the image feature can be measured by the cosine distance between the image feature of the portrait and the image feature of the sample image.
- the similarity value between the image to be processed and the sample image can be determined by the following equation: similarity = (x·y)/(‖x‖·‖y‖), where x denotes the image feature vector of the portrait, y denotes the image feature vector of the sample image, and ‖·‖ denotes the vector norm.
- Step 708 The NPU sends the similarity value to the controller.
- Step 709 The controller determines whether to continuously track the target object.
- the controller compares the preset threshold with the similarity value.
- when the similarity value is less than the preset threshold, it means that the image to be processed and the sample image include different portraits, which can indicate that the monitored pedestrian is not a pedestrian in the blacklist database; when the similarity value is greater than or equal to the preset threshold, it means that the image to be processed and the sample image include the same portrait, which can indicate that the monitored pedestrian is a pedestrian in the blacklist database.
- Step 710 When the similarity value is greater than or equal to the preset threshold, the controller controls the dome camera module to continuously track the target object.
- the controller can immediately send an alarm instruction and control the dome camera module to continuously track the target object.
- the above describes the case in which the image to be processed is the image obtained by the box camera module in the camera module; the following describes in detail the embodiment in which the dome camera module in the camera module obtains the image to be processed.
- Fig. 9 is a schematic diagram of a control method provided by an embodiment of the present application. This control method can be applied to a monitoring device for gun-and-ball cooperation.
- the control method 800 shown in FIG. 9 includes steps 801 to 815, and these steps are respectively described in detail below.
- the control method shown in FIG. 9 is illustrated by taking the target object in the acquired image to be processed as a portrait.
- the control method shown in FIG. 9 can also be applied to a to-be-processed image including other target objects, where the target object may refer to an object whose recognition result obtained by a recognition algorithm is unstructured data.
- Step 801 The controller may send a coordinate system synchronization request to the box camera module.
- Step 802 The controller may send a coordinate system synchronization request to the dome camera module.
- through steps 801 and 802, the box camera module and the dome camera module can be requested to synchronize their coordinate systems.
- the above steps 801 and 802 can be executed at the same time, or step 802 can be executed first and step 801 executed afterwards; the execution sequence of step 801 and step 802 is not limited in this application.
- the box camera module and the dome camera module can be integrated into the body of an all-in-one camera, and can communicate with the controller and the NPU; the above controller and NPU can be integrated in the same chip, or the controller and NPU can also be deployed in two physically independent chips.
- Step 803 The box camera module may send the coordinate system synchronization success to the controller.
- Step 804 The dome camera module may send the coordinate system synchronization success to the controller.
- step 803 and step 804 may be performed at the same time, or step 804 may be performed first, and then step 803 may be performed.
- the present application does not limit the execution order of step 803 and step 804 in any way.
- Step 805 The controller sends sample data to the NPU.
- sample data may refer to sample images.
- the sample data may refer to a blacklisted image library; for example, it may be a blacklisted face image, or it may also be a blacklisted vehicle image.
- the above-mentioned sample data may refer to blacklist image features obtained after feature extraction of the sample image through a neural network algorithm.
- sample data sent by the controller to the NPU may be the original blacklist image, or may be image features obtained after feature extraction on the blacklist database, and this application does not make any limitation on this.
- Step 806 The box camera module may send the first image to the NPU.
- the first image may be an image taken by the box camera module within the monitoring range.
- Step 807 The NPU detects the portrait coordinates in the first image.
- the NPU may send the portrait coordinates in the first image obtained through the detection network to the controller, and the controller then controls the dome camera module to obtain the image of the pedestrian.
- Step 808 The NPU sends the portrait coordinates in the first image to the controller.
- the controller can obtain the target coordinate information of the portrait in the dome camera module through coordinate mapping according to the portrait coordinates in the acquired first image.
- Step 809 The controller sends target coordinate information to the camera module of the dome camera.
- Step 810 The dome camera module performs pan/tilt/zoom (Pan/Tilt/Zoom, PTZ) adjustment according to the target coordinates, that is, omni-directional movement of the gimbal together with lens zoom and magnification control.
- the dome camera module obtains an image of the target after being adjusted to the best monitoring position through translation, rotation, and zooming according to the target coordinate information.
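A simplified PTZ computation consistent with the adjustment described above; all constants below (frame size, degrees per pixel, target size ratio) are illustrative assumptions, not values from the application:

```python
def ptz_command(cx, cy, bbox_w, frame_w=1920, frame_h=1080,
                deg_per_px=0.05, target_ratio=0.3):
    """Derive pan/tilt offsets that center the target (cx, cy) in the frame,
    and a zoom factor that scales its bounding box width bbox_w to a target
    share of the frame width."""
    pan = (cx - frame_w / 2) * deg_per_px     # degrees to rotate horizontally
    tilt = (cy - frame_h / 2) * deg_per_px    # degrees to rotate vertically
    zoom = (frame_w * target_ratio) / bbox_w  # magnification factor
    return pan, tilt, zoom
```

A target already centered and at the desired size yields zero pan/tilt offsets and a zoom factor of 1.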
- Step 811 The dome camera module may send the second image to the NPU.
- the NPU can process the acquired multi-frame second images through the detection network, the tracking network, and the image selection network to obtain the optimal frame image among the multi-frame second images; for the specific processing flow, see step 706 in FIG. 8 above, which is not repeated here.
- the optimal frame image among the multiple frames of second images may be identified, that is, step 812 is performed.
- Step 812 The NPU performs face comparison and determines the similarity value.
- when the sample data sent by the controller to the NPU is a sample image, the NPU may perform image feature extraction on the portrait in the second image (for example, the optimal frame image among the multi-frame second images) and on the sample image to obtain the image features of the portrait and the image features of the sample image; the similarity value of the image features is determined according to the image features of the portrait and the image features of the sample image.
- when the sample data sent by the controller to the NPU is sample image features obtained through a neural network algorithm, the NPU may use the same neural network algorithm to extract the image features of the portrait in the second image to obtain the image features of the portrait; the similarity value of the image features is determined according to the image features of the portrait and the image features of the sample image.
- the aforementioned portrait may also refer to a face image.
- the cosine distance between the image feature of the portrait and the image feature of the sample image can be used to measure the similarity value of the image feature.
- the similarity value between the image to be processed (for example, the second image) and the sample image can be determined by the following equation: similarity = (x·y)/(‖x‖·‖y‖), where x denotes the image feature vector of the portrait, y denotes the image feature vector of the sample image, and ‖·‖ denotes the vector norm.
- Step 813 The NPU sends the similarity value to the controller.
- Step 814 The controller determines whether to continuously track the target object.
- the controller compares the preset threshold with the similarity value.
- when the similarity value is less than the preset threshold, it means that the second image and the sample image include different portraits, which can indicate that the monitored pedestrian is not a pedestrian in the blacklist database; when the similarity value is greater than or equal to the preset threshold, it means that the second image and the sample image include the same portrait, which can indicate that the monitored pedestrian is a pedestrian in the blacklist database.
- Step 815 When the similarity value is greater than or equal to the preset threshold, the controller may control the dome camera module to continuously track the target object.
- the controller can immediately send an alarm instruction and control the dome camera module to continuously track the target object.
- In the embodiments of this application, the image to be processed can be obtained through the box camera module or the dome camera module in the camera module and sent to the neural network processor; the neural network processor can then perform an image feature comparison between the image to be processed and the sample image to obtain a comparison result.
- Moreover, the dome camera module can be controlled by the controller to continuously track the target object, so that the target object is not lost.
- The continuous tracking mechanism of the dome camera module proposed in the embodiments of this application can obtain more information about the target object, thereby improving the security performance of the monitoring device.
- The size of the sequence numbers of the above processes does not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
- It should be understood that the disclosed system, device, and method may be implemented in other ways.
- The device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
- Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
- The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.
Abstract
The invention relates to a monitoring device (110) and a control method. The monitoring device (110) comprises a camera module (210), a controller (220), and a processor (230). The camera module (210) is used to obtain an image to be processed; the controller (220) is used to send a sample image to the processor (230); and the processor (230) is used to perform an image feature comparison on the image to be processed and the sample image so as to obtain a comparison result. Consequently, the processing and analysis of the image to be processed can be performed within the monitoring device (110), which avoids the long-delay problem that arises when the image to be processed is transmitted to a back-end server (120) for processing, and improves the processing efficiency for the image to be processed.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2019/115138 WO2021082006A1 (fr) | 2019-11-01 | 2019-11-01 | Dispositif de surveillance et procédé de commande |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2021082006A1 true WO2021082006A1 (fr) | 2021-05-06 |
Family
ID=75715733
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2019/115138 Ceased WO2021082006A1 (fr) | 2019-11-01 | 2019-11-01 | Dispositif de surveillance et procédé de commande |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2021082006A1 (fr) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113709400A (zh) * | 2021-07-29 | 2021-11-26 | 四川天翼网络服务有限公司 | ARM architecture-based system and method for real-time detection of examinee violations |
| CN119854638A (zh) * | 2024-12-27 | 2025-04-18 | 杭州微影软件有限公司 | Target matching method and apparatus, and electronic device |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108416285A (zh) * | 2018-03-02 | 2018-08-17 | 深圳市佳信捷技术股份有限公司 | Box-dome camera linkage monitoring method and apparatus, and computer-readable storage medium |
| CN108900758A (zh) * | 2018-09-07 | 2018-11-27 | 厦门博聪信息技术有限公司 | Intelligent face-capturing camera with box-dome camera cooperation and face capturing method thereof |
| US20190007623A1 (en) * | 2017-06-30 | 2019-01-03 | Polycom, Inc. | People detection method for auto-framing and tracking in a video conference |
| CN110084139A (zh) * | 2019-04-04 | 2019-08-02 | 长沙千视通智能科技有限公司 | Vehicle re-identification method based on multi-branch deep learning |
| CN110210404A (zh) * | 2019-05-31 | 2019-09-06 | 深圳算子科技有限公司 | Face recognition method and system |
| CN110232307A (zh) * | 2019-04-04 | 2019-09-13 | 中国石油大学(华东) | Multi-frame joint face recognition algorithm based on unmanned aerial vehicle |
- 2019-11-01: WO PCT/CN2019/115138 patent application WO2021082006A1 filed; current status: not active (Ceased)
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19950517; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 19950517; Country of ref document: EP; Kind code of ref document: A1 |