WO2022134388A1 - Method and device for detecting user fare evasion, electronic device, storage medium and computer program product - Google Patents
Method and device for detecting user fare evasion, electronic device, storage medium and computer program product
- Publication number
- WO2022134388A1 (application PCT/CN2021/086701)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- target object
- feature
- identity
- identity feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Definitions
- the present disclosure relates to, but is not limited to, the field of computer technology, and in particular to a method and device for detecting fare evasion during transit, an electronic device, a storage medium, and a computer program product.
- the embodiments of the present disclosure propose a technical solution for detecting fare evasion during transit, which can be implemented as follows:
- Embodiments of the present disclosure provide a method for detecting fare evasion during transit, including:
- acquiring a first identity feature obtained by recognizing a first image of a target object; in the case that the first identity feature meets a preset occlusion condition, acquiring a second image, wherein the first image and the second image are collected at different shooting angles for the target object; recognizing the second image to obtain a second identity feature of the second image; associating the first identity feature with the second identity feature; and identifying the target object based on the associated second identity feature, and confirming that the target object has not evaded the fare if the identity of the target object is successfully identified.
- the method further includes: identifying the target object based on the associated second identity feature, and confirming that the target object has evaded the fare when the identity of the target object cannot be identified.
- the method further includes: when the first identity feature cannot be identified from the first image, confirming that the target object has evaded the fare.
- the method further includes: receiving an infrared signal, wherein the infrared signal is triggered when the target object leaves the identification area used for identity recognition; and collecting and saving, according to the receiving time of the infrared signal, pictures and/or videos of the target object passing through the identification area.
- the method further includes: searching for an identification record within a preset time period in which the receiving time falls; and confirming that the target object has evaded the fare if no identification record of the target object exists within the preset time period.
- the occlusion conditions include at least one of the following: the first identity feature does not contain a face feature; the quality score of the face corresponding to the first identity feature is less than a preset quality score threshold; the occlusion area of the face corresponding to the first identity feature is greater than a preset area threshold.
- recognizing the second image to obtain the second identity feature of the second image includes: performing face and human body detection on the second image to obtain a detection result of at least one first object; and screening, according to preset screening conditions, the detection result of the at least one first object to obtain the second identity feature of the target object.
- the screening conditions include at least one of the following: the position of the first object is within a preset recognition area; the image area of the first object in the second image is the largest; the first object is closest to a preset object.
- performing face and human body detection on the second image to obtain the detection result of the at least one first object includes: performing face detection and human body detection on the second image to obtain at least one human face and at least one human body; and associating the at least one human face with the at least one human body in the second image to obtain the detection result of the at least one first object.
- associating the first identity feature with the second identity feature includes: matching the human body feature included in the first identity feature with the human body feature included in the second identity feature; and, in the case that the human body feature of the first identity feature matches the human body feature of the second identity feature, associating the first identity feature with the second identity feature.
- the method further includes: in the case that the human body feature of the first identity feature does not match the human body feature of the second identity feature, re-acquiring a second image based on the acquisition time of the first image, and associating the first identity feature with the second identity feature of the re-acquired second image, until a preset number of association attempts is reached or the human body feature of the first identity feature matches the human body feature of the second identity feature.
- identifying the target object based on the associated second identity feature includes: identifying the target object based on the face feature included in the associated second identity feature.
- the method further includes: when the identity of the target object is successfully identified, generating and saving an identification record of the target object, wherein the identification record includes one or more of identification time, user information, and identification location.
- the first image is captured at a first position at a shooting angle towards the target object
- the second image is captured at a second position at a shooting angle towards the target object
- the second position is located above the first position
- the embodiments of the present disclosure provide a fare evasion detection device, including:
- a first acquiring part configured to acquire a first identity feature obtained by identifying the first image of the target object
- the second acquiring part is configured to acquire a second image under the condition that the first identity feature complies with a preset occlusion condition, wherein the first image and the second image are collected at different shooting angles for the target object;
- an identification part configured to identify the second image to obtain a second identity feature of the second image
- an association part configured to associate the first identity feature with the second identity feature
- the determining part is configured to identify the target object based on the associated second identity feature, and to confirm that the target object has not evaded the fare if the identity of the target object is successfully identified.
- An embodiment of the present disclosure provides an electronic device, including: a processor; and a memory configured to store instructions executable by the processor; wherein the processor is configured to call the instructions stored in the memory to execute some or all of the steps of the above method.
- Embodiments of the present disclosure provide a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
- An embodiment of the present disclosure provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and when the computer program is read and executed by a computer, some or all of the steps of the methods described in the embodiments of the present disclosure are implemented.
- in the embodiments of the present disclosure, the first identity feature obtained by recognizing the first image of the target object may be acquired, and the second image is acquired when the first identity feature complies with a preset occlusion condition. Then, the first identity feature obtained from the first image and the second identity feature obtained from the second image are associated, the target object is identified based on the associated second identity feature, and it is confirmed that the target object has not evaded the fare if the identity of the target object is successfully identified. In this way, when the target object cannot be identified through the first image, it can be identified through the second image associated with the first image, which reduces the cases in which the identity cannot be identified because the target object deliberately occludes its face or for similar reasons, and improves the accuracy of fare evasion detection.
- FIG. 1 shows a schematic diagram of an implementation flow of a fare evasion detection method provided by an embodiment of the present disclosure;
- FIG. 2 shows a schematic diagram of a first position and a second position provided by an embodiment of the present disclosure;
- FIG. 3A shows a schematic diagram of an implementation flow of a fare evasion detection method provided by an embodiment of the present disclosure;
- FIG. 3B shows a schematic diagram of an implementation flow of a fare evasion detection method provided by an embodiment of the present disclosure;
- FIG. 3C shows a schematic flowchart of a fare evasion detection method provided by an embodiment of the present disclosure during the process of passengers entering and exiting a station;
- FIG. 4 shows a schematic diagram of the composition and structure of a fare evasion detection device provided by an embodiment of the present disclosure;
- FIG. 5 shows a schematic diagram of the composition and structure of an electronic device provided by an embodiment of the present disclosure;
- FIG. 6 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- the fare evasion detection solution provided by the embodiments of the present disclosure can be applied to fare evasion recognition scenarios in rail transit, scenic spots, and other settings. It recognizes two images taken from different shooting angles, so that even if an entering passenger maliciously avoids the camera or deliberately covers the face, and therefore cannot be identified in one image, the passenger can still be identified through the image from the other shooting angle, thereby reducing the occurrence of fare evasion, saving human resources, and reducing missed inspections caused by human fatigue.
- the fare evasion detection solution provided by the embodiments of the present disclosure is also applicable to identity recognition scenarios with a large flow of people, so as to meet the requirements of fare evasion recognition in various scenarios.
- the fare evasion detection method may be executed by a terminal device, a server, or other types of electronic devices, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, etc.
- the fare evasion detection method may be implemented by a processor invoking computer-readable instructions stored in a memory.
- the method may be performed by a server.
- FIG. 1 shows a flowchart of a fare evasion detection method according to an embodiment of the present disclosure. As shown in FIG. 1, the fare evasion detection method includes:
- Step S11: acquire the first identity feature obtained by recognizing the first image of the target object.
- the electronic device may acquire the first image collected for the target object.
- the target object may be an object that currently needs to be identified in the shooting scene.
- the shooting scene may include multiple objects, and the objects may be pedestrians, users, passengers, and the like.
- the electronic device may have a photographing function and may photograph the target object at a certain shooting angle to obtain the first image collected at the first shooting angle.
- the electronic device may acquire the first image captured by the first capturing device at the first capturing angle.
- the first image may also be a video frame in the video.
- the electronic device may acquire the first video captured by the first camera at the first camera angle, and then extract a video frame from the first video as the first image.
- the electronic device may decode the 1080p video stream of the first video into single-frame images and convert each frame into RGB format to obtain the first image and/or the second image.
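- As an illustration only (not part of the original disclosure), a minimal Python sketch of this decoding step might use OpenCV as follows; the stream source is a placeholder, and any cv2-readable URL or file path would work:

```python
import cv2

def read_rgb_frames(stream_url: str):
    """Decode a video stream (e.g., a 1080p feed) into single RGB frames."""
    cap = cv2.VideoCapture(stream_url)  # placeholder source, e.g. a camera URL or file
    try:
        while True:
            ok, frame_bgr = cap.read()  # decode the next single frame (BGR by default)
            if not ok:
                break
            # convert each decoded frame into RGB format, as described above
            yield cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    finally:
        cap.release()
```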
- the first image may be identified, for example, the first image may be identified through some identification algorithms or a neural network to obtain the identified first identity feature.
- a face and a human body in the first image may be identified, and the obtained first identity feature includes a face feature and/or a human body feature.
- the electronic device may also directly acquire the first identity feature obtained by recognizing the first image of the target object through other devices.
- the first identity feature may be a feature used to represent the identity of the target object, and the first identity feature may include a face feature and/or a human body feature.
- when the first identity feature cannot be identified from the first image, it may be confirmed that the target object has evaded the fare.
- in the case of acquiring the first image, the target object may be identified through the first image, that is, the first image may be recognized. If the first identity feature of the target object cannot be identified from the first image, it can be considered that the target object cannot be identified through the first image, and it is confirmed that the target object has evaded the fare.
- Step S12: in the case that the first identity feature complies with a preset occlusion condition, acquire a second image, wherein the first image and the second image are collected at different shooting angles for the target object.
- if the first identity feature complies with the preset occlusion condition, it can be considered that the target object cannot be identified by the first identity feature, so the associated second image may be further acquired and used to authenticate the target object.
- the first image and the second image are collected at different shooting angles for the target object; the shooting angle can be understood as the relative viewing angle of the photographing device with respect to the target object, for example, level shooting, overhead shooting, or low-angle shooting.
- the electronic device may acquire a second image captured by the photographing device at a second photographing angle, where the first photographing angle and the second photographing angle are different.
- the second image can also be a video frame in the video.
- the first image is captured at a first position with a shooting angle towards the target object
- the second image is captured at a second position with a shooting angle towards the target object
- both the first image and the second image can be images captured from the front of the target object.
- the second position may be located above the first position, and the first image and the second image may show the front of the target object from different viewing angles, so that even if the front of the target object is partially occluded at one shooting angle, it can still be exposed at the other shooting angle, and the identity of the target object can then be recognized through the frontal image of the target object.
- FIG. 2 shows a schematic diagram of a first position and a second position provided by an embodiment of the present disclosure.
- the first position and the second position can be as shown in FIG. 2
- a photographing device can be set at the first position 21 and the second position 22 respectively, and the photographing device can shoot facing the target object to obtain the first image and the second image respectively.
- the photographing devices set at the first position and the second position can respectively photograph the target object, so as to obtain the first image collected towards the target object at the first position and the second image collected towards the target object at the second position.
- the preset occlusion conditions may include at least one of the following: the first identity feature does not include a face feature; the face quality corresponding to the first identity feature is less than a preset quality threshold; the occlusion area of the face corresponding to the first identity feature is greater than a preset area threshold.
- if the first identity feature does not include a face feature, the target object cannot be identified by the first identity feature.
- if the quality of the face corresponding to the first identity feature is less than the preset quality threshold, it can be considered that the quality of the face of the target object provided by the first image is low, and it may not be possible to identify the target object through the face in the first image.
- if the occlusion area of the face corresponding to the first identity feature is greater than the preset area threshold, it can be considered that the face of the target object provided by the first image is incomplete, and key areas of the face may be occluded.
- if key areas of the face, such as the eyes, nose, and mouth, are occluded, the target object cannot be identified through the face in the first image.
- the preset occlusion condition can thus be used as an a priori condition for whether the first identity feature can identify the target object; with the preset occlusion condition, a judgment can be made quickly, improving the efficiency of identity recognition.
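- For illustration only, a minimal sketch of such an occlusion check in Python; the threshold values and the feature-dictionary layout are assumptions, since the disclosure leaves them open:

```python
# Illustrative thresholds; the disclosure does not fix their concrete values.
QUALITY_THRESHOLD = 0.6          # preset face-quality score threshold (assumed)
OCCLUSION_AREA_THRESHOLD = 0.3   # preset occlusion-area threshold (assumed)

def meets_occlusion_condition(identity_feature: dict) -> bool:
    """Return True if the first identity feature complies with any preset occlusion condition."""
    face = identity_feature.get("face")  # None if no face feature was extracted
    if face is None:
        return True  # condition 1: the identity feature contains no face feature
    if face.get("quality", 1.0) < QUALITY_THRESHOLD:
        return True  # condition 2: the face quality score is below the threshold
    if face.get("occluded_area", 0.0) > OCCLUSION_AREA_THRESHOLD:
        return True  # condition 3: the occluded area exceeds the area threshold
    return False
```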
- Step S13: recognize the second image to obtain the second identity feature of the second image.
- the second image may be identified, for example, the second image may be identified through some identification algorithms or a neural network to obtain the identified second identity feature.
- the face and the human body in the second image may be identified, and the obtained second identity feature includes the face feature and/or the human body feature.
- Step S14: associate the first identity feature with the second identity feature.
- the first identity feature and the second identity feature may be matched; if they match, it may be considered that the first identity feature and the second identity feature belong to the same target object, that is, they can be associated as the identity features of the same target object.
- the human body feature included in the first identity feature may be matched with the human body feature included in the second identity feature.
- the human body feature of the first identity feature and the face feature of the second identity feature may be associated.
- the human body features included in the first identity feature can be matched with the human body features included in the second identity feature, for example, by calculating the distance between the human body feature of the first identity feature and the human body feature of the second identity feature, such as the Euclidean distance or the cosine distance. When the distance is less than or equal to a preset value, it is determined that the human body feature of the first identity feature matches the human body feature of the second identity feature, and it can be considered that the first identity feature and the second identity feature belong to the same target object.
- the human body feature of the first identity feature and the face feature of the second identity feature may be associated; that is, the face feature included in the second identity feature and the human body feature of the first identity feature are associated as the face feature and human body feature of the same target object, so as to realize the association between the first identity feature and the second identity feature. Identity recognition can then be performed using the face feature included in the second identity feature.
- if the distance between the human body feature of the first identity feature and the human body feature of the second identity feature is greater than the preset value, it can be considered that the two human body features do not match, that is, the first identity feature and the second identity feature belong to different target objects and cannot be associated.
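- A minimal sketch of such a distance-based match, here using the cosine distance on feature vectors (the threshold value is an assumption):

```python
import numpy as np

MATCH_DISTANCE = 0.4  # preset distance value; illustrative only

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two body-feature vectors."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def bodies_match(body_feat_1: np.ndarray, body_feat_2: np.ndarray) -> bool:
    """Per the embodiment, features match when their distance is at most the preset value."""
    return cosine_distance(body_feat_1, body_feat_2) <= MATCH_DISTANCE
```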
- in this case, the second image may be re-acquired based on the acquisition time of the first image, and the first identity feature may be associated with the second identity feature of the re-acquired second image, until a preset number of association attempts is reached or the human body feature of the first identity feature matches the human body feature of the second identity feature.
- for example, the preset time period in which the acquisition time of the first image falls may be determined, and a second image captured within that preset time period may be re-acquired, so as to obtain, as far as possible, a second image containing the same target object as the first image.
- the preset time period and the preset association times can be set according to actual application scenarios.
- the preset time period can be set to a duration of 20s, 30s, etc.
- the preset number of association attempts can be set to 3 to 10, etc.
- the present disclosure does not limit the specific preset time period or the preset number of association attempts.
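- Under those settings, the re-acquisition loop might be sketched as follows; `acquire_second_image` and `extract_feature` are hypothetical helpers standing in for the capture and recognition steps described above:

```python
MAX_ASSOCIATIONS = 5   # preset number of association attempts (3 to 10 per the embodiment)
TIME_WINDOW_S = 30.0   # preset time period around the first image's acquisition time

def associate_with_retries(first_feature, acquire_second_image, extract_feature, bodies_match):
    """Re-acquire second images near the first image's acquisition time until the body
    features match or the preset number of association attempts is exhausted."""
    t0 = first_feature["acquired_at"]
    for _ in range(MAX_ASSOCIATIONS):
        second_image = acquire_second_image(t0 - TIME_WINDOW_S / 2, t0 + TIME_WINDOW_S / 2)
        if second_image is None:
            continue  # nothing captured in the window; try again
        second_feature = extract_feature(second_image)
        if bodies_match(first_feature["body"], second_feature["body"]):
            return second_feature  # association succeeded
    return None  # no match within the preset number of attempts
```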
- Step S15: identify the target object based on the associated second identity feature, and confirm that the target object has not evaded the fare if the identity of the target object is successfully identified.
- the face feature of the target object may be determined from the second identity feature associated with the first identity feature, and the target object is then identified based on the face feature included in the associated second identity feature to determine the identity information of the target object. If the identity of the target object is successfully identified, it can be confirmed that the target object has not evaded the fare. For example, in a subway entry scene, the face feature included in the second identity feature associated with the first identity feature can be compared with the face features of passengers pre-stored in a database.
- if the comparison succeeds, the target object is the pre-stored passenger, and the identity information of the target object can further be obtained, thereby successfully identifying the target object and confirming that the target object has not evaded the fare.
- the identity information may include information such as a face image, a user identification number, an associated account number, and the like.
- an identification record of the target object may be generated and saved, and the identification record may include one or more of identification time, user information, and identification location, providing an information basis for subsequent queries related to the target object. For example, in a rail transit scene, when the target object enters the subway station, the target object can be identified, and when its identity is successfully identified, a ride record (an identification record) of the target object can be generated and saved.
- the corresponding fare can be automatically deducted from the associated account included in the identity information according to the consumption information of the target object, so as to realize seamless passage for the user when entering and exiting.
- the target object is identified based on the associated second identity feature, and when the identity of the target object cannot be identified, it is confirmed that the target object has evaded the fare.
- the face feature included in the second identity feature associated with the first identity feature can be compared with the face features of passengers pre-stored in the database; when the face feature included in the second identity feature does not match any pre-stored passenger's face feature, it can be considered that the identity of the target object cannot be recognized, and the target object can be considered to have evaded the fare.
- an infrared generating device can also be set at the exit boundary of the identification area used for identifying the target object; the target object triggers an infrared signal when leaving the identification area, so that identification can be assisted by the infrared signal.
- the electronic device can receive the infrared signal and then, according to its receiving time, collect and save pictures and/or videos of the target object passing through the recognition area, for example pictures and/or videos taken at different shooting angles within a preset time period. These pictures and/or videos can serve as subsequent evidence of fare evasion by the target object, be used to respond to wrongful-deduction complaints, or be retained as an on-site record.
- the identification records within the preset time period in which the receiving time of the infrared signal falls can also be searched, that is, the identification records saved in a period before and after the receiving time can be searched.
- pictures and/or videos collected within the preset time period may be acquired and saved, and may be used as subsequent evidence of fare evasion by the target object or retained as an on-site record.
- the pictures and/or videos of the entry site can be stored by means of infrared signal triggering, to assist in reproducing the scene during fare evasion investigation and judgment.
- in the case where an identification record exists within the preset time period, it indicates that a target object left the identification area at the receiving time and that an identification record exists for that target object, so it can be considered that the target object was successfully identified.
- videos captured within the preset time period may also be acquired and saved. That is, the target object triggers the infrared signal whether it enters the subway station normally or enters by evading the fare.
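- A minimal sketch of the triggered record lookup, assuming identification records keyed by timestamp (the window length and record layout are illustrative):

```python
from dataclasses import dataclass

WINDOW_S = 30.0  # preset time period around the infrared receiving time (assumed)

@dataclass
class IdentificationRecord:
    object_id: str
    timestamp: float  # seconds since epoch

def evaded_fare(records: list, receive_time: float) -> bool:
    """Confirm fare evasion if no identification record falls within the preset
    time period around the infrared signal's receiving time."""
    lo, hi = receive_time - WINDOW_S, receive_time + WINDOW_S
    return not any(lo <= r.timestamp <= hi for r in records)
```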
- the embodiments of the present disclosure can confirm the identity of a passenger through pictures taken from different angles in a seamless-passage scene, without setting up ticket gates, thereby improving the accuracy of passenger identity recognition during seamless passage.
- the second image may be identified to obtain the second identity feature of the second image.
- the following provides an implementation manner of identifying the second image to obtain the second identity feature of the second image.
- face and human body detection may be performed on the second image to obtain a detection result of at least one first object.
- the second image may be input into a convolutional neural network for face and human body detection, and the convolutional neural network outputs the detection result of at least one first object.
- the first object may be an object such as a pedestrian or a passenger, and the target object may be among the multiple first objects. Since the detection result of at least one first object is obtained after face and human body detection, the target object needs to be determined among the at least one first object.
- the detected faces and human bodies can be filtered using preset screening conditions, thereby reducing erroneous detection of the target object. According to the preset screening conditions, the detection results of the at least one first object can be screened; that is, the multiple faces and human bodies detected in the second image can be filtered, faces and human bodies that do not belong to the target object can be removed, and the detection result of the target object can be obtained. Then, according to the detection result of the target object, the second identity feature of the target object can be extracted from the image area where the target object is located. When the detection result of an image includes multiple faces and human bodies, faces and human bodies that do not belong to the target object can be quickly filtered out through the preset screening conditions, so that the target object can be accurately determined among the multiple first objects.
- the detection result of the face and human body detection may include the detection frame and position information of the face and the human body of the same object.
- the detection frame can be used to identify the face and the human body of an object in the image, so that the detection frame can intuitively identify the face and the human body of the same object.
- the location information can indicate the location of an object, which can be an image position, or a spatial position in the world coordinate system; for example, the image position can be converted into the spatial position of the object in the world coordinate system.
- the detection result may further include information such as the quality of the face and the human body, the size of the detection frame, the object identification number (ID), and the occlusion area of the face.
- the quality of the face and the human body can be used as an a priori condition for identity recognition. For example, if the face quality of the target object is less than a certain quality threshold, it can be considered that the face of the target object provided by the first image or the second image is of low quality, the target object may not be identifiable through this image, and the first image or the second image of the target object can be re-acquired.
- the size of the detection frame may include the length and width of the detection frame, and in some examples, may also include the area of the image region where the detection frame is located.
- the object identification number may be a unique identification number of an object. In some examples, the detection results of multiple first images or second images may be tracked: the detection results across images are matched, the detection results belonging to the target object in the multiple images are determined, and the same object identification number is set for those detection results.
- the face occlusion area can be the area of the occluded region of the face, and can be used as an a priori condition for identification. For example, if the face occlusion area of the target object is greater than a certain area threshold, it can be considered that the target object may not be identifiable from the face provided by the image, and the first image or the second image of the target object can be obtained again.
- the filtering conditions may be set according to actual application scenarios, and the embodiments of the present disclosure do not limit the specific filtering conditions.
- the screening conditions may include one or more of the following: the position of the first object is within the preset recognition area; the image area of the first object in the second image is the largest; the first object is closest to the preset object.
- the process of recognizing the first image to obtain the first identity feature of the first image may be similar to the process of obtaining the second identity feature of the second image, and may be implemented in a similar implementation manner.
- the target object may be a passenger entering a subway station.
- the target object can be photographed to obtain the first image and/or the second image.
- for the detection results of multiple first objects in either the first image or the second image, it can be determined from the detection result of each first object whether that first object is in the preset recognition area. The preset recognition area may be a check-in area of a subway station, and the target object is expected to be in the check-in area, so whether a first object is the target object can be determined according to whether it is in the check-in area.
- since the first image and the second image are captured facing the target object, and the photographing device is usually set near the check-in area in order to clearly capture the face of the target object, the image area of the target object in the image is usually the largest among the multiple first objects. Therefore, according to the detection result of the at least one first object, it can be determined whether the area of the image region where a first object is located is the largest; if so, that first object may be the target object.
- the target object may also be closest to a preset object, for example a photographing device or a check-in device, so whether a first object is closest to the preset object can be determined according to the detection result of the at least one first object. If a first object is closest to the preset object, it may be the target object.
- in this way, the target object to be identified can be quickly determined among the multiple first objects, thereby improving the efficiency and accuracy of identification.
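- For illustration, a sketch of such screening under the three conditions above; `recognition_area.contains`, the detection-dictionary fields, and the tie-breaking by score are assumptions, not the disclosed implementation:

```python
from typing import Optional

def screen_first_objects(detections: list, recognition_area, reference_point) -> Optional[dict]:
    """Pick the detection most likely to be the target object using the screening
    conditions: inside the recognition area, largest image area, closest to a preset object."""
    # condition 1: keep only objects whose position lies inside the recognition area
    candidates = [d for d in detections if recognition_area.contains(d["position"])]
    if not candidates:
        return None
    # conditions 2 and 3: prefer the largest image area, then the smallest
    # distance to the preset object (e.g., the check-in device)
    def score(d):
        dist = ((d["position"][0] - reference_point[0]) ** 2 +
                (d["position"][1] - reference_point[1]) ** 2) ** 0.5
        return (d["box_area"], -dist)
    return max(candidates, key=score)
```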
- face detection and human body detection may be performed separately on the second image to obtain at least one human face and at least one human body. The at least one human face in the second image is then associated with the at least one human body, so as to obtain the detection result of the at least one first object.
- at least one face and at least one human body can be associated according to the distance between the face and the human body, the body posture formed by the face and the human body, and so on, so that the distance between an associated face and human body is smaller than a distance threshold and the figure composed of the face and the human body conforms to a normal human posture.
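- A simplified sketch of that pairing step; the distance threshold and the "upper half" posture heuristic are illustrative stand-ins for the distance and posture criteria described above:

```python
DIST_THRESHOLD = 80.0  # pixel distance between a face box and a body box (assumed)

def box_center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def associate_faces_and_bodies(faces: list, bodies: list) -> list:
    """Greedily pair each face with the nearest unused body whose top-center is close
    to the face and whose geometry is consistent with a normal human posture."""
    pairs, used = [], set()
    for f in faces:
        fx, fy = box_center(f["box"])
        best, best_d = None, DIST_THRESHOLD
        for i, b in enumerate(bodies):
            if i in used:
                continue
            bx1, by1, bx2, by2 = b["box"]
            head_x, head_y = (bx1 + bx2) / 2.0, by1  # top-center of the body box
            d = ((fx - head_x) ** 2 + (fy - head_y) ** 2) ** 0.5
            # posture check: the face should sit in the upper half of the body box
            if d < best_d and fy <= (by1 + by2) / 2.0:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            pairs.append({"face": f, "body": bodies[best]})
    return pairs
```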
- FIG. 3A shows an implementation flowchart of a fare evasion detection method provided by an embodiment of the present disclosure, including the following steps:
- the first image may be a video frame extracted from a first video shot by a channel photographing device at a first position.
- the electronic device can directly read the video frame of the first video, thereby saving the time for acquiring the first image and reducing the delay in reading the first image.
- the first image can be input into a neural network, and the neural network can be used to perform face and human body detection, face and human body matching, and target tracking on the first image in sequence, obtaining multiple detection results of first objects; the detection results can include information such as the quality, position, and size of the face and the human body, and the object ID (identity).
- the multiple detection results may be screened to filter out the detection result of the first object located in the recognition area, and it is determined that the first object in the recognition area is the target object.
- facial features and human body features of the target object may be extracted as the first identity feature of the target object.
- S203: determine whether there is a recognizable face in the first image.
- the second image may be a video frame extracted from a second video captured by an overhead photographing device (e.g., an overhead camera) at the second position.
- the electronic device can obtain the video stream of the second video output by the overhead photographing device, for example the Real Time Streaming Protocol (RTSP) video stream of the second video, and extract the second image from that video stream.
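- A minimal sketch of grabbing a near-live frame from such an RTSP stream with OpenCV; the URL is a placeholder:

```python
import cv2

def latest_frame_from_rtsp(rtsp_url: str):
    """Grab the most recent frame from an RTSP stream (URL is a placeholder)."""
    cap = cv2.VideoCapture(rtsp_url)
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)  # keep the buffer small so reads stay near-live
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

# usage sketch: second_image = latest_frame_from_rtsp("rtsp://<camera-address>/stream")
```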
- the second image can be input into the neural network to detect the face and the human body in the second image and match them, obtaining the detection results of multiple first objects in the second image.
- the multiple first objects may be screened to select the target object located in the identification area, and the second identity feature of the target object in the second image is further determined.
- the second image may be acquired from the video stream of the second video again.
- the above steps S205-S206 can be performed independently; that is, the face and human body detection processes for the first image and the second image can run independently of each other, and multiple second images can continuously provide the second identity features detected in the identification area, providing associated face and human body information for check-in verification.
- the first identity feature and the second identity feature may be stored in the form of data cache, so that the delay when the first identity feature and the second identity feature are associated can be reduced.
- identification is performed based on the face feature in the second identity feature. If the face feature of the second identity feature cannot identify the identity of the target object, the target object can be considered a fare-evading passenger.
- S208: receive an infrared signal, and search, according to the receiving time of the infrared signal, for an identification record within the preset time period in which the receiving time falls; save the video captured within the preset time period.
- the face thumbnail corresponding to the target object can also be compared with the passenger data in the database to identify the identity information of the target object. Then, an identification record (such as a ride record) can be generated according to the identity information of the target object, and the fare can be deducted.
- the fare evasion detection solution provided by the embodiments of the present disclosure can identify the target object through images collected at different shooting angles, so that even if an image collected at one shooting angle does not include the face of the target object, or the face is occluded, the target object can still be identified by means of an image collected from another shooting angle, which can reduce the chance of fare evasion in the subway entry scene.
- the embodiments of the present disclosure also use infrared signals to trigger the detection of entering passengers, which achieves a higher detection success rate for fare evasion, can serve as a basis for judging fare evasion, and makes the judgment of fare evasion more accurate.
- the embodiments of the present disclosure can be applied to a seamless-passage scenario. In such a scenario, ticket gates may be omitted, and the identity of a passenger can be confirmed by taking pictures of the passenger from different angles, thereby improving the accuracy of passenger identity recognition during seamless passage.
- the following describes the embodiments of the present disclosure by taking the scenario of rail transit passengers entering a station by face scanning as an example.
- the embodiments of the present disclosure provide a fare evasion detection scheme.
- the following improvements are made to the face-scanning entry scheme in the related art: 1) in addition to the face-scanning camera at the entry and exit passages, an overhead camera facing the aisle and a supporting monitoring system are added, which enables more comprehensive monitoring of the entry scene and, to a certain extent, avoids the problem that face information cannot be collected at the aisle, thereby reducing fare evasion.
- a deep-learning face detection and feature extraction neural network can be used to perform detection, localization, feature extraction, and face and human body recognition on the surveillance video captured by the camera at the entry channel and the surveillance video captured by the overhead surveillance camera. Combined with spatial calibration, face-human body association, and feature comparison, the surveillance video can be analyzed frame by frame to determine whether passengers have evaded fares, so that real-time detection of fare-evading passengers can be realized even under heavy passenger flow.
- the feature comparison method can be used to avoid feature ID splitting during pedestrian tracking; the size and position of the detected face frame can be used to filter out passengers who are far away, avoiding mistaken check-ins; directly reading data from the camera at the entry passage, and buffering the passenger face and human body data detected from the RTSP stream of the overhead camera, avoids large detection delays and facilitates temporal association matching of the face and human body data; and triggering by the infrared camera at the entry passage can store a high-definition picture of the moment the passenger enters the station, assisting in reproducing the scene when fare evasion is judged.
- an embodiment of the present disclosure provides a fare evasion detection method, and the method can be executed by an electronic device. As shown in FIG. 3B, the method may include the following steps:
- Step S301: respectively decode the surveillance video captured by the camera at the entry and exit passage and the surveillance video captured by the overhead camera to obtain the latest frame of each surveillance video;
- the overhead camera may be a high-definition network camera facing the aisle, for example, a camera suspended behind the aisle and facing the direction in which passengers enter the station.
- the video streams can be obtained from the camera at the passage and from the overhead camera respectively (multiple video streams can be supported), for example 1080p streams; each video stream is decoded into single-frame images, and each frame is converted into RGB format.
- the latest frame image of the surveillance video captured by the camera at the entry and exit passage may correspond to the first image in the foregoing embodiments, and the latest frame image of the surveillance video captured by the overhead camera may correspond to the second image in the foregoing embodiments.
- Step S302: respectively detect the human face and the human body in the latest frame image of each surveillance video, and then use the association model to match the human face and the human body;
- the latest frame image of each surveillance video can be input into a model that detects, tracks, and matches faces and human bodies, to obtain face detection frames and human body detection frames, as well as information such as the quality, position, size, and tracking ID of each face detection frame and human body detection frame.
- Step S303: screen the captured faces and human bodies according to the frame position and size of the matched face detection frames and human body detection frames and a preset spatial calibration;
- the spatial calibration may be a pre-calibrated mapping relationship between image coordinates and world coordinates. Based on the preset spatial calibration and the frame position and size information of the matched face detection frames and human body detection frames, the face and human body closest to the current passage can be selected. Based on the spatial calibration, the positions of the faces and human bodies in world coordinates can also be determined, so that faces and human bodies that are not in the preset check-in area can be filtered out of the frame images captured by the overhead camera.
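- One common way to realize such a calibration (a sketch under the assumption of a planar ground and four known point correspondences; all coordinate values here are illustrative) is a homography between image and world coordinates:

```python
import cv2
import numpy as np

# Four image points and their matching ground-plane (world) points, in the same order.
# The correspondences come from the pre-calibration step; the values are illustrative.
image_pts = np.float32([[100, 700], [1180, 700], [900, 300], [380, 300]])
world_pts = np.float32([[0.0, 0.0], [2.0, 0.0], [2.0, 5.0], [0.0, 5.0]])  # metres

H, _ = cv2.findHomography(image_pts, world_pts)

def image_to_world(x: float, y: float):
    """Map an image coordinate to world (ground-plane) coordinates via the calibration."""
    p = cv2.perspectiveTransform(np.float32([[[x, y]]]), H)[0][0]
    return float(p[0]), float(p[1])

def in_checkin_area(x: float, y: float, area=((0.0, 0.0), (2.0, 2.0))) -> bool:
    """Filter: keep only detections whose world position falls inside the preset check-in area."""
    (ax1, ay1), (ax2, ay2) = area
    wx, wy = image_to_world(x, y)
    return ax1 <= wx <= ax2 and ay1 <= wy <= ay2
```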
- Step S304: perform face recognition on the screened faces, generate the passenger's ride record, perform check-in and fare deduction, and save the check-in picture;
- the screened face images can be Base64-transcoded and then compared, through a remote call, with the pre-stored passenger data in the database to identify the passenger information, generate a ride record, check in and deduct the fare, and save the check-in picture at the same time.
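- A sketch of such a Base64 transcode and remote comparison call; the service endpoint and payload layout are hypothetical, since the disclosure only states that a remote call is made:

```python
import base64
import json
import urllib.request

COMPARE_URL = "http://face-service.internal/compare"  # hypothetical endpoint

def recognize_passenger(face_image_bytes: bytes) -> dict:
    """Base64-transcode a screened face crop and submit it to a remote comparison
    service that holds the pre-stored passenger database."""
    payload = json.dumps({"face": base64.b64encode(face_image_bytes).decode("ascii")})
    req = urllib.request.Request(
        COMPARE_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g., {"matched": true, "passenger_id": "..."}
```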
- the check-in picture can be the passenger's face picture or the scene picture at the time of check-in. For passengers who fail to check in, check-in verification can be performed.
- the overhead camera can also continuously output the face and human body information detected in the monitoring area, so that the feature comparison method can be used to associate face features with human body features during check-in verification. If a passenger does not check in successfully between entering and exiting the station, it can be judged that the passenger has evaded the fare.
- Step S305: in the case that the infrared camera detects the passenger, associate the on-site scene at the current moment, make a fare evasion judgment, and save the scene.
- the infrared camera detects the passenger; if there is no check-in verification record of the passenger within a period of time before and after the current moment, it is judged that the passenger has evaded the fare.
- the scene is preserved in the form of photos.
- FIG. 3C shows a schematic diagram of the implementation process of a fare evasion detection method provided by an embodiment of the present disclosure during the process of passengers entering and leaving the station.
- the fare evasion detection process includes detection 310 of the entry process and detection 320 of the exit process.
- in detection 310 of the entry process, face detection is performed on passengers entering the station; if a human face is detected, passenger identity verification is performed. In detection 320 of the exit process, if no face was detected during the passenger's entry process, it is determined that the passenger has evaded the fare.
- if identity verification fails and the passenger's departure is detected, it is determined that the passenger has evaded the fare; if verification succeeds and the passenger is then detected to have left, it is determined that the passenger has not evaded the fare. Regardless of whether the passenger has evaded the fare, the scene can be retained when the passenger leaves the station.
- the fare evasion detection method has the following beneficial effects: 1) using face verification failure as the basis for judging fare evasion makes the judgment more accurate, and the rules for judging fare evasion are more interpretable; 2) using infrared cameras to trigger fare evasion detection achieves a higher detection success rate; 3) no ticket gates or manual card swiping are required, saving costs and improving the convenience and efficiency of passengers entering the station; 4) fare evasion detection can be handled even during crowded hours.
- the embodiments of the present disclosure also provide a fare evasion detection device, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the fare evasion detection methods provided by the embodiments of the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding entries in the method section.
- FIG. 4 shows a schematic diagram of the composition and structure of a fare evasion detection device provided by an embodiment of the present disclosure. As shown in FIG. 4, the device includes:
- the first acquisition part 31 is configured to acquire the first identity feature obtained by identifying the first image of the target object
- the second acquiring part 32 is configured to acquire a second image when the first identity feature complies with a preset occlusion condition, wherein the first image and the second image are collected at different shooting angles for the target object;
- the identification part 33 is configured to identify the second image to obtain the second identity feature of the second image
- an association part 34 configured to associate the first identity feature and the second identity feature
- the determining part 35 is configured to identify the target object based on the associated second identity feature, and to confirm that the target object has not evaded the fare if the identity of the target object is successfully identified.
- the determining part 35 is further configured to identify the target object based on the associated second identity feature, and to confirm that the target object has evaded the fare when the identity of the target object cannot be identified.
- the determining part 35 is further configured to confirm that the target object has evaded the fare when the first identity feature cannot be identified from the first image.
- the device further includes: an infrared triggering part configured to receive an infrared signal, wherein the infrared signal is triggered when the target object leaves the identification area, and to collect and save, according to the receiving time of the infrared signal, pictures and/or videos of the target object passing through the recognition area.
- the infrared triggering part is further configured to search for identification records within the preset time period in which the receiving time falls, and to confirm that the target object has evaded the fare when no identification record of the target object exists within the preset time period.
- the occlusion conditions include at least one of the following: the first identity feature does not contain a face feature; the quality score of the face corresponding to the first identity feature is less than a preset quality score threshold; the occlusion area of the face corresponding to the first identity feature is greater than a preset area threshold.
- the identifying part 33 is configured to perform face and human body detection on the second image to obtain the detection result of at least one first object, and to screen, according to preset screening conditions, the detection result of the at least one first object to obtain the second identity feature of the target object.
- the screening conditions include at least one of the following: the position of the first object is within a preset recognition area; the image area of the first object in the second image is the largest; the first object is closest to a preset object.
- the identifying part 33 is configured to perform face detection and human body detection on the second image to obtain at least one human face and at least one human body, and to associate the at least one human face with the at least one human body in the second image to obtain the detection result of the at least one first object.
- the associating part 34 is configured to match the human body feature included in the first identity feature with the human body feature included in the second identity feature, and to associate the first identity feature with the second identity feature when the human body feature of the first identity feature matches the human body feature of the second identity feature.
- the associating part 34 is further configured to, in the case that the human body feature of the first identity feature does not match the human body feature of the second identity feature, re-acquire a second image based on the acquisition time of the first image, and associate the first identity feature with the second identity feature of the re-acquired second image, until a preset number of association attempts is reached or the human body feature of the first identity feature matches the human body feature of the second identity feature.
- the determining part 35 is configured to identify the target object based on the facial features included in the associated second identity feature.
- the device further includes: a generating part configured to generate and save an identification record of the target object when the identity of the target object is successfully identified, wherein the identification record includes one or more of identification time, user information, and identification location.
- the first image is captured at a first position at a shooting angle towards the target object
- the second image is captured at a second position at a shooting angle towards the target object
- the second position is located above the first position
- the functions or included parts of the apparatus provided in the embodiments of the present disclosure may be configured to execute the methods described in the above method embodiments; for specific implementations, refer to the descriptions of the above method embodiments, which are not repeated here.
- a "part" may be a part of a circuit, a part of a processor, a part of a program or software, etc., of course, a unit, a module or a non-modularity.
- Embodiments of the present disclosure further provide a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure further provides an electronic device, comprising: a processor; a memory configured to store instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the above method.
- Embodiments of the present disclosure also provide a computer program product, including computer-readable codes.
- when the computer-readable codes run on a device, a processor in the device executes instructions for implementing the method for detecting fare evasion in a vehicle provided by any of the above embodiments.
- Embodiments of the present disclosure further provide another computer program product configured to store computer-readable instructions, which, when executed, cause the computer to perform the operations of the method for detecting fare evasion in a vehicle provided by any of the foregoing embodiments.
- the electronic device may be provided as a terminal, server or other form of device.
- FIG. 5 shows a schematic structural diagram of an electronic device 800 provided by an embodiment of the present disclosure.
- the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
- the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an Input/Output (I/O) interface 812, a sensor component 814, and a communication component 816.
- the processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
- the processing component 802 can include one or more processors 820 to execute instructions to perform all or some of the steps of the methods described above. Additionally, processing component 802 may include one or more sections that facilitate interaction between processing component 802 and other components. For example, processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
- Memory 804 is configured to store various types of data to support operation at electronic device 800 . Examples of such data include instructions for any application or method operating on electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like.
- the memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as Static Random-Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
- the power supply component 806 provides power to the various components of the electronic device 800.
- the power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
- Multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
- the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
- the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
- Audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when the electronic device 800 is in an operating mode, such as a calling mode, a recording mode, and a voice recognition mode.
- the received audio signal may be stored in memory 804 or transmitted via communication component 816 .
- audio component 810 also includes a speaker configured to output audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
- the sensor component 814 includes one or more sensors configured to provide status assessments of various aspects of the electronic device 800.
- the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, for example, the display and the keypad of the electronic device 800; it can also detect a change in position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and changes in the temperature of the electronic device 800.
- the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
- the sensor component 814 may also include a light sensor, such as a Complementary Metal-Oxide-Semiconductor (CMOS) or Charge-Coupled Device (CCD) image sensor, for use in imaging applications.
- the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- Communication component 816 is configured to facilitate wired or wireless communication between electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as Wireless Fidelity (WiFi), the second-generation mobile communication technology (2G), the third-generation mobile communication technology (3G), or a combination thereof.
- the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 also includes a Near Field Communication (NFC) module to facilitate short-range communication.
- the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
- the electronic device 800 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, configured to perform the above method.
- a non-volatile computer-readable storage medium such as a memory 804 comprising computer program instructions executable by the processor 820 of the electronic device 800 to perform the above method is also provided.
- FIG. 6 shows a schematic structural diagram of an electronic device 1900 provided by an embodiment of the present disclosure.
- the electronic device 1900 may be provided as a server.
- electronic device 1900 includes a processing component 1922, which may include one or more processors, and a memory resource represented by memory 1932 configured to store instructions executable by processing component 1922, such as applications.
- An application program stored in memory 1932 may include one or more portions each corresponding to a set of instructions.
- the processing component 1922 is configured to execute instructions to perform the above-described methods.
- the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958.
- the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system introduced by Apple (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
- a non-volatile computer-readable storage medium, such as the memory 1932 comprising computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above method, is also provided.
- Embodiments of the present disclosure may provide a system, method and/or computer program product.
- the computer program product may comprise a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement the above-described method.
- a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- Computer-readable storage media may include (a non-exhaustive list): a portable computer disk, a hard disk, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM or flash memory), Static Random Access Memory (SRAM), Portable Compact Disc Read-Only Memory (CD-ROM), Digital Video Disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or raised structures in a groove on which instructions are stored, and any suitable combination of the above.
- Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through electrical wires.
- the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
- the computer program instructions for carrying out the above method may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- the computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or can be connected to an external computer (e.g., through the Internet using an Internet service provider).
- custom electronic circuits, such as programmable logic circuits, Field-Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuits can execute the computer-readable program instructions to implement the above method.
- These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- These computer-readable program instructions can also be stored in a computer-readable storage medium and cause a computer, a programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium on which the instructions are stored comprises an article of manufacture including instructions that implement various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- Computer-readable program instructions can also be loaded onto a computer, another programmable data processing apparatus, or other equipment to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other equipment to produce a computer-implemented process, so that the instructions executing on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowcharts or block diagrams may represent a section, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
- the computer program product can be specifically implemented by hardware, software or a combination thereof.
- in one optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).
- Embodiments of the present disclosure provide a method and device for detecting fare evasion by car, an electronic device, a storage medium, and a computer program product, wherein the method includes: acquiring a first identity feature obtained by identifying a first image of a target object; in the case that the first identity feature meets a preset occlusion condition, acquiring a second image, wherein the first image and the second image are captured at different shooting angles for the target object; identifying the second image to obtain a second identity feature of the second image; associating the first identity feature with the second identity feature; identifying the target object based on the associated second identity feature; and, if the identity of the target object is successfully identified, confirming that the target object has not evaded the fare.
- fare evasion behavior recognition can thus be realized in various identity recognition scenarios, and the accuracy of fare evasion detection in buses can be improved.
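- Putting the pieces together, a minimal end-to-end sketch of the claimed flow, reusing the hypothetical helpers sketched earlier (`meets_occlusion_condition`, `associate_with_retry`); the capture and face-identification callbacks are likewise illustrative stand-ins:

```python
def identity_confirmed(capture_first_image, capture_second_image,
                       extract_feature, identify_face):
    """Return True if the target object's identity is confirmed, i.e. the
    target object is deemed not to have evaded the fare."""
    first_image, first_capture_time = capture_first_image()
    first_feature = extract_feature(first_image)

    if not meets_occlusion_condition(first_feature):
        # The face in the first image is usable: identify directly.
        return identify_face(first_feature["face"]) is not None

    # Face occluded: fall back to the second shooting angle and associate the
    # two identity features through their human body features.
    second_feature = associate_with_retry(
        first_feature, capture_second_image, extract_feature, first_capture_time)
    if second_feature is None:
        return False  # association failed: identity not confirmed
    return identify_face(second_feature.get("face")) is not None
```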
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Image Analysis (AREA)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011529962.8A CN112597886A (zh) | 2020-12-22 | 2020-12-22 | Method and apparatus for detecting fare evasion by car, electronic device and storage medium |
| CN202011529962.8 | 2020-12-22 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022134388A1 true WO2022134388A1 (fr) | 2022-06-30 |
Family
ID=75200747
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2021/086701 Ceased WO2022134388A1 (fr) | 2020-12-22 | 2021-04-12 | Procédé et dispositif de détection d'évitement de tarif d'utilisateur, dispositif électronique, support de stockage et produit programme d'ordinateur |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN112597886A (fr) |
| WO (1) | WO2022134388A1 (fr) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117455442A (zh) * | 2023-12-25 | 2024-01-26 | 数据空间研究院 | Identity recognition method, system and storage medium based on statistical enhancement |
| CN120285536A (zh) * | 2025-06-12 | 2025-07-11 | 爱康普科技(大连)有限公司 | Running test system and method |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112597886A (zh) * | 2020-12-22 | 2021-04-02 | 成都商汤科技有限公司 | Method and apparatus for detecting fare evasion by car, electronic device and storage medium |
| CN113269124B (zh) * | 2021-06-09 | 2023-05-09 | 重庆中科云从科技有限公司 | Object recognition method, system, device and computer-readable medium |
| CN114078235A (zh) * | 2021-11-16 | 2022-02-22 | 交控科技股份有限公司 | Rail transit fare settlement method and apparatus based on image recognition |
| CN114332505B (zh) * | 2021-12-28 | 2025-09-23 | 北京爱笔科技有限公司 | Target matching method, event determination method, apparatus and computer device |
| CN114613072B (zh) * | 2022-04-18 | 2023-06-23 | 宁波小遛共享信息科技有限公司 | Vehicle-return control method and apparatus for shared vehicles, and electronic device |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107016348A (zh) * | 2017-03-09 | 2017-08-04 | 广东欧珀移动通信有限公司 | Face detection method combining depth information, detection apparatus and electronic apparatus |
| CN109726656A (zh) * | 2018-12-18 | 2019-05-07 | 广东中安金狮科创有限公司 | Monitoring device, tailgating monitoring method and apparatus thereof, and readable storage medium |
| CN109766755A (zh) * | 2018-12-06 | 2019-05-17 | 深圳市天彦通信股份有限公司 | Face recognition method and related product |
| CN112597886A (zh) * | 2020-12-22 | 2021-04-02 | 成都商汤科技有限公司 | Method and apparatus for detecting fare evasion by car, electronic device and storage medium |
Family Cites Families (22)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106503687B (zh) * | 2016-11-09 | 2019-04-05 | 合肥工业大学 | Surveillance video person identification system fusing multi-angle facial features, and method thereof |
| CN107633204B (zh) * | 2017-08-17 | 2019-01-29 | 平安科技(深圳)有限公司 | Face occlusion detection method, apparatus and storage medium |
| CN107633558A (zh) * | 2017-09-12 | 2018-01-26 | 浙江网新电气技术有限公司 | Self-service ticket checking method and device based on portrait-to-ID-card comparison |
| CN107886667A (zh) * | 2017-10-11 | 2018-04-06 | 深圳云天励飞技术有限公司 | Alarm method and apparatus |
| CN107992797B (zh) * | 2017-11-02 | 2022-02-08 | 中控智慧科技股份有限公司 | Face recognition method and related apparatus |
| CN107945321B (zh) * | 2017-11-08 | 2021-02-09 | 平安科技(深圳)有限公司 | Face-recognition-based security check method, application server and computer-readable storage medium |
| CN108399665A (zh) * | 2018-01-03 | 2018-08-14 | 平安科技(深圳)有限公司 | Face-recognition-based security monitoring method, apparatus and storage medium |
| CN108776768A (zh) * | 2018-04-19 | 2018-11-09 | 广州视源电子科技股份有限公司 | Image recognition method and apparatus |
| CN108805071A (zh) * | 2018-06-06 | 2018-11-13 | 北京京东金融科技控股有限公司 | Identity verification method and apparatus, electronic device, and storage medium |
| CN109117803B (zh) * | 2018-08-21 | 2021-08-24 | 腾讯科技(深圳)有限公司 | Face image clustering method, apparatus, server and storage medium |
| CN111161205B (zh) * | 2018-10-19 | 2023-04-18 | 阿里巴巴集团控股有限公司 | Image processing and face image recognition method, apparatus and device |
| CN109658572B (zh) * | 2018-12-21 | 2020-09-15 | 上海商汤智能科技有限公司 | Image processing method and apparatus, electronic device and storage medium |
| CN111382642A (zh) * | 2018-12-29 | 2020-07-07 | 北京市商汤科技开发有限公司 | Face attribute recognition method and apparatus, electronic device and storage medium |
| CN111460413B (zh) * | 2019-01-18 | 2023-06-20 | 阿里巴巴集团控股有限公司 | Identity recognition system, method and apparatus, electronic device, and storage medium |
| CN110348301B (zh) * | 2019-06-04 | 2024-07-05 | 平安科技(深圳)有限公司 | Video-based ticket inspection method, apparatus, computer device and storage medium |
| CN110263830B (zh) * | 2019-06-06 | 2021-06-08 | 北京旷视科技有限公司 | Image processing method, apparatus and system, and storage medium |
| CN110458062A (zh) * | 2019-07-30 | 2019-11-15 | 深圳市商汤科技有限公司 | Face recognition method and apparatus, electronic device and storage medium |
| CN110781821B (zh) * | 2019-10-25 | 2022-11-01 | 上海商汤智能科技有限公司 | UAV-based target detection method and apparatus, electronic device and storage medium |
| CN111768542B (zh) * | 2020-06-28 | 2022-04-19 | 浙江大华技术股份有限公司 | Gate control system, method, apparatus, server and storage medium |
| CN111768543A (zh) * | 2020-06-29 | 2020-10-13 | 杭州翔毅科技有限公司 | Face-recognition-based passage management method, device, storage medium and apparatus |
| CN111815675B (zh) * | 2020-06-30 | 2023-07-21 | 北京市商汤科技开发有限公司 | Target object tracking method and apparatus, electronic device and storage medium |
| CN111967311B (zh) * | 2020-07-06 | 2021-09-10 | 广东技术师范大学 | Emotion recognition method and apparatus, computer device and storage medium |
- 2020
  - 2020-12-22 CN CN202011529962.8A patent/CN112597886A/zh active Pending
- 2021
  - 2021-04-12 WO PCT/CN2021/086701 patent/WO2022134388A1/fr not_active Ceased
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107016348A (zh) * | 2017-03-09 | 2017-08-04 | 广东欧珀移动通信有限公司 | Face detection method combining depth information, detection apparatus and electronic apparatus |
| CN109766755A (zh) * | 2018-12-06 | 2019-05-17 | 深圳市天彦通信股份有限公司 | Face recognition method and related product |
| CN109726656A (zh) * | 2018-12-18 | 2019-05-07 | 广东中安金狮科创有限公司 | Monitoring device, tailgating monitoring method and apparatus thereof, and readable storage medium |
| CN112597886A (zh) * | 2020-12-22 | 2021-04-02 | 成都商汤科技有限公司 | Method and apparatus for detecting fare evasion by car, electronic device and storage medium |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117455442A (zh) * | 2023-12-25 | 2024-01-26 | 数据空间研究院 | Identity recognition method, system and storage medium based on statistical enhancement |
| CN117455442B (zh) * | 2023-12-25 | 2024-03-19 | 数据空间研究院 | Identity recognition method, system and storage medium based on statistical enhancement |
| CN120285536A (zh) * | 2025-06-12 | 2025-07-11 | 爱康普科技(大连)有限公司 | Running test system and method |
Also Published As
| Publication number | Publication date |
|---|---|
| CN112597886A (zh) | 2021-04-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11410001B2 (en) | Method and apparatus for object authentication using images, electronic device, and storage medium | |
| WO2022134388A1 (fr) | Procédé et dispositif de détection d'évitement de tarif d'utilisateur, dispositif électronique, support de stockage et produit programme d'ordinateur | |
| TWI775091B (zh) | 資料更新方法、電子設備和儲存介質 | |
| US11321575B2 (en) | Method, apparatus and system for liveness detection, electronic device, and storage medium | |
| US20210166040A1 (en) | Method and system for detecting companions, electronic device and storage medium | |
| CN108197586B (zh) | 脸部识别方法和装置 | |
| KR20210065178A (ko) | 생체 검출 방법 및 장치, 전자 기기 및 저장 매체 | |
| CN110287671B (zh) | 验证方法及装置、电子设备和存储介质 | |
| US11222231B2 (en) | Target matching method and apparatus, electronic device, and storage medium | |
| TWI766458B (zh) | 資訊識別方法及裝置、電子設備、儲存媒體 | |
| WO2020259073A1 (fr) | Procédé et appareil de traitement d'image, dispositif électronique et support de stockage | |
| EP2998960B1 (fr) | Procédé et dispositif de navigation vidéo | |
| CN110532957B (zh) | 人脸识别方法及装置、电子设备和存储介质 | |
| CN109951476B (zh) | 基于时序的攻击预测方法、装置及存储介质 | |
| WO2018228422A1 (fr) | Procédé, dispositif et système d'émission d'informations d'avertissement | |
| WO2022099989A1 (fr) | Procédés de commande de dispositif d'identification de vitalité et de contrôle d'accès, appareil, dispositif électronique, support de stockage, et programme informatique | |
| WO2022160616A1 (fr) | Procédé et appareil de détection de passages, dispositif électronique et support de stockage lisible par ordinateur | |
| WO2023024791A1 (fr) | Procédé et appareil d'ajustement de taux de trame, dispositif électronique, support de stockage et programme | |
| WO2017113930A1 (fr) | Procédé et dispositif de reconnaissance d'empreintes digitales | |
| CN109344703B (zh) | 对象检测方法及装置、电子设备和存储介质 | |
| TWI770531B (zh) | 人臉識別方法、電子設備和儲存介質 | |
| CN110781842A (zh) | 图像处理方法及装置、电子设备和存储介质 | |
| WO2022257306A1 (fr) | Procédé et appareil d'identification d'identité, dispositif électronique et support de stockage | |
| TWI751593B (zh) | 網路訓練方法及裝置、圖像處理方法及裝置、電子設備、電腦可讀儲存媒體及電腦程式 | |
| CN113506325B (zh) | 图像处理方法及装置、电子设备和存储介质 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21908400; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 21908400; Country of ref document: EP; Kind code of ref document: A1 |
| | 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.12.2023) |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 21908400; Country of ref document: EP; Kind code of ref document: A1 |