WO2019008789A1 - Analysis Device, Analysis Method, and Program - Google Patents
Analysis Device, Analysis Method, and Program
- Publication number
- WO2019008789A1 (PCT application PCT/JP2017/038132)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- coordinates
- detected
- face
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T7/97 — Determining parameters from multiple pictures
- G06V20/593 — Recognising seat occupancy
- G06V40/172 — Human faces; classification, e.g. identification
- G06T2207/30201 — Face (human being; person)
- G06T2207/30242 — Counting objects in image
- G06T2207/30248 — Vehicle exterior or interior
- G06T2207/30268 — Vehicle interior
Definitions
- the present invention relates to an analysis device, an analysis method, and a program.
- Patent Documents 1 and 2 disclose an apparatus for detecting the number of people in a vehicle.
- Patent Document 1 discloses an apparatus that analyzes an image obtained by photographing a vehicle from the side to detect a profile of a person, and determines the number of people in the vehicle based on the detection result.
- Patent Document 2 discloses an apparatus that analyzes each of a plurality of images obtained by continuously capturing a vehicle to detect persons, estimates which seat in the vehicle each detected person is sitting in based on how the vehicle appears in each image, and determines the number of people in the vehicle based on the number of seats determined to be occupied.
- An object of the present invention is to provide a new technology for detecting the number of people in a vehicle.
- According to the present invention, there is provided an analysis apparatus having: image analysis means for detecting a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and for detecting the coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location; grouping means for grouping the persons detected from the different images based on the coordinates; and counting means for counting the number of groups.
- There is also provided an analysis method in which a computer detects a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, detects the coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location, groups the persons detected from the different images based on the coordinates, and counts the number of groups.
- Further, there is provided a program for causing a computer to function as: image analysis means for detecting a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and for detecting the coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location; grouping means for grouping the persons detected from the different images based on the coordinates; and counting means for counting the number of groups.
- According to the present invention, a new technology for detecting the number of people in a vehicle is realized.
- the analysis device detects the number of people in the vehicle based on the analysis results of each of a plurality of images obtained by continuously capturing the same vehicle.
- the analysis device detects a predetermined part of a vehicle and a person from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions.
- the analysis device detects the coordinates of each of a plurality of persons in a coordinate system based on a predetermined location of the detected vehicle.
- the analysis device groups persons detected from different images based on the coordinates.
- Specifically, the analysis device groups together persons whose coordinates are close to each other.
- the analysis device counts the number of groups, and outputs the number of groups as the number of people on the vehicle.
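The overall flow described above (detect faces per image, express them in a common vehicle coordinate system, merge nearby detections across images, and count the resulting groups) can be sketched as follows. This is a minimal illustration under stated assumptions, not the disclosed implementation; the function name, data layout, and threshold value are all illustrative.

```python
import math

def count_occupants(detections_per_image, threshold=0.5):
    """Count people in a vehicle from per-image face detections.

    detections_per_image: list (one entry per image) of lists of
    (x, y) face coordinates, already expressed in the vehicle
    coordinate system (origin at a fixed vehicle location).
    Faces closer than `threshold` to an existing group are assumed
    to be the same person and merged into that group.
    """
    groups = []  # each group is a list of (x, y) coordinates
    for faces in detections_per_image:
        for face in faces:
            # attach the face to the nearest existing group, if close enough
            best = None
            best_d = threshold
            for g in groups:
                d = math.dist(face, g[-1])
                if d < best_d:
                    best, best_d = g, d
            if best is not None:
                best.append(face)
            else:
                groups.append([face])  # a newly visible person
    return len(groups)  # number of groups = number of occupants
```

Because grouping is driven by coordinates in the vehicle-anchored frame, the same person detected in several frames collapses into one group even though the vehicle moves through the camera's field of view.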
- Each functional unit included in the analysis apparatus is realized by any combination of hardware and software, centered on a central processing unit (CPU) of any computer, a memory, a program loaded into the memory, and a storage unit such as a hard disk storing the program (which can store not only programs stored in the apparatus in advance at the shipping stage, but also programs downloaded from storage media such as CDs (Compact Discs) or from servers on the Internet).
- FIG. 1 is a block diagram illustrating the hardware configuration of the analysis apparatus of the present embodiment.
- the analysis apparatus includes a processor 1A, a memory 2A, an input / output interface 3A, a peripheral circuit 4A, and a bus 5A.
- Peripheral circuit 4A includes various modules.
- Note that the analysis device may be composed of a plurality of physically and/or logically divided devices. In this case, each of the plurality of devices may have a processor 1A, a memory 2A, an input/output interface 3A, a peripheral circuit 4A, and a bus 5A.
- the bus 5A is a data transmission path for the processor 1A, the memory 2A, the peripheral circuit 4A, and the input / output interface 3A to mutually transmit and receive data.
- the processor 1A is an arithmetic processing unit such as a CPU or a graphics processing unit (GPU), for example.
- the memory 2A is, for example, a memory such as a random access memory (RAM) or a read only memory (ROM).
- The input/output interface 3A includes an interface for acquiring information from an input device (e.g., a keyboard, a mouse, a microphone, a physical key, a touch panel display, a code reader, etc.), an external device, an external server, an external sensor, etc., and an interface for outputting information to an output device (e.g., a display, a speaker, a printer, a mailer, etc.), an external device, an external server, etc.
- the processor 1A can issue an instruction to each module and perform an operation based on the result of the operation.
- the analysis device 10 includes an image analysis unit 11, a grouping unit 12, and a count unit 13.
- the image analysis unit 11 detects a predetermined part of a vehicle and a person from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions. In the present embodiment, a human face is detected. Then, the image analysis unit 11 detects the coordinates of each of a plurality of detected faces in a two-dimensional coordinate system with a predetermined portion of the detected vehicle as a reference (eg, an origin).
- a two-dimensional coordinate system having a predetermined position of the detected vehicle as an origin may be referred to as a "vehicle coordinate system".
- the sensor 101 and the camera 102 are installed along the road through which the vehicle passes.
- the sensor 101 detects that the vehicle has passed a predetermined position on the road.
- the sensor 101 may include means for emitting light (e.g., a laser) in a predetermined direction (e.g., the direction of the arrow shown) and means for receiving the reflected light.
- the passage of the vehicle may be detected by detecting the presence of an obstacle that impedes the progress of the emitted light based on how the reflected light is received.
- it may be detected by a weight sensor or the like that the vehicle has passed a predetermined position.
- When the sensor 101 detects that a vehicle has passed, it inputs a signal indicating the detection to the camera 102.
- the camera 102 captures an image according to the detection.
- the camera 102 may continuously shoot a predetermined number of still images (e.g., several dozens to hundreds of images per second) according to the detection by the sensor 101, and generate a plurality of still image files.
- the camera 102 may capture a moving image for a predetermined shooting time according to the detection by the sensor 101, and may generate a moving image file including a plurality of frames.
- the predetermined number and the predetermined photographing time can be arbitrarily determined in advance according to the specifications of the camera 102, the moving speed of the vehicle, and the like.
- the position and the direction of the camera 102 are set so as to photograph the vehicle detected by the sensor 101 by the photographing according to the detection by the sensor 101.
- the image file generated by the camera 102 is input to the analysis device 10 in real time processing or batch processing.
- the analysis device 10 and the camera 102 may be configured to be communicable by any communication means.
- the image analysis unit 11 sets a plurality of images obtained by photographing the same vehicle as processing targets, and detects a vehicle and a human face from each of the plurality of images. For example, in accordance with a single detection by the sensor 101, a predetermined number of still images captured by the camera 102 or a moving image for a predetermined imaging time can be set as a processing target.
- the detection of the faces of vehicles and people may be realized by template matching. Also, it may be realized by a detector constructed by machine learning using a large number of images. As a detector, for example, SVM (support vector machine), LDA (linear discriminant analysis), GLVQ (generalized learning vector quantization), neural network or the like can be used.
- the face of the person to be detected may be the face of the person inside the detected vehicle, that is, the face present inside the contour of the detected vehicle. By doing this, it is possible to detect only the face of the person who is in the vehicle and to exclude the face of a person outside the vehicle, such as a passerby or a traffic guide, from the detection targets.
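The filtering described above (keeping only faces inside the detected vehicle) can be approximated with a bounding-box containment test. This is a simplified sketch: the disclosure speaks of the vehicle's contour, while the box test and the names below are illustrative assumptions.

```python
def faces_inside_vehicle(face_boxes, vehicle_box):
    """Keep only face detections whose box lies inside the vehicle box.

    Boxes are (left, top, right, bottom) in image pixels. Using the
    vehicle's bounding box as a stand-in for its contour excludes
    faces of passersby or traffic guides outside the vehicle.
    """
    vl, vt, vr, vb = vehicle_box
    return [
        (l, t, r, b)
        for (l, t, r, b) in face_boxes
        if l >= vl and t >= vt and r <= vr and b <= vb
    ]
```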
- After detecting the vehicle from each of the plurality of images, the image analysis unit 11 detects a predetermined location and predetermined directions of the detected vehicle from each of the plurality of images. Then, for each image, a two-dimensional coordinate system (vehicle coordinate system) is defined with the detected predetermined location as the origin and the detected predetermined directions as the x-axis and y-axis directions.
- FIG. 4 shows an example of a vehicle coordinate system set in one image F.
- In this example, the rear end of the bumper attached to the back of the vehicle body is the predetermined location, and the longitudinal direction and the height direction of the vehicle are the predetermined directions.
- a vehicle coordinate system is defined in which the rear end of the bumper is the origin, the longitudinal direction of the vehicle is the x axis, and the height direction is the y axis.
- After setting the vehicle coordinate system for each image, the image analysis unit 11 detects, in that coordinate system, the coordinates of the face of each person detected from each image.
- the coordinates of each of the plurality of faces can be obtained by a method according to the method of detecting the human face.
- For example, when a face is detected as an area B in the image, a representative point of this area B (e.g., its center, upper-right, upper-left, lower-right, or lower-left point) can be determined as the coordinates of the face present in the area B. Alternatively, when a face is detected by detecting the eyes, nose, and mouth, coordinates of the eyes, nose, and mouth (e.g., representative coordinates of the areas occupied by the eyes, nose, and mouth) can be determined as the coordinates of the face having those eyes, nose, and mouth.
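Expressing a detected face in the vehicle coordinate system amounts to projecting its offset from the origin (e.g., the bumper rear end) onto the vehicle's axes. A minimal sketch, assuming the axes are supplied as unit vectors in image space; the names are illustrative:

```python
def to_vehicle_coords(face_xy, origin_xy, x_axis, y_axis):
    """Express an image point in the vehicle coordinate system.

    origin_xy: image position of the reference location (e.g. the
    rear end of the bumper); x_axis, y_axis: unit vectors (in image
    pixels) along the vehicle's longitudinal and height directions.
    Returns the face position projected onto those axes.
    """
    dx = face_xy[0] - origin_xy[0]
    dy = face_xy[1] - origin_xy[1]
    # project the offset onto each axis (dot products)
    u = dx * x_axis[0] + dy * x_axis[1]
    v = dx * y_axis[0] + dy * y_axis[1]
    return (u, v)
```

Because each image defines its own origin and axes from the same vehicle location, coordinates computed this way are comparable across images even though the vehicle moves.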
- By the image analysis by the image analysis unit 11, an analysis result as shown in FIG. 5 is obtained. In the analysis result, a face ID (identifier) is assigned to each detected face, and each face ID is associated with the coordinates of that face in the vehicle coordinate system and with a frame ID identifying the image in which the face was detected.
- the grouping unit 12 described below groups a plurality of face IDs attached to the same person's face.
- the grouping unit 12 groups a plurality of people detected from the plurality of images based on the coordinates of the vehicle coordinate system.
- the grouping unit 12 groups the plurality of faces detected from the plurality of images based on the coordinates of the vehicle coordinate system.
- the grouping unit 12 groups people detected from different images based on coordinates.
- the grouping unit 12 groups together those having similar coordinates in the vehicle coordinate system.
- the grouping unit 12 can group a plurality of faces detected from a plurality of images using the distance between coordinates.
- data as shown in FIG. 6 is obtained.
- the obtained data may be stored in a storage medium such as a memory.
- IDs of groups to which each of the faces belongs are associated with each of the plurality of faces detected by the image analysis unit 11.
- the same group ID is associated with the face ID of the face whose coordinates in the vehicle coordinate system are close.
- grouping may be performed by other methods.
- First, the grouping unit 12 causes each of one or more faces detected from the first image to be processed (e.g., the first captured image) among the plurality of images to belong to a different group.
- the face ID of each face detected from the first image is associated with a different group ID and stored in a storage medium such as a memory.
- When M faces (M is an integer of 1 or more) are detected from the first image, the grouping unit 12 generates M groups and causes each of the M faces to belong to a different one of them. For example, M new group IDs are generated, and a different group ID is associated with the ID of each of the M faces detected from the first image (see FIG. 6).
- Next, the grouping unit 12 calculates the distance between the coordinates of each of the one or more faces detected from the first image and the coordinates of each of the one or more faces detected from the second image captured immediately after it. Then, the grouping unit 12 causes the two faces being compared to belong to the same group according to a comparison between the calculated distance and a predetermined value. For example, when the calculated distance satisfies the distance condition of being less than the predetermined value, the grouping unit 12 causes the two faces being processed to belong to the same group.
- the predetermined value can be arbitrarily determined in advance.
- For example, when the 2-1st face detected from the second image and the 1-1st face detected from the first image satisfy the distance condition, the grouping unit 12 causes the 2-1st face to belong to the same group as the 1-1st face.
- the same group ID is associated with the face ID of the 1-1 face and the face ID of the 2-1 face (see FIG. 6).
- On the other hand, if the 2-2nd face detected from the second image does not satisfy the distance condition with any of the faces detected from the first image, the grouping unit 12 generates a new group and causes the 2-2nd face to belong to that group. For example, a new group ID is generated and associated with the face ID of the 2-2nd face (see FIG. 6).
- Thereafter, the grouping unit 12 performs the above process on every pair of the Nth image and the (N + 1)th image captured immediately after it, thereby grouping the plurality of faces.
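The procedure of Specific Example 1 can be sketched as follows; the identifiers, data layout, and threshold are illustrative assumptions, not part of the disclosure.

```python
import math

def group_faces_pairwise(images, threshold):
    """Specific Example 1 sketch: group faces across consecutive images.

    images: list of lists of (face_id, (x, y)) in vehicle coordinates.
    A face detected in image N+1 joins the group of a face in image N
    whose Euclidean distance is below `threshold`; otherwise a new
    group is created. Returns {face_id: group_id}.
    """
    group_of = {}
    next_gid = 0
    prev = []  # faces of the previous (Nth) image
    for faces in images:
        for fid, xy in faces:
            assigned = None
            for pfid, pxy in prev:
                if math.dist(xy, pxy) < threshold:
                    assigned = group_of[pfid]  # same person as in image N
                    break
            if assigned is None:
                assigned = next_gid  # no nearby face: start a new group
                next_gid += 1
            group_of[fid] = assigned
        prev = faces
    return group_of
```

Counting distinct group IDs in the returned mapping then gives the number of occupants.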
- First, the grouping unit 12 causes each of one or more faces detected from the first image to be processed (e.g., the first captured image) among the plurality of images to belong to a different group.
- the face ID of each face detected from the first image is associated with a different group ID and stored in a storage medium such as a memory.
- When M faces (M is an integer of 1 or more) are detected from the first image, the grouping unit 12 generates M groups and causes each of the M faces to belong to a different one of them. For example, M new group IDs are generated, and a different group ID is associated with the ID of each of the M faces detected from the first image (see FIG. 6).
- Next, the grouping unit 12 calculates, between each of the one or more faces detected from the first image and each of the one or more faces detected from the second image captured immediately after it, the distance in the x-axis direction and the distance in the y-axis direction.
- The grouping unit 12 causes two faces satisfying the distance condition "the distance in the x-axis direction is less than a first predetermined value and the distance in the y-axis direction is less than a second predetermined value" to belong to the same group. That is, when the distance condition is satisfied, the grouping unit 12 causes the two faces being processed to belong to the same group.
- the first predetermined value and the second predetermined value can be arbitrarily determined in advance.
- For example, when the 2-1st face detected from the second image and the 1-1st face detected from the first image satisfy the above distance condition, the grouping unit 12 causes the 2-1st face to belong to the same group as the 1-1st face.
- the same group ID is associated with the face ID of the 1-1 face and the face ID of the 2-1 face (see FIG. 6).
- On the other hand, if the 2-2nd face detected from the second image does not satisfy the distance condition with any of the faces detected from the first image, the grouping unit 12 generates a new group and causes the 2-2nd face to belong to that group. For example, a new group ID is generated and associated with the face ID of the 2-2nd face (see FIG. 6).
- Thereafter, the grouping unit 12 performs the above process on every pair of the Nth image and the (N + 1)th image captured immediately after it, thereby grouping the plurality of faces.
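The per-axis distance condition of Specific Example 2 can be expressed as a small predicate; the parameter names and limits are illustrative assumptions.

```python
def axis_distance_ok(a, b, x_limit, y_limit):
    """Specific Example 2 condition: compare per-axis distances.

    a, b: (x, y) coordinates in the vehicle coordinate system. Two
    faces may be grouped when the x-direction distance is below
    x_limit AND the y-direction distance is below y_limit.
    """
    return abs(a[0] - b[0]) < x_limit and abs(a[1] - b[1]) < y_limit
```

Separate limits per axis allow, for example, tolerating more drift along the vehicle's length than along its height.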
- First, the grouping unit 12 causes each of one or more faces detected from the first image to be processed (e.g., the first captured image) among the plurality of images to belong to a different group.
- the face ID of each face detected from the first image is associated with a different group ID and stored in a storage medium such as a memory.
- When M faces (M is an integer of 1 or more) are detected from the first image, the grouping unit 12 generates M groups and causes each of the M faces to belong to a different one of them. For example, M new group IDs are generated, and a different group ID is associated with the ID of each of the M faces detected from the first image (see FIG. 6).
- the grouping unit 12 determines representative coordinates for each group.
- While the number of members of a group is one, for example, the coordinates of that member are set as the representative coordinates of the group.
- the member is a face belonging to each group, that is, a face identified by a face ID associated with each group ID.
- the number of face IDs associated with the same group ID is the number of members.
- the coordinates of the members are coordinates of the vehicle coordinate system of each face.
- Next, the grouping unit 12 calculates the distance between the representative coordinates of each of the plurality of groups and the coordinates of a person not yet belonging to any group. That is, the grouping unit 12 calculates the distance between each of the one or more faces detected from the second image captured immediately after the first image and the representative coordinates of each of the M groups. Then, the grouping unit 12 causes a face not yet belonging to a group to belong to a given group according to a comparison between the calculated distance and a predetermined value. That is, the grouping unit 12 causes each of the one or more faces detected from the second image to belong to a group satisfying the distance condition "the distance is less than the predetermined value". For example, the face ID of each of the one or more faces detected from the second image is associated with the group ID of the group that satisfies the distance condition (see FIG. 6).
- the predetermined value can be arbitrarily determined in advance.
- When a new member is added to a group, the grouping unit 12 redetermines the representative coordinates of that group. For example, a statistical value (e.g., an average) of the x coordinates of the plurality of members may be used as the x coordinate of the representative coordinates, and a statistical value (e.g., an average) of the y coordinates may be used as the y coordinate of the representative coordinates, but the present invention is not limited thereto.
- When there is no group satisfying the distance condition, the grouping unit 12 generates a new group corresponding to the face detected from the second image, and causes the face to belong to that group. For example, a new group ID is generated, and the face ID of the face is associated with the group ID (see FIG. 6). Further, the grouping unit 12 sets the coordinates of the face as the representative coordinates of the new group.
- the grouping unit 12 performs the above process on all of the plurality of images to be processed, and groups the plurality of faces.
- In this manner, the grouping unit 12 calculates, based on the coordinates of the vehicle coordinate system, the distance between the representative coordinates of each of the plurality of groups to which one or more faces belong and a face not yet belonging to any group, and causes that face to belong to a group satisfying the distance condition "the distance is less than a predetermined value".
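The representative-coordinate grouping of Specific Example 3 can be sketched as follows, using the mean of member coordinates as the statistical value for the representative (one of the options mentioned above). Names, data layout, and the threshold are illustrative assumptions.

```python
import math

def group_with_representatives(images, threshold):
    """Specific Example 3 sketch: match faces to group representatives.

    images: list of lists of (face_id, (x, y)) in vehicle coordinates.
    Each group keeps a representative coordinate, recomputed as the
    mean of its members when a face joins; a face joins the nearest
    group within `threshold`, otherwise it starts a new group.
    Returns {face_id: group_id}.
    """
    reps = []     # representative (x, y) per group
    members = []  # member coordinates per group
    group_of = {}
    for faces in images:
        for fid, xy in faces:
            # find the nearest representative within the threshold
            best, best_d = None, threshold
            for gid, rep in enumerate(reps):
                d = math.dist(xy, rep)
                if d < best_d:
                    best, best_d = gid, d
            if best is None:
                best = len(reps)       # no group close enough: new group
                reps.append(xy)
                members.append([xy])
            else:
                members[best].append(xy)
                # redetermine the representative as the member average
                xs = [p[0] for p in members[best]]
                ys = [p[1] for p in members[best]]
                reps[best] = (sum(xs) / len(xs), sum(ys) / len(ys))
            group_of[fid] = best
    return group_of
```

Updating the representative from all members makes the matching more stable than comparing against a single earlier detection, since noise in any one frame is averaged out.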
- First, the grouping unit 12 causes each of one or more faces detected from the first image to be processed (e.g., the first captured image) among the plurality of images to belong to a different group.
- the face ID of each face detected from the first image is associated with a different group ID and stored in a storage medium such as a memory.
- When M faces (M is an integer of 1 or more) are detected from the first image, the grouping unit 12 generates M groups and causes each of the M faces to belong to a different one of them. For example, M new group IDs are generated, and a different group ID is associated with the ID of each of the M faces detected from the first image (see FIG. 6).
- Next, the grouping unit 12 determines representative coordinates for each group. While the number of members is one, for example, the coordinates of that member are set as the representative coordinates of the group.
- Next, the grouping unit 12 calculates the distance between the representative coordinates of each of the plurality of groups and the coordinates of a person not yet belonging to any group. That is, the grouping unit 12 calculates, between each of the one or more faces detected from the second image captured immediately after the first image and the representative coordinates of each of the M groups, the distance in the x-axis direction and the distance in the y-axis direction. Then, the grouping unit 12 causes each of the one or more faces detected from the second image to belong to a group satisfying the distance condition "the distance in the x-axis direction is less than a first predetermined value and the distance in the y-axis direction is less than a second predetermined value". For example, the face ID of each of the one or more faces detected from the second image is associated with the group ID of the group that satisfies the distance condition (see FIG. 6). The first predetermined value and the second predetermined value can be arbitrarily determined in advance.
- When a new member is added to a group, the grouping unit 12 redetermines the representative coordinates of that group. For example, a statistical value (e.g., an average) of the x coordinates of the plurality of members may be used as the x coordinate of the representative coordinates, and a statistical value (e.g., an average) of the y coordinates may be used as the y coordinate of the representative coordinates, but the present invention is not limited thereto.
- When there is no group satisfying the distance condition, the grouping unit 12 generates a new group corresponding to the face detected from the second image, and causes the face to belong to that group. For example, a new group ID is generated, and the face ID of the face is associated with the group ID (see FIG. 6). Further, the grouping unit 12 sets the coordinates of the face as the representative coordinates of the new group.
- the grouping unit 12 performs the above process on all of the plurality of images to be processed, and groups the plurality of faces.
- In this manner, the grouping unit 12 calculates, based on the coordinates of the vehicle coordinate system, the distance in the x-axis direction and the distance in the y-axis direction between the representative coordinates of each of the plurality of groups to which one or more faces belong and a face not yet belonging to any group, and causes that face to belong to a group satisfying the distance condition "the distance in the x-axis direction is less than the first predetermined value and the distance in the y-axis direction is less than the second predetermined value".
- the counting unit 13 counts the number of groups. For example, using data as shown in FIG. 6, the number of group IDs is counted.
- the analysis device 10 outputs the number of groups as the number of people in the vehicle. For example, the analysis device 10 stores the count number in the storage device in association with the image or the like.
- In addition, in accordance with a user operation, the analysis device 10 may extract from the storage device and output an image obtained by photographing a vehicle in which the number of people does not satisfy a predetermined condition (e.g., two or more, three or more, four or more, etc.).
- As described above, the image analysis unit 11 of the present embodiment sets, as processing targets, a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and detects a vehicle and persons (e.g., faces) from each of the plurality of images. Then, the image analysis unit 11 detects the coordinates of each of the plurality of persons (e.g., the coordinates of each of a plurality of faces) in a coordinate system (the vehicle coordinate system) with a predetermined location of the detected vehicle as a reference (e.g., an origin).
- The plurality of images are, for example, images taken continuously by the camera 102 shown in FIG. 3 in response to one detection by the sensor 101.
- the grouping unit 12 groups persons (for example, faces) detected from different images based on the coordinates of the vehicle coordinate system.
- the grouping unit 12 groups together those having similar coordinates in the vehicle coordinate system. As a result, the faces of the same person existing across a plurality of images are grouped.
- the counting unit 13 counts the number of groups.
- the analysis device 10 outputs the number of groups as the number of people in the vehicle. For example, the analysis device 10 stores the count number in association with an image or the like.
- the detection result (number of passengers) can be determined based on the analysis result of each of the plurality of images.
- For example, at a first timing, a person A may be hidden by an obstacle (a part of the vehicle, another occupant, etc.) and therefore not be included in the image taken at that timing. However, in an image taken at a second timing thereafter, the vehicle has moved and the positional relationship between the camera, person A, and the obstacle has changed; as a result, person A, who was not captured at the first timing, may be included in the image. Because the present embodiment determines the detection result based on the analysis results of a plurality of such images, the number of people can be detected accurately even in this case.
- the analysis device 10 of the present embodiment is different from the first embodiment in the functional configuration of the image analysis unit 11 and the grouping unit 12. This will be described below.
- the hardware configuration of the analysis device 10 is the same as that of the first embodiment.
- FIG. 2 shows an example of a functional block diagram of the analysis device 10 of the present embodiment.
- the analysis device 10 includes an image analysis unit 11, a grouping unit 12, and a count unit 13.
- the configuration of the counting unit 13 is the same as that of the first embodiment.
- the image analysis unit 11 has a function of extracting the feature amount of the image of the human face included in the image.
- Person included in image means a person detected from an image by image analysis.
- When, as a result of comparing distances with the predetermined value, the face of a first person (hereinafter sometimes referred to as the "first face") could belong to a plurality of groups, the grouping unit 12 of the present embodiment determines the group to which the first face belongs based on the feature amount of the image of the first face and the feature amounts of the images of the faces belonging to each of those groups.
- In a case of Specific Example 1, the grouping unit 12 causes each of the one or more faces detected from the second image to belong to the same group as the face, among the one or more faces detected from the immediately preceding first image, that satisfies the above distance condition.
- the grouping unit 12 determines a group to which the first face belongs based on the feature amount of the face image.
- the image analysis unit 11 extracts feature amounts of images of the 1-1st face, the 1-2nd face, and the 2-1st face. Then, the grouping unit 12 collates the feature amount of the image of the 2-1 face with the feature amount of the image of the 1-1 face. Similarly, the grouping unit 12 collates the feature amount of the image of the 2-1 face with the feature amount of the image of the 1-2 face.
- Then, the grouping unit 12 causes the 2-1st face to belong to the same group as whichever of the 1-1st face and the 1-2nd face is more similar to it. That is, as a result of the matching, the 2-1st face is made to belong to the same group as the face whose image feature amount has the higher degree of similarity to that of the 2-1st face. For example, if that face is the 1-1st face, the same group ID is associated with the face ID of the 1-1st face and the face ID of the 2-1st face (see FIG. 6).
- when the degree of similarity with the 2-1 face exceeds the predetermined threshold for neither the 1-1 face nor the 1-2 face, the grouping unit 12 may cause the 2-1 face to belong to a new group. That is, the 2-1 face may belong to a group different from both the 1-1 face and the 1-2 face. For example, the grouping unit 12 generates a new group ID and associates the group ID with the face ID of the 2-1 face (see FIG. 6).
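The matching step described above can be sketched as follows. This is a minimal illustration, not the patent's method: the similarity measure (cosine similarity here), the feature-vector format, and the threshold value are all assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def choose_group(new_feat, candidates, threshold):
    """Assign a newly detected face to the group of its most similar candidate.

    candidates: list of (group_id, feature_vector) pairs, e.g. the faces
    detected from the immediately preceding image.  Returns the group ID of
    the most similar candidate, or None when no similarity exceeds the
    threshold -- the caller then generates a new group ID (cf. FIG. 6).
    """
    best_id, best_sim = None, threshold
    for group_id, feat in candidates:
        sim = cosine_similarity(new_feat, feat)
        if sim > best_sim:
            best_id, best_sim = group_id, sim
    return best_id
```

For example, if the 2-1 face is far more similar to the 1-1 face than to the 1-2 face, `choose_group` returns the 1-1 face's group ID; when neither similarity exceeds the threshold it returns `None`, and a new group is created.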
- the image analysis unit 11 extracts the feature amounts of the first face and of each of the plurality of faces that satisfy the distance condition with the first face. Note that the image analysis unit 11 does not have to extract the feature amounts of the other face images. Then, the grouping unit 12 determines which group the first face is to belong to based on the similarity of the feature amounts of the face images.
- the grouping unit 12 causes each of the one or more faces detected from an image to belong to the group, among the plurality of groups existing at that time, whose representative coordinates satisfy the above distance condition with the face.
- the grouping unit 12 determines the group to which the first face belongs based on the feature amount of the face image. Specifically, among the plurality of groups that satisfy the distance condition, the grouping unit 12 causes the first face to belong to the group whose member's face is most similar to the first face.
- when a group has a plurality of members, the grouping unit 12 may determine a representative member from the group and compute the degree of similarity between the determined representative member and the first face. The grouping unit 12 can determine the representative member based on the imaging timing of the image including the first face. For example, the grouping unit 12 may use, as the representative member, the member included in the image whose photographing timing is closest to that of the image including the first face. Alternatively, a member included in an image captured within a predetermined time of the imaging timing of the image including the first face, or a member included in an image captured within a predetermined number of frames of that image, may be used as the representative member.
- the predetermined time and the predetermined number of frames are preferably small.
- when the photographing timings are close, the faces are likely to have been captured with the same orientation, the same expression, and so on. It can therefore be accurately determined whether the two faces belong to the same person.
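The timing-based selection of a representative member can be sketched as follows. Member IDs and the timestamp representation are hypothetical; the patent only requires that the member come from the image whose photographing timing is closest, or within a predetermined gap.

```python
def pick_representative(members, target_time, max_gap=None):
    """Pick the group member whose image was captured closest in time to
    target_time (the capture time of the image containing the first face).

    members: non-empty list of (member_id, capture_time) pairs.
    Returns the chosen member_id, or None when even the closest member is
    further away than max_gap (when a maximum gap is given).
    """
    best_id, best_time = min(members, key=lambda m: abs(m[1] - target_time))
    if max_gap is not None and abs(best_time - target_time) > max_gap:
        return None
    return best_id
```

The similarity comparison of the first face is then performed only against this representative member rather than against every member of the group.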
- the other configuration of the image analysis unit 11 and the grouping unit 12 is the same as that of the first embodiment.
- according to the analysis device 10 of the present embodiment, in the process of grouping the same person across a plurality of images, if there is a face (first face) whose group cannot be determined based on the distance condition alone, the group to which that face belongs can be determined based on the feature amount of its image.
- the distance condition is used preferentially to group the same person across a plurality of images, and the feature amount of the face image is used only when the determination cannot be made under the distance condition alone.
- the analysis apparatus 10 according to the present embodiment can therefore reduce the processing load on the computer compared to grouping based only on the feature amounts of the face images without using the distance condition. Further, the grouping accuracy is improved compared to grouping based only on the distance condition without using the feature amounts of the face images.
- regardless of which of the above methods 1 to 4 is used for grouping, the process of determining whether a face image is similar to that of the first face can be performed between the first face and faces included in images whose shooting timings are close to that of the image including the first face.
- when the imaging timings are close, the face orientation, the expression, and the like are unlikely to differ or change significantly, compared with the case where the imaging timings are far apart. Therefore, the group to which a face whose image is similar to that of the first face belongs can be accurately detected, and the first face can be made to belong to that group.
- the image processed by the analysis device 10 is taken from the side of the vehicle, as shown in FIG. 3.
- the image may be taken from diagonally in front of, from directly beside, or from diagonally behind the vehicle.
- the shooting direction can be adjusted by adjusting the direction of the optical axis of the camera.
- the moving vehicle is photographed with the direction of the camera fixed. Therefore, the relative relationship between the optical axis of the camera and the vehicle at the shooting timing changes for each image. As a result, the horizontal interval in the image between two persons seated side by side in the vehicle (for example, in the driver's seat and the front passenger seat, or next to each other in the rear seat) appears different in each image.
- FIG. 8 shows three images F taken continuously.
- the arrows indicate the shooting order.
- the transition in how two persons (persons A1 and A2) seated side by side in the vehicle appear in the image is shown.
- the faces of the two persons are extracted and shown, and the other parts such as the vehicle are omitted.
- the traveling direction of the vehicle is from the left to the right in the figure.
- the positions of the persons A1 and A2 (the position of the vehicle) in the image shift from left to right as shown ((1) → (2) → (3)).
- FIG. 8 is an example of photographing from diagonally in front of the vehicle.
- in this case, the vehicle and the camera approach each other over time during continuous imaging.
- the horizontal interval Dx in the images of the persons A1 and A2 gradually decreases.
- in the case of photographing from diagonally behind the vehicle, the vehicle and the camera move away from each other as time passes during shooting (while the vehicle is included in the image).
- in this case, the horizontal interval Dx in the image between the persons A1 and A2 seated side by side in the vehicle gradually increases.
- in the case of photographing from directly beside the vehicle, the vehicle and the camera first approach each other with the passage of time and then move away from each other during shooting.
- in this case, the horizontal interval Dx in the image between the persons A1 and A2 seated side by side in the vehicle gradually decreases and then gradually increases.
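A simple pinhole-camera model reproduces the last of these cases (camera directly beside the vehicle's path, occupants separated across the vehicle width, which is the depth axis for this camera). The focal length, depth, and seat separation below are illustrative assumptions, not values from the patent:

```python
def project_u(f, point):
    """Horizontal pinhole projection u = f * X / Z, camera at the origin
    looking along +Z."""
    x, _, z = point
    return f * x / z

def dx_over_pass(f=1000.0, depth=10.0, width=0.8,
                 xs=(-6.0, -3.0, 0.0, 3.0, 6.0)):
    """Interval Dx between two occupants seated side by side (offset `width`
    across the vehicle, i.e. along the depth axis here) as the car drives
    past the camera through the positions xs."""
    dxs = []
    for x in xs:
        u_near = project_u(f, (x, 0.0, depth))         # occupant nearer the camera
        u_far = project_u(f, (x, 0.0, depth + width))  # occupant farther away
        dxs.append(abs(u_near - u_far))
    return dxs
```

With these numbers, Dx shrinks as the car approaches the point directly in front of the camera and grows again after it passes, matching the decrease-then-increase behavior described above.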
- the analysis device 10 improves the accuracy of the process of grouping the same person across a plurality of images based on the distance condition by performing the process in consideration of the above-described phenomenon.
- the hardware configuration of the analysis device 10 is the same as in the first and second embodiments.
- FIG. 2 shows an example of a functional block diagram of the analysis device 10 of the present embodiment.
- the analysis device 10 includes an image analysis unit 11, a grouping unit 12, and a count unit 13.
- the configuration of the counting unit 13 is the same as in the first and second embodiments.
- the image analysis unit 11 detects the position in the image of each of the plurality of persons (e.g., faces) detected from the image. For example, the image analysis unit 11 detects the coordinates of each of the plurality of faces in a two-dimensional coordinate system whose origin is an arbitrary position of the image (e.g., the lower left) and whose x and y axes extend in arbitrary directions (e.g., the horizontal and vertical directions).
- An analysis result as shown in FIG. 9 is obtained by the image analysis by the image analysis unit 11 of the present embodiment.
- the analysis result of FIG. 9 differs from the analysis result of FIG. 5 described in the first embodiment in that “in-frame coordinates” indicating the position in the frame of each face is included.
- the grouping unit 12 corrects the vehicle-coordinate-system coordinates of each of the plurality of persons (e.g., faces) detected from the plurality of images based on the position (in-frame coordinates) of each person (e.g., face) in its image, and groups the plurality of persons (e.g., faces) based on the corrected coordinates. This will be described below.
- the grouping unit 12 determines whether the faces detected from one image include a pair whose distance in the x-axis direction (the horizontal direction of the image) is equal to or less than a predetermined value. If such a pair exists, the x coordinates (the coordinates in the longitudinal direction of the vehicle) in the vehicle coordinate system of the two faces included in the pair are corrected based on the positions of the two faces in the image.
- the predetermined value can be arbitrarily determined in advance.
- the correction content corresponds to the representative value (e.g., the average) of the x coordinates (the positions in the horizontal direction of the image) of the in-frame coordinates of the two faces.
- for example, when the representative value indicates a position in the image where the above-mentioned interval Dx appears to be relatively large, the grouping unit 12 corrects the x coordinates of the vehicle coordinate system so that the interval Dx decreases.
- for example, a predetermined value is added to the x coordinate in the vehicle coordinate system of one face (the one with the smaller x value), and the predetermined value is subtracted from the x coordinate in the vehicle coordinate system of the other face (the one with the larger x value).
- the predetermined value may correspond to the representative value of the x coordinate of the in-frame coordinates.
- conversely, when the representative value indicates a position in the image where the interval Dx appears to be relatively small, the x coordinates of the vehicle coordinate system are corrected so that the interval Dx becomes large.
- for example, the predetermined value is subtracted from the x coordinate in the vehicle coordinate system of one face (the one with the smaller x value), and the predetermined value is added to the x coordinate in the vehicle coordinate system of the other face (the one with the larger x value).
- the predetermined value may correspond to the representative value of the x coordinate of the in-frame coordinates.
- the grouping unit 12 may hold in advance correction information in which the correction content is determined according to the representative value of the x coordinate of the in-frame coordinates. Then, the grouping unit 12 may determine the correction content using the correction information, and perform the determined correction.
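Such correction information might look like the following sketch. The binning of the representative in-frame x coordinate and the correction amounts are hypothetical examples, not values from the patent:

```python
def correct_pair_x(x_small, x_large, frame_x_rep, correction_table):
    """Correct the vehicle-coordinate-system x coordinates of a face pair.

    x_small / x_large: the pair's smaller and larger x coordinates in the
    vehicle coordinate system.  frame_x_rep: representative (e.g. average)
    in-frame x coordinate of the pair.  correction_table maps a 100-px bin
    of frame_x_rep to a signed amount: a positive value narrows the pair
    (used where Dx appears large), a negative value widens it.
    """
    delta = correction_table.get(frame_x_rep // 100, 0.0)
    return x_small + delta, x_large - delta
```

For instance, a table such as `{0: 2.0, 5: -1.5}` would narrow pairs that appear near the left edge of the image and widen pairs that appear around x = 500 px, leaving pairs at unlisted positions unchanged.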
- the process of grouping a plurality of faces based on the coordinates of the corrected vehicle coordinate system is the same as the process of grouping a plurality of faces based on the coordinates of the vehicle coordinate system described in the first and second embodiments.
- the other configurations of the image analysis unit 11 and the grouping unit 12 are the same as those of the first and second embodiments.
- the same effects as those of the first and second embodiments can be realized. Further, according to the analysis device 10 of the present embodiment, by performing the processing in consideration of the phenomenon described above with reference to FIG. 8, the accuracy of the process of grouping the same person across a plurality of images based on the distance condition can be improved.
- the analysis device 10 of the present embodiment solves the same problem as that of the third embodiment by different means. This will be described below.
- the hardware configuration of the analysis device 10 is the same as in the first to third embodiments.
- FIG. 2 shows an example of a functional block diagram of the analysis device 10 of the present embodiment.
- the analysis device 10 includes an image analysis unit 11, a grouping unit 12, and a count unit 13.
- the configurations of the image analysis unit 11 and the count unit 13 are the same as in the first and second embodiments.
- the grouping unit 12 groups a plurality of faces detected from the images by the method 2 described in the first embodiment. That is, based on the coordinates of the vehicle coordinate system, the grouping unit 12 calculates, between each of the one or more faces detected from the first image and each of the one or more faces detected from the second image captured immediately thereafter, the distance in the x-axis direction and the distance in the y-axis direction, and causes two faces that satisfy the distance condition "the distance in the x-axis direction is less than a first predetermined value and the distance in the y-axis direction is less than a second predetermined value" to belong to the same group.
- the grouping unit 12 sets the second predetermined value to a fixed value, and sets the first predetermined value to a variable value determined based on the positions of the faces in the image.
- specifically, the grouping unit 12 determines a representative value based on the x coordinate of the in-frame coordinates of the first face and the x coordinate of the in-frame coordinates of the second face. For example, the average of the x coordinates of the two sets of in-frame coordinates is used as the representative value.
- the in-frame coordinates are the same concept as in the third embodiment.
- the grouping unit 12 determines a first predetermined value based on the representative value of the x coordinate of the in-frame coordinates of the first face and the second face.
- when the representative value of the x coordinates of the in-frame coordinates of the first face and the second face indicates "a position in the image where the above-mentioned interval Dx appears to be relatively large", the grouping unit 12 makes the first predetermined value larger. Conversely, when the representative value indicates "a position in the image where the interval Dx appears to be relatively small", the grouping unit 12 makes the first predetermined value smaller.
- the grouping unit 12 may hold in advance correspondence information (a table, a function, etc.) in which the first predetermined value is determined according to the representative value of the x coordinates of the in-frame coordinates of the first face and the second face. Then, the grouping unit 12 may determine the distance condition using the correspondence information.
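A sketch of this position-dependent distance condition follows. The dictionary-based correspondence information, the 100-pixel binning, and all threshold values are hypothetical; the patent only requires that the first predetermined value vary with the representative in-frame x coordinate while the second stays fixed:

```python
def first_threshold(frame_x_rep, table, default):
    """Look up the first predetermined value (x-direction threshold) from
    correspondence information keyed on a 100-px bin of the representative
    in-frame x coordinate."""
    return table.get(frame_x_rep // 100, default)

def same_group(face_a, face_b, table, y_threshold, default_x_threshold):
    """Distance condition: x distance below a position-dependent first
    predetermined value AND y distance below a fixed second predetermined
    value.  Each face is a dict with vehicle coordinates 'vx', 'vy' and an
    in-frame x coordinate 'fx'."""
    rep = (face_a["fx"] + face_b["fx"]) // 2  # representative value (average)
    x_ok = abs(face_a["vx"] - face_b["vx"]) < first_threshold(rep, table, default_x_threshold)
    y_ok = abs(face_a["vy"] - face_b["vy"]) < y_threshold
    return x_ok and y_ok
```

The same pair of vehicle-coordinate x values can thus pass the condition at an image position where Dx appears large and fail it where Dx appears small.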
- the other configuration of the grouping unit 12 is the same as that of the first and second embodiments.
- the same effects as those of the first and second embodiments can be realized. Further, according to the analysis device 10 of the present embodiment, by performing the processing in consideration of the phenomenon described above with reference to FIG. 8, the accuracy of the process of grouping the same person across a plurality of images based on the distance condition can be improved.
- 1. An analysis apparatus comprising: image analysis means for detecting a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and detecting coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location; grouping means for grouping the persons detected from the different images based on the coordinates; and counting means for counting the number of groups.
- 2. The analysis apparatus according to 1, wherein the grouping means groups the persons detected from the different images using distances between the coordinates.
- 3. The analysis apparatus according to 2, wherein the grouping means calculates a distance between the coordinates of each of one or more of the persons detected from a first image and the coordinates of each of one or more of the persons detected from a second image, and groups the persons detected from the different images according to a comparison of the distance with a predetermined value.
- 4. The analysis apparatus according to 2, wherein the grouping means calculates a distance between representative coordinates of each of a plurality of the groups and the coordinates of a person not belonging to any of the groups, and groups the persons detected from the different images according to a comparison of the distance with a predetermined value.
- 5. The analysis apparatus according to 3 or 4, wherein, when there is a first person who is a person belonging to a plurality of the groups, the grouping means decides which of the groups the first person is to belong to, based on a feature amount of an image of the first person and feature amounts of images of the persons belonging to each of the groups.
- 6. The analysis apparatus according to 5, wherein the grouping means decides which of the groups the first person is to belong to, based on a feature amount of an image of a person included in an image determined based on a photographing timing of the image including the first person, and the feature amount of the image of the first person.
- 7. The analysis apparatus according to any one of 3 to 6, wherein the image is an image of the vehicle taken from a side of the vehicle, and the grouping means corrects the coordinates of each of the plurality of persons detected from the plurality of images based on a position of each of the persons in the image, and groups the plurality of persons based on the corrected coordinates.
- 8. The analysis apparatus according to any one of 1 to 7, wherein the image analysis means detects a human face from the image and detects coordinates of the face in the coordinate system.
- 9. An analysis method wherein a computer executes: an image analysis step of detecting a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and detecting coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location; a grouping step of grouping the persons detected from the different images based on the coordinates; and a counting step of counting the number of groups.
- 10. A program for causing a computer to function as: image analysis means for detecting a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and detecting coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location; grouping means for grouping the persons detected from the different images based on the coordinates; and counting means for counting the number of groups.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Investigating Or Analysing Biological Materials (AREA)
Abstract
Description
There is provided an analysis apparatus including: image analysis means for detecting a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and detecting coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location; grouping means for grouping the persons detected from the different images based on the coordinates; and counting means for counting the number of groups.
There is provided an analysis method in which a computer executes: an image analysis step of detecting a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and detecting coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location; a grouping step of grouping the persons detected from the different images based on the coordinates; and a counting step of counting the number of groups.
There is provided a program for causing a computer to function as: image analysis means for detecting a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and detecting coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location; grouping means for grouping the persons detected from the different images based on the coordinates; and counting means for counting the number of groups.
First, an overview of the analysis apparatus of the present embodiment will be described. The analysis apparatus detects the number of persons riding in a vehicle based on the analysis results of a plurality of images obtained by continuously photographing the same vehicle.
The grouping unit 12 causes each of the one or more faces detected from a first image, which is the first processing target among the plurality of images to be processed (e.g., the image captured first), to belong to a different group. For example, a different group ID is associated with the face ID of each face detected from the first image and stored in a storage medium such as a memory. When M faces (M is an integer of 1 or more) are detected from the first image, the grouping unit 12 generates M groups and causes each of the M faces to belong to one of them. For example, the grouping unit 12 generates M new group IDs and associates a different group ID with the face ID of each of the M faces detected from the first image (see FIG. 6).
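The initial grouping described above, where each face detected from the first processed image is given its own new group, can be sketched as follows (the face-ID and group-ID formats are hypothetical; the patent only requires that the association be stored in a storage medium such as a memory, cf. FIG. 6):

```python
import itertools

def init_groups(first_image_face_ids, group_ids=None):
    """Assign each face detected from the first processed image to its own
    newly generated group: returns a face_id -> group_id mapping."""
    counter = group_ids if group_ids is not None else itertools.count(1)
    return {face_id: next(counter) for face_id in first_image_face_ids}
```

If M faces are detected from the first image, the mapping contains M entries with M distinct group IDs, one group per face.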
The analysis apparatus 10 of the present embodiment differs from the first embodiment in the functional configurations of the image analysis unit 11 and the grouping unit 12. This will be described below.
First, the problem solved by the analysis apparatus 10 of the present embodiment will be described. The images processed by the analysis apparatus 10 are taken from the side of the vehicle, as shown in FIG. 3. They may be taken from diagonally in front of, from directly beside, or from diagonally behind the vehicle. The shooting direction can be adjusted by adjusting the direction of the optical axis of the camera.
The analysis apparatus 10 of the present embodiment solves the same problem as the third embodiment by different means. This will be described below.
1. An analysis apparatus comprising:
image analysis means for detecting a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and detecting coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location;
grouping means for grouping the persons detected from the different images based on the coordinates; and
counting means for counting the number of groups.
2. The analysis apparatus according to 1, wherein the grouping means groups the persons detected from the different images using distances between the coordinates.
3. The analysis apparatus according to 2, wherein the grouping means calculates a distance between the coordinates of each of one or more of the persons detected from a first image and the coordinates of each of one or more of the persons detected from a second image, and groups the persons detected from the different images according to a comparison of the distance with a predetermined value.
4. The analysis apparatus according to 2, wherein the grouping means calculates a distance between representative coordinates of each of a plurality of the groups and the coordinates of a person not belonging to any of the groups, and groups the persons detected from the different images according to a comparison of the distance with a predetermined value.
5. The analysis apparatus according to 3 or 4, wherein, when there is a first person who is a person belonging to a plurality of the groups, the grouping means decides which of the groups the first person is to belong to, based on a feature amount of an image of the first person and feature amounts of images of the persons belonging to each of the groups.
6. The analysis apparatus according to 5, wherein the grouping means decides which of the groups the first person is to belong to, based on a feature amount of an image of a person included in an image determined based on a photographing timing of the image including the first person, and the feature amount of the image of the first person.
7. The analysis apparatus according to any one of 3 to 6, wherein the image is an image of the vehicle taken from a side of the vehicle, and the grouping means corrects the coordinates of each of the plurality of persons detected from the plurality of images based on a position of each of the persons in the image, and groups the plurality of persons based on the corrected coordinates.
8. The analysis apparatus according to any one of 1 to 7, wherein the image analysis means detects a human face from the image and detects coordinates of the face in the coordinate system.
9. An analysis method wherein a computer executes:
an image analysis step of detecting a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and detecting coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location;
a grouping step of grouping the persons detected from the different images based on the coordinates; and
a counting step of counting the number of groups.
10. A program for causing a computer to function as:
image analysis means for detecting a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and detecting coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location;
grouping means for grouping the persons detected from the different images based on the coordinates; and
counting means for counting the number of groups.
Claims (10)
- An analysis apparatus comprising:
image analysis means for detecting a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and detecting coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location;
grouping means for grouping the persons detected from the different images based on the coordinates; and
counting means for counting the number of groups.
- The analysis apparatus according to claim 1, wherein the grouping means groups the persons detected from the different images using distances between the coordinates.
- The analysis apparatus according to claim 2, wherein the grouping means calculates a distance between the coordinates of each of one or more of the persons detected from a first image and the coordinates of each of one or more of the persons detected from a second image, and groups the persons detected from the different images according to a comparison of the distance with a predetermined value.
- The analysis apparatus according to claim 2, wherein the grouping means calculates a distance between representative coordinates of each of a plurality of the groups and the coordinates of a person not belonging to any of the groups, and groups the persons detected from the different images according to a comparison of the distance with a predetermined value.
- The analysis apparatus according to claim 3 or 4, wherein, when there is a first person who is a person belonging to a plurality of the groups, the grouping means decides which of the groups the first person is to belong to, based on a feature amount of an image of the first person and feature amounts of images of the persons belonging to each of the groups.
- The analysis apparatus according to claim 5, wherein the grouping means decides which of the groups the first person is to belong to, based on a feature amount of an image of a person included in an image determined based on a photographing timing of the image including the first person, and the feature amount of the image of the first person.
- The analysis apparatus according to any one of claims 3 to 6, wherein the image is an image of the vehicle taken from a side of the vehicle, and the grouping means corrects the coordinates of each of the plurality of persons detected from the plurality of images based on a position of each of the persons in the image, and groups the plurality of persons based on the corrected coordinates.
- The analysis apparatus according to any one of claims 1 to 7, wherein the image analysis means detects a human face from the image and detects coordinates of the face in the coordinate system.
- An analysis method wherein a computer executes:
an image analysis step of detecting a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and detecting coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location;
a grouping step of grouping the persons detected from the different images based on the coordinates; and
a counting step of counting the number of groups.
- A program for causing a computer to function as:
image analysis means for detecting a predetermined location of a vehicle and persons from each of a plurality of images obtained by photographing the same vehicle a plurality of times from different directions, and detecting coordinates of each of the plurality of persons in a coordinate system based on the detected predetermined location;
grouping means for grouping the persons detected from the different images based on the coordinates; and
counting means for counting the number of groups.
Priority Applications (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201780092899.6A CN110832493A (zh) | 2017-07-04 | 2017-10-23 | 分析装置、分析方法和程序 |
| US16/628,485 US11138755B2 (en) | 2017-07-04 | 2017-10-23 | Analysis apparatus, analysis method, and non transitory storage medium |
| JP2019528334A JP6863461B2 (ja) | 2017-07-04 | 2017-10-23 | 解析装置、解析方法及びプログラム |
| AU2017422614A AU2017422614A1 (en) | 2017-07-04 | 2017-10-23 | Analysis apparatus, analysis method, and program |
| EP17916838.0A EP3651114B1 (en) | 2017-07-04 | 2017-10-23 | Analysis device, analysis method, and program |
| ES17916838T ES2923934T3 (es) | 2017-07-04 | 2017-10-23 | Dispositivo de análisis, método de análisis y programa |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2017131088 | 2017-07-04 | ||
| JP2017-131088 | 2017-07-04 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019008789A1 true WO2019008789A1 (ja) | 2019-01-10 |
Family
ID=64950806
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2017/038132 Ceased WO2019008789A1 (ja) | 2017-07-04 | 2017-10-23 | 解析装置、解析方法及びプログラム |
Country Status (7)
| Country | Link |
|---|---|
| US (1) | US11138755B2 (ja) |
| EP (1) | EP3651114B1 (ja) |
| JP (1) | JP6863461B2 (ja) |
| CN (1) | CN110832493A (ja) |
| AU (1) | AU2017422614A1 (ja) |
| ES (1) | ES2923934T3 (ja) |
| WO (1) | WO2019008789A1 (ja) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109919064B (zh) * | 2019-02-27 | 2020-12-22 | 湖南信达通信息技术有限公司 | 一种轨道交通车厢内实时人数统计方法和装置 |
| US20230401808A1 (en) * | 2022-06-10 | 2023-12-14 | Plantronics, Inc. | Group framing in a video system |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090309974A1 (en) * | 2008-05-22 | 2009-12-17 | Shreekant Agrawal | Electronic Surveillance Network System |
| JP2012128862A (ja) * | 2010-12-14 | 2012-07-05 | Xerox Corp | Irイメージングシステムを介して得られるir画像における総人数の決定方法 |
| WO2014061195A1 (ja) | 2012-10-19 | 2014-04-24 | 日本電気株式会社 | 乗車人数計数システム、乗車人数計数方法および乗車人数計数プログラム |
| WO2014064898A1 (ja) | 2012-10-26 | 2014-05-01 | 日本電気株式会社 | 乗車人数計測装置、方法およびプログラム |
| WO2015052896A1 (ja) * | 2013-10-09 | 2015-04-16 | 日本電気株式会社 | 乗車人数計測装置、乗車人数計測方法およびプログラム記録媒体 |
| JP2017131088A (ja) | 2016-01-22 | 2017-07-27 | 三菱電機株式会社 | 固定子の製造方法および固定子 |
Family Cites Families (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2012103193A1 (en) * | 2011-01-26 | 2012-08-02 | Magna Electronics Inc. | Rear vision system with trailer angle detection |
| US20180197218A1 (en) * | 2017-01-12 | 2018-07-12 | Verizon Patent And Licensing Inc. | System and method for object detection in retail environment |
| US20180240180A1 (en) * | 2017-02-20 | 2018-08-23 | Grabango Co. | Contextually aware customer item entry for autonomous shopping applications |
| US10628667B2 (en) * | 2018-01-11 | 2020-04-21 | Futurewei Technologies, Inc. | Activity recognition method using videotubes |
| WO2019191002A1 (en) * | 2018-03-26 | 2019-10-03 | Nvidia Corporation | Object movement behavior learning |
| US11312372B2 (en) * | 2019-04-16 | 2022-04-26 | Ford Global Technologies, Llc | Vehicle path prediction |
-
2017
- 2017-10-23 AU AU2017422614A patent/AU2017422614A1/en not_active Abandoned
- 2017-10-23 US US16/628,485 patent/US11138755B2/en active Active
- 2017-10-23 ES ES17916838T patent/ES2923934T3/es active Active
- 2017-10-23 JP JP2019528334A patent/JP6863461B2/ja not_active Expired - Fee Related
- 2017-10-23 CN CN201780092899.6A patent/CN110832493A/zh active Pending
- 2017-10-23 WO PCT/JP2017/038132 patent/WO2019008789A1/ja not_active Ceased
- 2017-10-23 EP EP17916838.0A patent/EP3651114B1/en active Active
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090309974A1 (en) * | 2008-05-22 | 2009-12-17 | Shreekant Agrawal | Electronic Surveillance Network System |
| JP2012128862A (ja) * | 2010-12-14 | 2012-07-05 | Xerox Corp | Irイメージングシステムを介して得られるir画像における総人数の決定方法 |
| WO2014061195A1 (ja) | 2012-10-19 | 2014-04-24 | 日本電気株式会社 | 乗車人数計数システム、乗車人数計数方法および乗車人数計数プログラム |
| WO2014064898A1 (ja) | 2012-10-26 | 2014-05-01 | 日本電気株式会社 | 乗車人数計測装置、方法およびプログラム |
| WO2015052896A1 (ja) * | 2013-10-09 | 2015-04-16 | 日本電気株式会社 | 乗車人数計測装置、乗車人数計測方法およびプログラム記録媒体 |
| JP2017131088A (ja) | 2016-01-22 | 2017-07-27 | 三菱電機株式会社 | 固定子の製造方法および固定子 |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP3651114A4 |
Also Published As
| Publication number | Publication date |
|---|---|
| EP3651114B1 (en) | 2022-06-08 |
| CN110832493A (zh) | 2020-02-21 |
| AU2017422614A1 (en) | 2020-01-16 |
| ES2923934T3 (es) | 2022-10-03 |
| JPWO2019008789A1 (ja) | 2020-03-19 |
| EP3651114A1 (en) | 2020-05-13 |
| EP3651114A4 (en) | 2020-05-27 |
| US20200184674A1 (en) | 2020-06-11 |
| US11138755B2 (en) | 2021-10-05 |
| JP6863461B2 (ja) | 2021-04-21 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP2842075B1 (en) | Three-dimensional face recognition for mobile devices | |
| US10007842B2 (en) | Same person determination device and method, and control program therefor | |
| JP7048157B2 (ja) | 解析装置、解析方法及びプログラム | |
| US12223143B2 (en) | Touch recognition method and device having LiDAR sensor | |
| US20250175581A1 (en) | Gate system, gate device controlling method, and non-transitory computer readable medium storing control program of gate device | |
| CN113632077A (zh) | 识别信息赋予装置、识别信息赋予方法以及程序 | |
| JP6863461B2 (ja) | 解析装置、解析方法及びプログラム | |
| JP2020135076A (ja) | 顔向き検出装置、顔向き検出方法、及びプログラム | |
| JP5448952B2 (ja) | 同一人判定装置、同一人判定方法および同一人判定プログラム | |
| US12154378B2 (en) | Techniques for detecting a three-dimensional face in facial recognition | |
| CN112801038A (zh) | 一种多视点的人脸活体检测方法及系统 | |
| HK40020475A (en) | Analysis device, analysis method, and program | |
| CN117121061A (zh) | 用于生成注意区域的计算机实现的方法 | |
| JP7396476B2 (ja) | 処理装置、処理方法及びプログラム | |
| CN119992613A (zh) | 人脸识别的方法、装置、车辆及存储介质 | |
| WO2023007730A1 (ja) | 情報処理システム、情報処理装置、情報処理方法、及び記録媒体 | |
| CN118736636A (zh) | 脸部检测方法及装置、计算机可读存储介质、终端 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17916838 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2019528334 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 2017422614 Country of ref document: AU Date of ref document: 20171023 Kind code of ref document: A |
|
| ENP | Entry into the national phase |
Ref document number: 2017916838 Country of ref document: EP Effective date: 20200204 |