
WO2020020022A1 - Procédé de reconnaissance visuelle et système associé - Google Patents


Info

Publication number
WO2020020022A1
Authority
WO
WIPO (PCT)
Prior art keywords
eyes
eye
face image
image
specified area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2019/096277
Other languages
English (en)
Chinese (zh)
Inventor
卢帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of WO2020020022A1 publication Critical patent/WO2020020022A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions

Definitions

  • This application relates to visual recognition technology, and in particular, to a visual recognition method and system.
  • the main purpose of the present application is to provide a visual recognition method and a system thereof to solve the problem that the existing technology cannot effectively count the number of times a target is seen.
  • a visual recognition method includes:
  • determining whether each image in the image sequence includes a human face image, and if so, determining whether the eyes are looking at the specified area based on the face image, and counting the number of people who are looking at the specified area.
  • the determining whether the eyes are gazing at the designated area based on the face image specifically includes: determining whether the eyes are gazing at the designated area using the face direction.
  • the determining whether the eyes are looking at a specified area according to the face image includes: detecting gaze angle information of the eyes in the face image, and determining whether the eyes are gazing at the specified area according to the gaze angle information of the eyes.
  • the step of determining whether the eyes are gazing at a specified area according to the face image specifically includes: determining the position information of the eyes in the face image, and determining whether the eyes are gazing at the specified area according to a preset human eye recognition neural network and the position information of the eyes.
  • the gaze angle information of the eye is angle information between the eye and an observation object, and includes a horizontal direction angle and a vertical direction angle.
  • Horizontal angle = (width of left eye white − width of right eye white) / (width of left eye white + width of right eye white) × 60 degrees (1)
  • Vertical angle = (eye height / 2 − distance of the pupil center from the upper eye socket) / eye height × 60 degrees (2), where "eye height" refers to the distance from the upper eyelid to the lower eyelid.
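  • As an illustration only, formulas (1) and (2) can be implemented directly. The following Python sketch is not part of the patent; the function names and the assumption that all widths and heights are pixel measurements on the same eye image are ours.

```python
def horizontal_gaze_angle(left_white_px: float, right_white_px: float) -> float:
    """Formula (1): horizontal gaze angle from the two eye-white widths.

    A result of 0 corresponds to a direct view; the sign indicates which
    side the gaze is deflected toward.
    """
    return (left_white_px - right_white_px) / (left_white_px + right_white_px) * 60.0


def vertical_gaze_angle(eye_height_px: float, pupil_to_upper_px: float) -> float:
    """Formula (2): vertical gaze angle from the pupil's vertical position.

    eye_height_px is the upper-to-lower eyelid distance; pupil_to_upper_px
    is the distance from the pupil centre to the upper eye socket.
    """
    return (eye_height_px / 2.0 - pupil_to_upper_px) / eye_height_px * 60.0
```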
  • the method further includes, for a plurality of imaging devices, recognizing the images respectively acquired by them.
  • the method further includes recording the duration of the eye's fixation on a specified area, and counting the number of persons corresponding to the eyes.
  • An embodiment of the present invention also discloses a visual recognition system, including:
  • a judging module configured to judge whether each image in the image sequence includes a human face image;
  • a determination module configured to determine, if the judgment result of the judging module is yes, whether the eyes are looking at a specified area according to the face image, and to call the statistics module if they are;
  • the determination module specifically includes:
  • a detection unit configured to detect gaze angle information of eyes in the face image
  • the processing unit is configured to determine whether the eye is gazing at a specified area according to the gaze angle information of the eye.
  • since the number of people watching the designated area can be counted through the face image, the present application can effectively count the number of times a target is viewed and the viewing duration.
  • FIG. 1A is a flowchart of a visual recognition method according to an embodiment of the present application.
  • FIG. 1B illustrates a positional relationship among the imaging device, the observation object, and the observer according to an embodiment of the present application
  • FIGS. 2A, 2B, and 3 are schematic diagrams according to an embodiment of the present application.
  • FIG. 4 is a structural block diagram of a visual recognition system according to an embodiment of the present application.
  • FIG. 5 is a structural block diagram of a terminal device according to an embodiment of the present application.
  • FIG. 6 is a structural block diagram of a processor according to an embodiment of the present application.
  • a visual recognition method is provided according to an embodiment of the present application.
  • FIG. 1A is a flowchart of a visual recognition method according to an embodiment of the present application. As shown in FIG. 1A, the method includes at least the following steps:
  • in step S102, an image sequence is acquired; the image sequence may be video image information.
  • video image information may be collected by a video acquisition device, and the video image information includes an image sequence.
  • video image information is collected by a video camera or a camera.
  • the application does not limit the type of the image sequence acquisition device.
  • the image sequence acquisition device may be an infrared video acquisition device or an ordinary video acquisition device.
  • there may be one video capture device (as shown in FIG. 1B) or multiple devices.
  • the video acquisition device is usually set near the observation object, such as around the observation object, especially near the part to be evaluated.
  • the video acquisition device is oriented with respect to the observation object so that the observation characteristics of the observation object (such as the number of observers or the observation time) can be determined.
  • the position information includes the length and width of the object and its height from the ground.
  • multiple camera devices can be installed at the top, bottom, left, and right of the object, each with a specific orientation.
  • to calculate the actual distance between the observer and the observation object, the distance is estimated from the interpupillary distance, using a focal length that is calculated in advance.
  • general focal length = (interpupillary distance in pixels in the imaging device when the observer is 0.5 meters away × 0.5 meters) / real interpupillary distance of the observer
  • real distance between the observer and the imaging device = (real interpupillary distance of the observer × general focal length) / interpupillary distance in pixels in the imaging device
  • the interpupillary distance needs to be normalized using the rotation angle of the face. Since interpupillary distance varies from person to person, only an approximate distance can be estimated here; a binocular image sequence acquisition device can be used to obtain an accurate observer-to-device distance.
  • the horizontal distance between the observer and the camera's line of sight is calculated from the observer's pixel position in the imaging device and the focal distance. Then, according to the specific position of the observation object, it is determined from the observer's viewing angle range whether the observer has gazed at the observation object.
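  • A minimal sketch of the distance estimate described above, assuming a pinhole camera model; the average interpupillary distance of 0.063 m used as a fallback, and all identifiers, are illustrative assumptions rather than values from the patent.

```python
AVG_IPD_M = 0.063  # assumed average adult interpupillary distance, in metres


def general_focal_length(ipd_px_at_half_metre: float,
                         real_ipd_m: float = AVG_IPD_M) -> float:
    """Pre-calculated 'general focal length' from a calibration image
    taken with the observer 0.5 m from the imaging device."""
    return ipd_px_at_half_metre * 0.5 / real_ipd_m


def observer_distance(ipd_px: float, focal: float,
                      real_ipd_m: float = AVG_IPD_M) -> float:
    """Real observer-to-device distance from the observed pixel IPD."""
    return real_ipd_m * focal / ipd_px


def horizontal_offset(face_x_px: float, image_center_x_px: float,
                      distance_m: float, focal: float) -> float:
    """Horizontal distance (metres) between the observer and the camera's
    optical axis, back-projected at the observer's estimated distance."""
    return (face_x_px - image_center_x_px) * distance_m / focal
```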
  • in step S104, it is determined whether the video image information includes a human face image, and if so, it is determined whether the eyes are looking at the designated area according to the human face image.
  • video image information (video stream) is converted into picture information for processing.
  • a frame picture is obtained from the acquired image sequence, the picture is converted into a grayscale picture, and image binarization processing is applied.
  • a pre-trained face detection neural network may be used to determine whether a picture contains a human face image, and if so, one or more detected human face image regions are extracted.
  • the face detection neural network is trained on a sample set using a fusion of local features and global features, and then extracts the feature information (feature point information) of key parts of the face.
  • the face detection neural network can be trained on pre-prepared face images and non-face images to obtain a model. For pictures determined to contain a real face, the coordinates of the face area are given.
  • feature point extraction, for example with dlib or an ASM (active shape model), can locate specific facial landmarks, including the position coordinates of the left and right eyes.
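  • As one plausible realization (not the patent's own pipeline), dlib's frontal face detector and its publicly available 68-point landmark predictor can supply these eye coordinates; the model file name below is the standard dlib download, and everything else is an illustrative assumption.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# Standard dlib 68-point model: points 36-41 outline one eye, 42-47 the other.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")


def eye_coordinates(frame):
    """Return one (eye_a_points, eye_b_points) tuple per detected face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = []
    for face in detector(gray):
        shape = predictor(gray, face)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        eyes.append((pts[36:42], pts[42:48]))
    return eyes
```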
  • the black iris divides the eye white into two parts. When looking straight ahead, the widths of these two eye whites are approximately equal; when looking obliquely, the eye white on one side is wider than the other. In this embodiment, when the difference between the left and right eye-white widths is smaller than 1/12 of the eyeball width, the gaze is determined to be a direct view; otherwise it is an oblique view, and the horizontal oblique angle is calculated according to formula (1).
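  • The direct-view test just described is a one-liner; this sketch reuses horizontal_gaze_angle from the formula sketch above and assumes all widths are pixel measurements on the same image.

```python
def classify_gaze(left_white_px: float, right_white_px: float,
                  eyeball_px: float):
    """Direct view if the eye-white widths differ by less than 1/12 of the
    eyeball width; otherwise oblique, with the angle from formula (1)."""
    if abs(left_white_px - right_white_px) < eyeball_px / 12.0:
        return "direct", 0.0
    return "oblique", horizontal_gaze_angle(left_white_px, right_white_px)
```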
  • the vertical angle of view is calculated directly when the pupil position deviates from the center of the eye; that is, the vertical angle of view can be determined according to the projected distance by which the pupil is shifted upward or downward from the center (for example, a given shift corresponding to 30 degrees, and so on).
  • the vertical angle can also be calculated as follows:
  • the "eye height” refers to the distance from the upper eyelid to the lower eyelid.
  • for a plurality of imaging devices, the images they acquire need to be synthesized into a single panoramic picture before recognition; this increases the accuracy of image recognition.
  • a human eye recognition neural network may also be used: model training is performed on a large number of local images of left and right eyes through a deep neural network to generate an eye model library. This makes it possible to directly recognize eye gaze directions such as front view and oblique view without detecting the whole face.
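  • The patent does not specify the network architecture; purely as a hedged illustration, a small convolutional classifier over 32×32 grayscale eye crops (here in PyTorch, with three assumed gaze classes) could stand in for such an eye model.

```python
import torch.nn as nn


class EyeGazeNet(nn.Module):
    """Tiny CNN classifying a 1x32x32 grayscale eye crop into gaze classes
    (e.g. front view, oblique view, looking away) -- an illustrative
    stand-in for the unspecified 'human eye recognition neural network'."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 16x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```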
  • the algorithm is used to process the picture to find the eye position distribution, so as to determine the eye's gaze angle information, and to determine whether the eye is looking at a specified area.
  • alternatively, the face direction may be used to determine the gaze angle information of the eyes, and thus whether the eyes are looking at the specified area, allowing detection of the eyes and calculation of eye angles to be skipped.
  • in step S106, the number of people corresponding to the eyes is counted.
  • the designated area may be a designated area relative to the target; for example, it may be a visible area centered on the camera and within a certain range. It may also be another designated area, which is not limited in this application.
  • the pre-trained human eye recognition neural network is used to determine the gaze angle of the left and right eyeballs in the face picture, yielding the angle value, or an approximate angle value, relative to the camera. While the eyes are looking at the designated area, counting is performed.
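  • Putting the estimated distance, horizontal offset, and gaze angles together, a simple geometric test (our assumption of how the check could be done, with illustrative sign conventions) decides whether the gaze ray lands inside a rectangle centred on the camera, such as the 2 m × 1 m designated area used in the embodiment below.

```python
import math


def gazes_at_area(distance_m: float, offset_m: float,
                  h_angle_deg: float, v_angle_deg: float,
                  area_w_m: float = 2.0, area_h_m: float = 1.0) -> bool:
    """Project the gaze ray onto the camera's plane and test whether it
    falls inside an area_w x area_h rectangle centred on the camera.
    distance_m and offset_m come from the interpupillary-distance estimate."""
    x = offset_m + distance_m * math.tan(math.radians(h_angle_deg))
    y = distance_m * math.tan(math.radians(v_angle_deg))
    return abs(x) <= area_w_m / 2.0 and abs(y) <= area_h_m / 2.0
```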
  • the gaze angle information of the eye may also be the gaze angle of the eyeball or the pupil.
  • the duration that the eye fixes on the designated area is recorded, and when the eye fixes on the designated area for a predetermined time, the number of people corresponding to the eyes is counted.
  • the features of the face are extracted, and information such as the face features and fixation time is stored in the cache.
  • the feature points of the face can be stored in an array. New pictures are repeatedly obtained from the video stream and the above steps repeated until it is detected that a saved face no longer observes the designated area, at which point the total gaze time corresponding to that face is recorded.
  • a cache clearing period is set; faces with the same features are not counted again before the cache expires, and only their gaze time is accumulated.
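  • The caching and counting logic above can be sketched as follows; the face key is assumed to be hashable (for example a quantised feature vector), and the minimum fixation time and cache period are illustrative parameters, not values from the patent.

```python
import time


class GazeCounter:
    """Counts each face at most once per cache period and accumulates
    its total gaze time on the designated area."""

    def __init__(self, min_fixation_s: float = 1.0,
                 cache_period_s: float = 300.0):
        self.min_fixation_s = min_fixation_s
        self.cache_period_s = cache_period_s
        self.cache = {}  # face key -> [first_seen_time, accumulated_gaze_s]
        self.count = 0

    def observe(self, face_key, gazing: bool, dt_s: float) -> None:
        """Call once per processed frame for each recognised face."""
        now = time.time()
        entry = self.cache.get(face_key)
        if entry is None or now - entry[0] > self.cache_period_s:
            entry = self.cache[face_key] = [now, 0.0]  # (re)start this face
        if gazing:
            before = entry[1]
            entry[1] += dt_s
            # Count the person once, when fixation first crosses the threshold.
            if before < self.min_fixation_s <= entry[1]:
                self.count += 1
```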
  • a 2m ⁇ 1m area centered on a camera is a designated area.
  • the picture shown includes only one human face.
  • the human eyes shown in FIG. 2A are looking directly at the camera, and the gaze area is the designated area; the statistical standard is therefore met, the person is counted, and related information is recorded;
  • the human eyes shown in FIG. 2B are looking obliquely at the camera, but the gaze area is still the designated area; the statistical standard is therefore met, the person is counted, and related information is recorded.
  • the picture shown includes four human faces. Although the left-most face is turned toward the camera, its left and right eyes are not looking toward the camera but upward, so the gaze area is not the designated area and that person is not counted. The left and right eyes of the other people in the picture are looking directly toward the camera, so they meet the statistical standard, are counted, and related information is recorded.
  • FIG. 4 is a structural block diagram of a visual recognition system according to an embodiment of the present application. As shown in FIG. 4, it includes:
  • An obtaining module 41 for obtaining an image sequence
  • a judging module 42 configured to judge whether each image in the image sequence includes a human face image;
  • a determination module 43 configured to determine, if the judgment result of the judging module is yes, whether the eyes are gazing at a specified area according to the face image, and to call the statistics module if they are;
  • the statistics module 44 is configured to count the number of people who are watching the designated area.
  • the determination module 43 specifically includes:
  • a detecting unit 431, configured to detect gaze angle information of eyes in the face image
  • the processing unit 432 is configured to determine whether the eyes are gazing at a specified area according to the gaze angle information of the eyes.
  • the determination module 43 is further configured to determine position information of the eyes in the face image, and to determine gaze angle information of the eyes according to the position information of the eyes.
  • the determination module 43 is further configured to determine the position information of the eyes in the face image according to a preset human eye recognition neural network.
  • the gaze angle information of the eyes includes an angle with respect to a device that collects video image information.
  • the processing unit 432 is further configured to count the number of persons corresponding to the eyes if the eyes are gazing at a specified area for a predetermined time.
  • the system can be implemented as a terminal device.
  • the terminal device 50 includes a memory 51 and a processor 52.
  • the memory 51 is configured to store a program.
  • the memory 51 may be configured to store various other data to support operations on the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, messages, pictures, audio, video, etc.
  • the memory 51 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, etc.
  • the processor 52 is coupled to the memory 51 and configured to process a program stored in the memory. Referring to FIG. 6, the processor 52 further includes:
  • An obtaining module 61 configured to obtain video image information
  • a judging module 62 configured to judge whether the video image information includes a human face image
  • a determination module 63 configured to determine gaze angle information of the eyes in the face image if the judgment result of the judging module is yes;
  • the processing module 64 is configured to determine whether the eyes are gazing at a specified area according to the gaze angle information of the eyes, and if yes, count the number of persons corresponding to the eyes.
  • the determination module 63 is further configured to determine position information of the eyes in the face image, and to determine gaze angle information of the eyes according to the position information of the eyes.
  • the determination module 63 is further configured to determine the position information of the eyes in the face image according to a preset human eye recognition neural network.
  • the gaze angle information of the eyes includes an angle with respect to a device that collects video image information.
  • the processing module 64 is further configured to count the number of persons corresponding to the eyes if the eyes are gazing at a specified area for a predetermined time.
  • the terminal device 50 further includes a communication component 53, a power supply component 54, an audio component 55, a display 56, and other components. It should be noted that FIG. 5 schematically shows only some components; this does not mean that the terminal device includes only the components shown in the figure.
  • the communication component 53 is configured to facilitate wired or wireless communication between the terminal device and other devices.
  • the terminal device can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, or 4G, or a combination thereof.
  • the communication component 53 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • the power source component 54 provides power to various components of the terminal device.
  • the power component 54 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal equipment. In suitable outdoor areas, battery modules and solar panels can be added to solve the battery charging problem.
  • the audio component 55 is configured to output and/or input audio signals.
  • the audio component 55 includes a microphone (MIC) configured to receive an external audio signal when the terminal device is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in the memory 51 or transmitted via the communication component 53.
  • the audio component 55 further includes a speaker for outputting audio signals.
  • the display 56 includes a screen, which may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor can not only sense the boundary of a touch or slide action, but also detect duration and pressure related to the touch or slide operation.
  • by determining the gaze angle information of the eyes, judging whether the eyes are looking at the designated area based on the gaze angle information, and counting the number of people who are looking at the designated area, the application can effectively count the number of times the target is viewed.
  • the disclosed systems, devices, and methods may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the unit is only a logical function division.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, which may be electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist separately physically, or two or more units may be integrated into one unit.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the existing technology, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, or a network-side device, etc.) to perform all or part of the steps of the method described in each embodiment of the present invention.
  • the aforementioned storage media include: USB flash drives, mobile hard disks, read-only memory (ROM), random access memory (RAM), SSDs, hard disks, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Ophthalmology & Optometry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a visual recognition method and an associated system. The method comprises: acquiring an image sequence (102); determining whether each image in the image sequence contains a face image and, if so, determining from the face image whether a subject's eyes are focused on a designated region (104); and, if so, counting the number of people focusing on the designated region (106). The method uses face images to count the number of people focusing on a designated region, and enables effective counting of the number of times a target is viewed and of the viewing duration.
PCT/CN2019/096277 2018-07-25 2019-07-17 Procédé de reconnaissance visuelle et système associé Ceased WO2020020022A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810829864 2018-07-25
CN201810829864.2 2018-07-25

Publications (1)

Publication Number Publication Date
WO2020020022A1 (fr) 2020-01-30

Family

ID=69180347

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/096277 Ceased WO2020020022A1 (fr) 2018-07-25 2019-07-17 Procédé de reconnaissance visuelle et système associé

Country Status (2)

Country Link
CN (1) CN110765828A (fr)
WO (1) WO2020020022A1 (fr)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444789A (zh) * 2020-03-12 2020-07-24 深圳市时代智汇科技有限公司 一种基于视频感应技术的近视预防方法及其系统
CN111796874A (zh) * 2020-06-28 2020-10-20 北京百度网讯科技有限公司 一种设备唤醒的方法、装置、计算机设备和存储介质
CN112001230A (zh) * 2020-07-09 2020-11-27 浙江大华技术股份有限公司 睡觉行为的监控方法、装置、计算机设备和可读存储介质
CN112053600A (zh) * 2020-08-31 2020-12-08 上海交通大学医学院附属第九人民医院 眼眶内窥镜导航手术训练方法、装置、设备、及系统
CN112541400A (zh) * 2020-11-20 2021-03-23 小米科技(武汉)有限公司 基于视线估计的行为识别方法及装置、电子设备、存储介质
CN112784752A (zh) * 2021-01-22 2021-05-11 苏州朗捷通智能科技有限公司 一种基于视频图像可视化的人脸识别装置
CN113791559A (zh) * 2021-09-10 2021-12-14 佛山市南海区苏科大环境研究院 一种城市景观喷泉的节能控制方法及控制系统
CN113988931A (zh) * 2021-10-29 2022-01-28 杭州青橄榄网络技术有限公司 校园多媒体广告投放效果反馈方法、系统及装置
CN114241555A (zh) * 2021-12-09 2022-03-25 上海维智卓新信息科技有限公司 基于混合感知的观众数量确定方法及装置
CN115731611A (zh) * 2022-11-03 2023-03-03 电子科技大学 一种精准思政管理系统

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353461B (zh) * 2020-03-11 2024-01-16 京东科技控股股份有限公司 广告屏的关注度检测方法、装置、系统及存储介质
CN112132616B (zh) * 2020-09-23 2021-06-01 广东省广汽车数字营销有限公司 一种基于大数据的移动多媒体广告智能推送管理系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101335864A (zh) * 2007-06-28 2008-12-31 当代天启技术(北京)有限公司 统计户外视频收视人数的方法和系统
US20170272768A1 (en) * 2016-03-17 2017-09-21 Facebook, Inc. System and method for data compressing optical sensor data prior to transferring to a host system
CN107909057A (zh) * 2017-11-30 2018-04-13 广东欧珀移动通信有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN108021895A (zh) * 2017-12-07 2018-05-11 深圳云天励飞技术有限公司 人数统计方法、设备、可读存储介质及电子设备
CN108021852A (zh) * 2016-11-04 2018-05-11 株式会社理光 一种人数统计方法、人数统计系统及电子设备
CN108197570A (zh) * 2017-12-28 2018-06-22 珠海市君天电子科技有限公司 一种人数统计方法、装置、电子设备及存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1174337C (zh) * 2002-10-17 2004-11-03 南开大学 识别人眼注视与否的方法和装置及其应用
WO2014208761A1 (fr) * 2013-06-28 2014-12-31 株式会社Jvcケンウッド Dispositif d'aide au diagnostic et procédé d'aide au diagnostic
CN106503692A (zh) * 2016-11-25 2017-03-15 巴士在线科技有限公司 一种公交移动电视视频内容受众的统计方法及系统
CN107609896A (zh) * 2017-08-15 2018-01-19 福州东方智慧网络科技有限公司 一种统计广告机的广告内容有效受众量的方法
CN107633240B (zh) * 2017-10-19 2021-08-03 京东方科技集团股份有限公司 视线追踪方法和装置、智能眼镜

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101335864A (zh) * 2007-06-28 2008-12-31 当代天启技术(北京)有限公司 统计户外视频收视人数的方法和系统
US20170272768A1 (en) * 2016-03-17 2017-09-21 Facebook, Inc. System and method for data compressing optical sensor data prior to transferring to a host system
CN108021852A (zh) * 2016-11-04 2018-05-11 株式会社理光 一种人数统计方法、人数统计系统及电子设备
CN107909057A (zh) * 2017-11-30 2018-04-13 广东欧珀移动通信有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN108021895A (zh) * 2017-12-07 2018-05-11 深圳云天励飞技术有限公司 人数统计方法、设备、可读存储介质及电子设备
CN108197570A (zh) * 2017-12-28 2018-06-22 珠海市君天电子科技有限公司 一种人数统计方法、装置、电子设备及存储介质

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444789A (zh) * 2020-03-12 2020-07-24 深圳市时代智汇科技有限公司 一种基于视频感应技术的近视预防方法及其系统
CN111444789B (zh) * 2020-03-12 2023-06-20 深圳市时代智汇科技有限公司 一种基于视频感应技术的近视预防方法及其系统
CN111796874A (zh) * 2020-06-28 2020-10-20 北京百度网讯科技有限公司 一种设备唤醒的方法、装置、计算机设备和存储介质
CN112001230A (zh) * 2020-07-09 2020-11-27 浙江大华技术股份有限公司 睡觉行为的监控方法、装置、计算机设备和可读存储介质
CN112053600A (zh) * 2020-08-31 2020-12-08 上海交通大学医学院附属第九人民医院 眼眶内窥镜导航手术训练方法、装置、设备、及系统
CN112053600B (zh) * 2020-08-31 2022-05-03 上海交通大学医学院附属第九人民医院 眼眶内窥镜导航手术训练方法、装置、设备、及系统
CN112541400A (zh) * 2020-11-20 2021-03-23 小米科技(武汉)有限公司 基于视线估计的行为识别方法及装置、电子设备、存储介质
CN112784752A (zh) * 2021-01-22 2021-05-11 苏州朗捷通智能科技有限公司 一种基于视频图像可视化的人脸识别装置
CN113791559A (zh) * 2021-09-10 2021-12-14 佛山市南海区苏科大环境研究院 一种城市景观喷泉的节能控制方法及控制系统
CN113988931A (zh) * 2021-10-29 2022-01-28 杭州青橄榄网络技术有限公司 校园多媒体广告投放效果反馈方法、系统及装置
CN114241555A (zh) * 2021-12-09 2022-03-25 上海维智卓新信息科技有限公司 基于混合感知的观众数量确定方法及装置
CN115731611A (zh) * 2022-11-03 2023-03-03 电子科技大学 一种精准思政管理系统

Also Published As

Publication number Publication date
CN110765828A (zh) 2020-02-07

Similar Documents

Publication Publication Date Title
WO2020020022A1 (fr) Procédé de reconnaissance visuelle et système associé
TWI768641B (zh) 監測方法、電子設備和儲存介質
CN109086726B (zh) 一种基于ar智能眼镜的局部图像识别方法及系统
CN111552076B (zh) 一种图像显示方法、ar眼镜及存储介质
CN108427503B (zh) 人眼追踪方法及人眼追踪装置
CN105184246B (zh) 活体检测方法和活体检测系统
KR101569268B1 (ko) 얼굴 구성요소 거리를 이용한 홍채인식용 이미지 획득 장치 및 방법
CN111401296B (zh) 一种行为分析方法、设备及装置
WO2020024416A1 (fr) Procédé et appareil anti-observation pour un terminal intelligent, dispositif informatique et support de stockage
CN105426827A (zh) 活体验证方法、装置和系统
CN109375765B (zh) 眼球追踪交互方法和装置
CN113197542B (zh) 一种在线自助视力检测系统、移动终端及存储介质
CN108875468B (zh) 活体检测方法、活体检测系统以及存储介质
CN104143086A (zh) 人像比对在移动终端操作系统上的应用技术
JP5225870B2 (ja) 情動分析装置
CN111639702A (zh) 一种多媒体数据分析方法、设备、服务器及可读存储介质
JP2020140630A (ja) 注視点推定システム、注視点推定方法、注視点推定プログラム、及び、これが記録された情報記録媒体
CN106618479B (zh) 瞳孔追踪系统及其方法
JP2024521292A (ja) ビデオ会議エンドポイント
CN112257507A (zh) 一种基于人脸瞳间距判断距离及人脸有效性的方法和装置
CN113960788A (zh) 图像显示方法、装置、ar眼镜及存储介质
CN115083325A (zh) 一种设备控制方法、装置、电子设备和存储介质
CN115331282B (zh) 一种智能视力测试系统
CN115240222A (zh) 一种青少年观看习惯的监测装置及监测方法
CN110162949B (zh) 控制图像显示的方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19840837

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19840837

Country of ref document: EP

Kind code of ref document: A1