
WO2013182101A1 - Method, device and mobile terminal for determining a target person - Google Patents


Info

Publication number
WO2013182101A1
WO2013182101A1 (PCT/CN2013/078185, CN2013078185W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
image data
matching
person
feature information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2013/078185
Other languages
English (en)
Chinese (zh)
Inventor
陈硕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Publication of WO2013182101A1 publication Critical patent/WO2013182101A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Definitions

  • Method, device and mobile terminal for determining a target person
  • the present invention relates to the field of communications, and in particular to a method, a device, and a mobile terminal for determining a target person.
  • the invention provides a method, a device, and a mobile terminal for determining a target person, so as to overcome the low efficiency, inconvenience, and inaccuracy of prior-art tracing methods.
  • a method for determining a target person includes: acquiring first feature information of a person in image data; matching the first feature information with second feature information of a target person obtained in advance; and determining, according to the matching result, whether the person in the image data is the target person.
  • the first feature information includes: first face information of a person in the image data; and the second feature information includes: second face information of the target person.
  • determining whether the person in the image data is the target person according to the matching result includes: determining whether a matching degree of the first face information and the second face information meets a preset condition; and, if the preset condition is met, determining that the person in the image data is the target person.
  • the method further includes: the first feature information further includes first location information of the person in the image data, and if the person in the image data is determined to be the target person, reporting to the server the first feature information of the person in the image data and the first location information of the person in the image data.
  • the second feature information further includes: second location information of the target person; matching the first feature information with the second feature information includes: matching the first location information with the second location information; determining whether the first location information intersects the second location information; and, if there is an intersection, matching the first face information with the second face information.
  • the method further includes: determining, according to the determination result, a matching data set, where the matching data set is the set of target persons whose second location information intersects the first location information; matching the first face information with the second face information comprises matching the first face information with the second face information of the target persons in the matching data set.
  • acquiring the first feature information of the person in the image data includes: detecting whether there is new image data in the storage device and, if there is a new image, acquiring the first face information from the newly added image data; and/or directly acquiring image data by an imaging device and acquiring the first face information from the acquired image data.
  • the method further includes: sending a request to the server to acquire the second feature information and receiving the second feature information returned by the server according to the request; and/or directly receiving the second feature information pushed by the server.
  • the method further includes: displaying the determination result and/or the matching result to the user.
  • a determining apparatus for a target person includes: an acquiring module, configured to acquire first feature information of a person in the image data; a matching module, configured to match the first feature information with the second feature information of the target person obtained in advance; and a determining module, configured to determine, according to the matching result of the matching module, whether the person in the image data is the target person.
  • the acquiring module is configured to acquire first face information of a person in the image data.
  • the determining module includes: a first determining unit, configured to determine whether a matching degree of the first face information and the second face information in the second feature information meets a preset condition; and a determining unit, configured to determine that the person in the image data is the target person if the determination result of the first determining unit is that the preset condition is met.
  • the device further includes: a reporting module, configured to report to the server the first feature information of the person in the image data and the first location information of the person in the image data if the determination result of the first determining unit is that the preset condition is met.
  • the matching module includes: a first matching unit, configured to match the first location information with the second location information of the second feature information; a second determining unit, configured to determine whether the first location information intersects the second location information; and a second matching unit, configured to match the first face information with the second face information when the determination result of the second determining unit is that there is an intersection.
  • the matching module further includes: a third determining unit, configured to determine a matching data set according to the determination result of the second determining unit, where the matching data set is the set of target persons whose second location information intersects the first location information; the second matching unit is configured to match the first face information with the second face information of the target persons in the matching data set.
  • the acquiring module includes: a first acquiring unit, configured to detect whether there is new image data in the storage device and, if there is a new image, acquire the first face information from the newly added image data; and/or a second acquiring unit, configured to directly acquire image data by the imaging device and acquire the first face information from the acquired image data.
  • a mobile terminal comprising any of the above determining devices for a target person provided by the present invention.
  • the first feature information of the person in the image data is acquired, the first feature information is matched with the second feature information of the target person obtained in advance, and whether the person in the image data is the target person is determined according to the matching result.
  • FIG. 1 is a schematic diagram of a target person's determination system in accordance with an embodiment of the present invention
  • FIG. 2 is a structural block diagram of a determining device of a target person according to an embodiment of the present invention
  • FIG. 3 is a flow chart of a method of determining a target person according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of a method for determining a target person according to Embodiment 1 of the present invention; and
  • FIG. 5 is a flowchart of a method for determining a target person according to Embodiment 2 of the present invention.
  • Preferred embodiments of the invention
  • a database of the target person may be established, and information of the target person may be stored, for example, image data of the target person, information of a location where the target person may appear, and the like.
  • the target person information is pushed to a mobile terminal having a camera and a geographic positioning system; the mobile terminal analyzes the information of the person in the captured image data against the information of the target person, and determines, according to the analysis result, whether the person in the image data captured by the mobile terminal is the target person.
  • FIG. 1 is a schematic diagram of a target person's determination system according to an embodiment of the present invention.
  • the system includes a server and a mobile terminal.
  • a target person information database storing the information of the target person is established on the server side.
  • the mobile terminal acquires the information of the target person from the server, analyzes the information of the person in the image data in the mobile terminal, and determines, according to the result of the analysis, whether the person in the image data is the target person.
  • the mobile terminal obtains the tracing database information on the server through the wireless connection.
  • each data item of the tracing database can include feature information of the person being sought, as well as locations where the person may appear and other related information; these items are synchronized to the local tracing database.
  • the mobile terminal can be configured to automatically analyze photographs in the background after they are saved, for example using facial recognition, geographic location, and pattern recognition technologies, to match the information of the person in the photo against the data in the local tracing database, and to determine, according to the matching result, whether the person in the image data is the target person.
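The background matching step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `extract_face_features`, the list-based "feature vectors", and the toy `similarity` function are all assumptions standing in for a real face-recognition library.

```python
def extract_face_features(photo):
    """Stand-in for a real face detector: returns a list of feature
    vectors, one per face found in the photo (empty if no face)."""
    return photo.get("faces", [])

def similarity(a, b):
    """Toy matching degree in [0, 1]: the fraction of identical feature
    values. A real system would use an embedding distance instead."""
    hits = sum(1 for x, y in zip(a, b) if x == y)
    return hits / max(len(a), len(b), 1)

def analyze_photo(photo, local_db, threshold=0.8):
    """Compare every face in a saved photo against the local tracing
    database; return (name, score) pairs that meet the threshold."""
    matches = []
    for face in extract_face_features(photo):
        for entry in local_db:
            score = similarity(face, entry["face"])
            if score >= threshold:
                matches.append((entry["name"], score))
    return matches
```

The 0.8 threshold is illustrative; the text leaves the preset condition open.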
  • the mobile terminal can send the tracing result to the server through a wireless connection. Moreover, the mobile terminal can promptly present the search result to the user (for example, by sound, vibration, text, or graphic display).
  • a determining device for a target person is provided.
  • the device mainly includes: an obtaining module 10, a matching module 20, and a determining module 30.
  • the obtaining module 10 is configured to acquire first feature information of a person in the image data.
  • the matching module 20 is connected to the acquiring module 10 and configured to match the first feature information with the second feature information of the target person obtained in advance.
  • the determining module 30 is connected to the matching module 20 and configured to determine whether the person in the image data is the target person according to the matching result of the matching module.
  • the first feature information acquired by the obtaining module 10 may include first face information of a person in the image data.
  • matching the first feature information with the second feature information by the matching module 20 may refer to matching the first face information with the second face information in the second feature information of the target person.
  • the determining module 30 may determine whether the person in the image data is the target person based on the matching result of the first face information and the second face information.
  • the determining module 30 may include: a first determining unit, configured to determine whether a matching degree of the first face information and the second face information in the second feature information meets a preset condition; and a determining unit, configured to determine that the person in the image data is the target person if the determination result of the first determining unit is that the preset condition is met.
  • the person search result can also be reported to the server.
  • the first feature information of the person in the image data further includes first location information. For example, the position at which the image data is captured is obtained by the positioning device, from which the position of the person in the image data is determined.
  • the apparatus of the embodiment of the present invention may further include: a reporting module, configured to report to the server the first feature information of the person in the image data and the first location information of the person in the image data if the determination result of the first determining unit is that the preset condition is met.
  • the reported information may further include a matching result and the like.
  • the second feature information may further include second location information of the target person, which may be information on locations where the target person may appear.
  • the matching module may include: a first matching unit configured to match the first location information with the second location information of the second feature information; and the second determining unit is configured to determine Whether the first location information and the second location information overlap; the second matching unit is configured to match the first facial information with the second facial information if the determination result of the second determining unit is that there is an intersection.
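For one plausible representation of "location information", the intersection check performed by the second determining unit could be an axis-aligned bounding-box overlap test. The `(min_lat, min_lon, max_lat, max_lon)` tuple format is an assumption; the patent does not fix a representation.

```python
def boxes_intersect(a, b):
    """Overlap test for two (min_lat, min_lon, max_lat, max_lon) boxes:
    they intersect unless one lies entirely past the other on some axis."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])
```

Touching boxes count as intersecting here; change the comparisons to `<=` if a shared edge should not count.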
  • the matching module may further include: a third determining unit, configured to determine a matching data set according to the determination result of the second determining unit, where the matching data set is the set of target persons whose second location information intersects the first location information;
  • the second matching unit is configured to match the first face information with the second face information of the target persons in the matching data set.
  • the acquiring module 10 may directly acquire image data by using an image capturing device, and acquire first face information of a person in the image data from the acquired image data.
  • the obtaining module 10 can also detect, in real time or periodically, whether there is new image data in the storage device, and if so, obtain the first face information from the newly added image data.
  • the obtaining module may include: a first acquiring unit, configured to detect whether there is new image data in the storage device and, if there is newly added image data, acquire the first face information from it; and/or a second acquiring unit, configured to directly acquire image data by the imaging device and acquire the first face information from the acquired image data.
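A simple way to "detect whether there is new image data in the storage device", as the first acquiring unit does, is to diff the directory listing against the set of files already processed. The polling approach and the file-extension filter are illustrative assumptions; a production implementation would more likely use filesystem change notifications.

```python
import os

def detect_new_images(storage_dir, seen):
    """Return newly added image files in storage_dir, comparing the
    current listing against the `seen` set (updated in place)."""
    current = {f for f in os.listdir(storage_dir)
               if f.lower().endswith((".jpg", ".jpeg", ".png"))}
    new = current - seen
    seen |= current
    return sorted(new)
```

A first call reports all existing images; subsequent calls report only additions since the previous call.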
  • the second feature information of the target person may be acquired from the server.
  • a request may be sent to the server to acquire the second feature information, and the second feature information returned by the server according to the request is received. For example, the user may specify which target person information is of interest, such as obtaining only information about target persons in his or her own area.
  • the second feature information pushed by the server may be directly received, and the server may periodically push the second feature information of the target person.
  • the method further includes: displaying the determination result and/or the matching result to the user.
  • a mobile terminal comprising the determining device of any of the above-mentioned target persons according to the embodiment of the present invention.
  • a method for determining a target person is also provided.
  • FIG. 3 is a flowchart of a method for determining a target person according to an embodiment of the present invention. As shown in FIG. 3, the method mainly includes steps S302 to S306.
  • Step S302 The mobile terminal acquires first feature information of a person in the image data.
  • the first feature information may include: first face information of a person in the image data.
  • the image data can be directly acquired by the image capturing device, and the first face information of the person in the image data can be obtained from the acquired image data.
  • whether there is new image data in the storage device can be detected in real time or periodically, and if so, the first face information is acquired from the newly added image data.
  • the first feature information may further include: first location information of the person in the image data. For example, the position at which the image data is captured is acquired by the positioning device, thereby determining the position of the person in the image data.
  • Step S304 The mobile terminal matches the first feature information with the second feature information of the target person obtained in advance.
  • the second feature information may include: second face information of the target person. Matching the first feature information with the second feature information includes: matching the first face information with the second face information.
  • the second feature information may further include: second location information of the target person; matching the first feature information with the second feature information includes: matching the first location information with the second location information; determining the first Whether the location information intersects with the second location information; if there is an intersection, the first face information is matched with the second face information.
  • the range of matching can be narrowed in this way to improve matching efficiency.
  • the mobile terminal may determine a matching data set according to the foregoing judgment result, wherein the matching data set is the set of target persons whose second location information intersects the first location information, and match the first face information with the second face information of the target persons in the matching data set.
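Building the matching data set amounts to filtering the local tracing database by location before any face matching runs. The `locations` field name and the pluggable `intersects` predicate are assumptions for illustration.

```python
def build_matching_set(first_location, tracing_db, intersects):
    """Keep only the target persons whose possible locations (second
    location information) intersect the photo's location (first location
    information); face matching then runs over this smaller set."""
    return [entry for entry in tracing_db
            if any(intersects(first_location, loc)
                   for loc in entry["locations"])]
```

Any representation of locations works, as long as `intersects` compares two of them.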
  • Step S306 the mobile terminal determines, according to the matching result, whether the person in the image data is the target person.
  • the mobile terminal may determine whether the matching degree of the first facial information and the second facial information meets the preset condition; if the preset condition is met, determine that the person in the image data is the target person.
  • the matching degree threshold is set in advance, on either the terminal side or the server side. If the matching degree reaches the threshold, it is determined that the person in the image data is the target person.
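The threshold decision itself is a one-liner; the 0.8 default below is purely illustrative, since the text leaves the actual value to be configured on the terminal or the server.

```python
def is_target_person(matching_degree, threshold=0.8):
    """Decision rule described above: the person in the image data is
    judged to be the target person when the matching degree reaches the
    preset threshold (value configurable; 0.8 is an assumed default)."""
    return matching_degree >= threshold
```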
  • if the person in the image data is determined to be the target person, the mobile terminal reports to the server the first feature information of the person in the image data and the first location information of the person in the image data.
  • the mobile terminal may further send a request to the server to obtain the second feature information, and receive the second feature information returned by the server according to the request. The mobile terminal can also directly receive the second feature information pushed by the server.
  • the method further includes: displaying the determination result and/or the matching result to the user.
  • the mobile terminal acquires the first feature information of the person in the image data, matches the first feature information with the second feature information of the target person obtained in advance, and determines, according to the matching result, whether the person in the image data is the target person. This solves the problems of the existing tracing methods and improves the accuracy and effectiveness of tracing.
  • depending on whether the mobile terminal supports geographic positioning, two cases are distinguished: "with geographic location matching" and "without geographic location matching".
  • in this embodiment, a mobile terminal that supports geographic positioning is taken as an example for description.
  • the mobile terminal actively (periodically or manually) acquires data from the server, or passively receives the information pushed by the server, and adds it to the local tracing database; each piece of data can include the photo of the target person and the geographic locations where the person may appear.
  • FIG. 4 is a flowchart of a method for determining a target person according to Embodiment 1 of the present invention. As shown in FIG. 4, the method includes steps S402 to S420. Steps S402 to S404: after the mobile terminal takes a picture, the photo is stored.
  • Step S406 The mobile terminal automatically analyzes the photo to perform face recognition.
  • Step S408 The mobile terminal determines whether there is a face image in the photo. If yes, step S410 is performed, and if no, the process ends.
  • Step S410 The mobile terminal determines the current location by using a geographic location positioning function.
  • Step S412 The mobile terminal searches the tracing database by geographic location.
  • Step S414 The mobile terminal determines whether a matching data set is obtained, and if yes, performs step S416, and if not, ends.
  • Step S416 The mobile terminal matches the recognized face against the matching data set and determines the similarity.
  • Step S418 The mobile terminal determines whether the similarity meets the preset condition, and if yes, performs step S420, and if not, ends.
  • Step S420 The mobile terminal reports the matching result to the server.
  • the information reported by the mobile terminal may include: the photo, the geographic location at the time, and matching information such as the mobile terminal's phone number.
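Putting the steps of Embodiment 1 together, the flow of FIG. 4 can be sketched as a single pipeline. Every helper passed in (`detect_faces`, `locate_matches`, `face_similarity`, `report`) is a hypothetical stand-in for a device capability; only the control flow mirrors the steps above.

```python
def embodiment1_flow(photo, current_location, tracing_db,
                     detect_faces, locate_matches, face_similarity,
                     report, threshold=0.8):
    """Sketch of steps S406-S420: face recognition, location-based
    database search, face matching, threshold check, and reporting."""
    faces = detect_faces(photo)                                # S406-S408
    if not faces:
        return None
    candidates = locate_matches(current_location, tracing_db)  # S410-S414
    if not candidates:
        return None
    for face in faces:                                         # S416
        for entry in candidates:
            score = face_similarity(face, entry["face"])
            if score >= threshold:                             # S418
                report(entry, score, current_location)         # S420
                return entry["name"], score
    return None
```

The assumed 0.8 threshold and the `face`/`name` field names are illustrative only.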
  • in this embodiment, a mobile terminal that does not support geographic positioning is taken as an example.
  • the mobile terminal actively (periodically or manually) obtains data from the server, or passively receives the data pushed by the server, and adds it to the local tracing database; each piece of data can include the photo of the target person and the geographic locations where the person may appear.
  • FIG. 5 is a flowchart of a method for determining a target person according to Embodiment 2 of the present invention. As shown in FIG. 5, the method includes steps S502 to S514. Steps S502 to S504: after the mobile terminal takes a picture, the photo is stored.
  • Step S506 The mobile terminal automatically analyzes the photo to perform face recognition.
  • Step S508 the mobile terminal determines whether there is a face image in the photo, and if yes, performs step S510, and if not, ends.
  • Step S510 The mobile terminal matches the recognized face against the data set in the database and determines the similarity.
  • Step S512 The mobile terminal determines whether the similarity meets the preset condition, and if yes, performs step S514, and if not, ends.
  • Step S514 The mobile terminal reports the matching result to the server.
  • the information reported by the mobile terminal can include: the photo, the current geographic location, and matching information such as the mobile terminal's phone number.
  • the basic information of the elderly person (including name, photo, location and time of loss, family contact, etc.) is entered into the tracing database on the server. Since photos usually only need to reflect basic facial features, high-resolution photos are not required, so a single data item is not too large. The database may even store, instead of pictures or photos, other descriptions of facial information that support pattern-matching operations.
  • a mobile phone with the target person determination device installed contains a local tracing database.
  • This database can be set by the user to focus on a specific range of tracing information. For example, focus only on the lost information of the city, or focus on the tracing information released in the past week, or focus only on wanted information with high rewards. This can reduce the data footprint of the local tracing database and improve the speed of local matching. Specific users, such as public security personnel, can set up to focus on all tracing information.
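The user-configurable focus described above amounts to filtering the synchronized entries before they enter the local database. The field names (`city`, `released`, `reward`) are assumptions for illustration.

```python
def filter_tracing_info(entries, city=None, since=None, min_reward=None):
    """Keep only tracing entries matching the user's focus: a given city,
    entries released at or after `since`, or rewards above a minimum.
    Filters left as None are not applied (no focus on that attribute)."""
    out = []
    for e in entries:
        if city is not None and e.get("city") != city:
            continue
        if since is not None and e.get("released") < since:
            continue
        if min_reward is not None and e.get("reward", 0) < min_reward:
            continue
        out.append(e)
    return out
```

A user such as public security personnel would pass no filters and keep all entries.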
  • the mobile phone synchronizes with the server database automatically, or the user synchronizes it manually, thereby obtaining the missing elderly person's information.
  • a mobile phone user who has installed the target person determination device takes photos at stations, scenic spots, shopping malls, and so on. After the user takes a photo, the phone can automatically invoke the determination device in the background and, without affecting the user's normal use of the phone's functions, perform face recognition, geographic location acquisition, information matching, and judgment.
  • if a match is found, the mobile phone user can be explicitly prompted, so that the user can immediately approach the elderly person to confirm and report to the police or family members, effectively completing the search. Alternatively, instead of an explicit prompt, the "suspicious object found" judgment and related information (such as the geographic location, the original photo, and the phone number of the submitting terminal) can be reported to the server and confirmed by the police or family members.
  • once the tracing task is completed, the data is deleted from the server's tracing database, and the deletion is synchronized, actively or passively, to the local tracing database on the mobile phone to prevent false positives.
  • the first feature information of the person in the image data is acquired, the first feature information is matched with the second feature information of the target person obtained in advance, and whether the person in the image data is the target person is determined according to the matching result. This solves the problems of the existing tracing methods and improves the accuracy and effectiveness of tracing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Telephonic Communication Services (AREA)
PCT/CN2013/078185 2012-12-19 2013-06-27 Procédé, dispositif et terminal mobile pour déterminer une personne cible Ceased WO2013182101A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210554578.2 2012-12-19
CN2012105545782A CN103051705A (zh) 2012-12-19 2012-12-19 目标人的确定方法、装置及移动终端

Publications (1)

Publication Number Publication Date
WO2013182101A1 true WO2013182101A1 (fr) 2013-12-12

Family

ID=48064199

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/078185 Ceased WO2013182101A1 (fr) 2012-12-19 2013-06-27 Procédé, dispositif et terminal mobile pour déterminer une personne cible

Country Status (2)

Country Link
CN (1) CN103051705A (fr)
WO (1) WO2013182101A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10181168B2 (en) 2014-03-31 2019-01-15 Hitachi Kokusai Electric, Inc. Personal safety verification system and similarity search method for data encrypted for confidentiality
CN111723618A (zh) * 2019-03-21 2020-09-29 浙江莲荷科技有限公司 信息处理方法及装置

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103051705A (zh) * 2012-12-19 2013-04-17 中兴通讯股份有限公司 目标人的确定方法、装置及移动终端
CN104580121B (zh) * 2013-10-28 2019-03-15 腾讯科技(深圳)有限公司 寻人/人员信息匹配推送的方法、系统、客户端和服务器
CN103632141A (zh) * 2013-11-28 2014-03-12 小米科技有限责任公司 一种识别人物的方法、装置及终端设备
CN103744895A (zh) * 2013-12-24 2014-04-23 深圳先进技术研究院 一种居民身份信息的获取方法及装置
CN103945001A (zh) * 2014-05-05 2014-07-23 百度在线网络技术(北京)有限公司 一种图片分享方法及装置
CN104901867B (zh) * 2015-04-30 2017-11-14 广东欧珀移动通信有限公司 消息交互方法和相关装置及通信系统
CN105222774B (zh) * 2015-10-22 2019-04-16 Oppo广东移动通信有限公司 一种室内定位方法及用户终端
CN106897726A (zh) * 2015-12-21 2017-06-27 北京奇虎科技有限公司 失踪人口的寻找方法和装置
CN106960172A (zh) * 2016-01-08 2017-07-18 中兴通讯股份有限公司 人员识别处理方法、装置及系统
CN107221151A (zh) * 2016-03-21 2017-09-29 滴滴(中国)科技有限公司 基于图像识别的接单司机识别乘客的方法及装置
CN105869015A (zh) * 2016-03-28 2016-08-17 联想(北京)有限公司 一种信息处理方法及系统
CN107295294A (zh) * 2016-03-30 2017-10-24 杭州海康威视数字技术股份有限公司 一种智能寻人方法、装置及系统
CN106203514B (zh) * 2016-07-12 2019-02-12 腾讯科技(深圳)有限公司 图像识别回调通知的方法和装置
CN106375380A (zh) * 2016-08-27 2017-02-01 蔡璟 一种基于走失信息搜索走失物的智能大数据管理系统
CN106681000B (zh) * 2016-11-22 2020-09-11 宇龙计算机通信科技(深圳)有限公司 一种增强现实注册装置及其方法
CN106528864A (zh) * 2016-11-30 2017-03-22 天脉聚源(北京)科技有限公司 一种智能运输的方法及装置
WO2018119599A1 (fr) * 2016-12-26 2018-07-05 深圳前海达闼云端智能科技有限公司 Procédé et dispositif de recherche de personne et système de communication
CN107172198A (zh) * 2017-06-27 2017-09-15 联想(北京)有限公司 一种信息处理方法、装置及系统
WO2019095221A1 (fr) * 2017-11-16 2019-05-23 深圳前海达闼云端智能科技有限公司 Procédé de recherche de personne, appareil, terminal et serveur en nuage
CN108038468A (zh) * 2017-12-26 2018-05-15 北斗七星(重庆)物联网技术有限公司 一种基于人脸识别的安防终端
CN108039007A (zh) * 2017-12-26 2018-05-15 北斗七星(重庆)物联网技术有限公司 一种安防方法及装置
CN110555876B (zh) * 2018-05-30 2022-05-03 百度在线网络技术(北京)有限公司 用于确定位置的方法和装置
CN110223493A (zh) * 2019-05-29 2019-09-10 李成 一种基于大数据的走失人员互助寻找系统及方法
CN113990104A (zh) * 2021-10-19 2022-01-28 马瑞利汽车零部件(芜湖)有限公司 车辆对外界的识别系统及方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101093542A (zh) * 2006-02-15 2007-12-26 索尼株式会社 查询系统、成像装置、查询装置、信息处理方法及其程序
CN101989300A (zh) * 2010-10-15 2011-03-23 江苏省莱科信息技术有限公司 失踪人员寻找系统和寻找失踪人员的实现方法
CN102521621A (zh) * 2011-12-16 2012-06-27 上海合合信息科技发展有限公司 基于图像匹配与地理位置信息来获取信息的方法及系统
CN103051705A (zh) * 2012-12-19 2013-04-17 中兴通讯股份有限公司 目标人的确定方法、装置及移动终端
CN103186590A (zh) * 2011-12-30 2013-07-03 牟颖 一种通过手机获取在逃通缉人员身份信息的方法

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002314984A (ja) * 2001-04-09 2002-10-25 Fuji Photo Film Co Ltd 監視カメラシステム


Also Published As

Publication number Publication date
CN103051705A (zh) 2013-04-17

Similar Documents

Publication Publication Date Title
WO2013182101A1 (fr) Procédé, dispositif et terminal mobile pour déterminer une personne cible
US10692505B2 (en) Personal assistant application
JP6531341B2 (ja) 情報処理装置、サーバ、記憶媒体、および制御方法
US8392957B2 (en) Automatic content tagging, such as tagging digital images via a wireless cellular network using metadata and facial recognition
KR101636488B1 (ko) 얼굴 인식을 이용하여 애드 혹 네트워크를 설정하는 방법
KR101759454B1 (ko) 사진을 공유하는 방법, 장치, 프로그램 및 기록매체
CN109639635B (zh) Cdn代理拉流方法、服务器、cdn及客户端
CN110313190B (zh) 控制装置和方法
TW201104587A (en) System and method for automatic tagging of a digital image
WO2017117879A1 (fr) Procédé, appareil et système de traitement d'identification personnelle
TW201516939A (zh) 查詢使用者標識的方法及裝置、獲取使用者標識的方法及裝置與即時通訊中添加好友的方法及裝置
JP5947196B2 (ja) 撮影サービスシステム及び撮影制御方法
GB2515563A (en) Media sharing
JP2012247841A (ja) 近隣人物特定装置、近隣人物特定方法、近隣人物特定プログラム及び近隣人物特定システム
CN105956091B (zh) 扩展信息获取方法及装置
CN114009003A (zh) 图像采集方法、装置、设备及存储介质
CN109413760B (zh) 一种无线连接方法及设备
JP2013219666A (ja) 情報共有システム、照合装置、端末装置、情報共有方法及び情報共有プログラム
CN110928425A (zh) 信息监控方法及装置
JP2015233204A (ja) 画像記録装置及び画像記録方法
KR101729206B1 (ko) 이미지 공유 시스템 및 방법
JP2009075916A (ja) 人物捜索システム及び人物捜索方法
JP2025100673A (ja) システム、サーバ装置、サーバ装置の制御方法及びプログラム
TW201138465A (en) Image capturing device and method for dynamic tracing a specified object using image capturing devices
JP2020009087A (ja) 情報収集サーバ、情報収集システム、及び情報収集方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13801123

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13801123

Country of ref document: EP

Kind code of ref document: A1