
WO2018113433A1 - Virtual reality feature point screening and spatial positioning method - Google Patents

Virtual reality feature point screening and spatial positioning method

Info

Publication number
WO2018113433A1
WO2018113433A1 (PCT/CN2017/109794)
Authority
WO
WIPO (PCT)
Prior art keywords
image
light
infrared
processing unit
virtual reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/109794
Other languages
English (en)
Chinese (zh)
Inventor
李宗乘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
VR Technology Holdings Ltd
Original Assignee
VR Technology Holdings Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by VR Technology Holdings Ltd filed Critical VR Technology Holdings Ltd
Publication of WO2018113433A1 publication Critical patent/WO2018113433A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133 Distances to prototypes

Definitions

  • The present invention relates to the field of virtual reality, and more particularly to a virtual reality feature point screening spatial positioning method.
  • Spatial positioning generally uses optical or ultrasonic means for positioning and measurement, and a model is then used to derive the spatial position of the object to be measured.
  • A typical virtual reality spatial positioning system uses infrared point light sources and a light-sensing camera to determine the spatial position of an object.
  • The infrared point light sources are mounted on the front of the near-eye display device.
  • The light-sensing camera captures the positions of the infrared points, from which the user's physical coordinates are derived. If the correspondence between at least three light sources and their projections is known, the PnP algorithm can be called to obtain the spatial position of the helmet.
  • The key to this process is determining the source ID (identity) of each projection.
  • In current virtual reality spatial positioning, identifying the ID of the light source corresponding to each projection takes too long at certain distances and orientations, and the image recognition is inaccurate, which reduces both the accuracy and the efficiency of positioning.
  • the present invention provides a virtual reality feature point screening spatial positioning method that can improve positioning accuracy and efficiency.
  • A virtual reality feature point screening spatial positioning method includes the following steps (a code sketch of the loop follows the steps):
  • S1: with all infrared point light sources turned on, the processing unit controls the infrared camera to capture an image of the virtual reality helmet and calculates the coordinates of the light spot formed by each infrared point light source;
  • S2: the processing unit performs ID identification on each light spot in the captured image to find the ID corresponding to every light spot;
  • S3: the processing unit keeps at least four of the infrared point light sources corresponding to the identified IDs in a lit state and turns off the remaining infrared point light sources; the processing unit then controls the infrared camera to capture an image of the virtual reality helmet and computes the position using the PnP algorithm;
  • S4: when the number of spots in the captured image does not satisfy the number required by the PnP algorithm, steps S1 to S3 are re-executed.
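For illustration, here is a minimal Python sketch of this S1-S4 loop. The camera, firmware, and recognizer interfaces (capture, set_sources, detect_spots, identify_ids, select_sources) are hypothetical stand-ins for the components the patent describes; only cv2.solvePnP is an actual library call.

```python
import cv2
import numpy as np

MIN_SPOTS = 4  # fewest 3D-2D correspondences handed to the PnP solver

def positioning_loop(camera, firmware, recognizer, model_points,
                     camera_matrix, dist_coeffs):
    """Yields (rvec, tvec) helmet poses. `camera`, `firmware`, and
    `recognizer` are hypothetical interface objects; `model_points`
    maps source ID -> 3D position in the helmet frame."""
    while True:
        # S1: all sources on, capture a frame, locate the spot centroids
        firmware.set_sources(all_on=True)
        spots = recognizer.detect_spots(camera.capture())
        # S2: recover the source ID of every spot
        ids = recognizer.identify_ids(spots)
        # S3: keep at least four sources lit, turn off the rest, recapture
        keep = recognizer.select_sources(spots, ids)
        firmware.set_sources(on_ids=keep)
        spots = recognizer.detect_spots(camera.capture())
        # S4: too few spots survived -> redo S1 to S3
        if len(spots) != len(keep) or len(spots) < MIN_SPOTS:
            continue
        # assume spot order still matches `keep` (tracking, sketched later)
        object_pts = np.float32([model_points[i] for i in keep])
        image_pts = np.float32(spots)
        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts,
                                      camera_matrix, dist_coeffs)
        if ok:
            yield rvec, tvec
```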
  • The captured image is rectangular, and its long side has length d. The processing unit calculates the distance between every pair of light spots and selects the maximum distance d'. When d' > d/2, the processing unit finds the light spot closest to the center of the captured image, keeps the infrared point light source corresponding to that spot's ID, together with the three infrared point light sources closest to that source, in a lit state, and turns off the other infrared point light sources at the same time.
  • When instead d' ≤ d/2, the processing unit finds at least four infrared point light sources lying on the outside of the relative positions of the light sources corresponding to the light spots, keeps them lit, and turns off the other infrared point light sources.
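A small sketch of the case split above, assuming spots are given as (x, y) pixel coordinates; the selection performed in each branch is sketched later in the detailed description.

```python
import numpy as np

def spots_are_spread_out(spots, d):
    """Compute the maximum pairwise spot distance d' and compare it
    with half the image's long side d, as in the two cases above."""
    pts = np.asarray(spots, dtype=np.float32)
    diffs = pts[:, None, :] - pts[None, :, :]        # all pairwise vectors
    d_prime = np.sqrt((diffs ** 2).sum(axis=-1)).max()
    return d_prime > d / 2.0
```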
  • The processing unit combines the known historical information of the previous frame and applies a slight translation to the light spots of the previous frame image so that they correspond to the light spots of the current frame image; from this correspondence and the previous frame's historical information it then determines the ID of each matched light spot in the current frame image.
  • The present invention increases positioning efficiency by turning off the infrared point light sources that complicate the calculation, using the relative positions of the infrared point light sources on the captured image to screen the sources that need to be turned off.
  • The screening method of comparing the maximum distance between the light spots with the long-side length of the captured image, and turning off the corresponding infrared point light sources accordingly, is simple, practical, and highly operable.
  • Illuminating the infrared point light sources corresponding to the four centrally selected light spots supports calculation by the PnP algorithm and also ensures that the light spots used for positioning do not quickly move out of the captured image, preventing time-consuming repeated ID recognition.
  • Illuminating the infrared point light sources corresponding to at least four light spots selected from the outside likewise supports calculation by the PnP algorithm, while ensuring that the distances between the light spots are large enough that pixel quantization and similar effects do not cause large errors. Slightly translating the previous frame's light spots to establish correspondence with the current spots avoids repeated ID recognition and saves a lot of time.
  • FIG. 1 is a schematic diagram showing the principle of a virtual reality feature point screening spatial positioning method according to the present invention
  • FIG. 2 is a schematic diagram of an infrared point source distribution of a virtual reality feature point screening spatial positioning method according to the present invention
  • FIG. 3 shows the first example of an image taken by the infrared camera;
  • FIG. 4 shows the first example of the image presented after some infrared point light sources are turned off;
  • FIG. 5 shows the second example of an image taken by the infrared camera;
  • FIG. 6 shows the second example of the image presented after some infrared point light sources are turned off.
  • the present invention provides a virtual reality feature point screening spatial positioning method that can improve positioning accuracy and efficiency.
  • The virtual reality feature point screening spatial positioning method uses a virtual reality helmet 10, an infrared camera 20 and a processing unit 30; the infrared camera 20 is electrically connected to the processing unit 30.
  • the virtual reality helmet 10 includes a front panel 11, and a plurality of infrared point light sources 13 are distributed on the front panel 11 of the virtual reality helmet 10 and the four side panels of the upper, lower, left, and right sides.
  • The number of infrared point light sources 13 must be at least the minimum number required for the PnP algorithm to operate.
  • the shape of the infrared point light source 13 is not particularly limited.
  • As an example, we take the number of infrared point light sources 13 on the front panel 11 to be seven, the seven sources forming an approximately "W"-shaped pattern.
  • A plurality of infrared point light sources 13 can be illuminated or turned off as needed through the firmware interface of the virtual reality helmet 10.
  • The infrared point light sources 13 on the virtual reality helmet 10 form light spots on the image captured by the infrared camera 20. Due to the band-pass characteristic of the infrared camera, only the infrared point light sources 13 form spot projections on the image; the remaining portions form a uniform background.
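As an illustration of how spot coordinates might be extracted from such a band-pass image, the following sketch thresholds the frame and takes connected-component centroids; the threshold and minimum-area values are assumptions, not values from the patent.

```python
import cv2

def detect_spots(frame, threshold=200, min_area=2):
    """Spot extraction relying on the band-pass property: sources appear
    as bright blobs on a uniform dark background. Expects an 8-bit frame."""
    if frame.ndim == 3:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(frame, threshold, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # label 0 is the background; drop tiny components (sensor noise)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```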
  • The infrared point light sources 13 on the virtual reality helmet 10 thus form spots of light on the image; see FIG. 3 and FIG. 4.
  • FIG. 3 shows an image 41 of the infrared point light sources 13 captured by the infrared camera 20.
  • The captured image 41 is rectangular, and the length of its longer side is d.
  • The processing unit 30 controls the infrared camera 20 to take an image of the virtual reality helmet 10, producing seven light spots on the image 41.
  • The processing unit 30 calculates the coordinates of each light spot from its position on the captured image 41 and measures the distance between every pair of light spots, from which the maximum distance d' is selected. When d' > d/2, the light spots span a large portion of the image 41.
  • The processing unit 30 first performs ID identification on each spot in the image 41 to find the ID corresponding to every spot. It then finds the light spot closest to the center of the image 41, keeps the infrared point light source 13 corresponding to that spot's ID, together with the three infrared point light sources 13 closest to that source, in a lit state, and turns off the other infrared point light sources 13.
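A sketch of this selection, assuming the helmet's source layout is known as a mapping from source ID to 3D position (a hypothetical model_points structure):

```python
import numpy as np

def keep_center_and_neighbors(spots, ids, image_shape, model_points):
    """Sketch of the d' > d/2 branch: pick the spot nearest the image
    center, then keep that source plus the three sources physically
    closest to it on the helmet."""
    h, w = image_shape[:2]
    pts = np.asarray(spots, dtype=np.float32)
    center = np.array([w / 2.0, h / 2.0], dtype=np.float32)
    anchor_id = ids[int(np.argmin(np.linalg.norm(pts - center, axis=1)))]
    anchor_pos = np.asarray(model_points[anchor_id])
    # rank the remaining sources by physical distance to the anchor source
    neighbors = sorted((sid for sid in model_points if sid != anchor_id),
                       key=lambda sid: float(np.linalg.norm(
                           np.asarray(model_points[sid]) - anchor_pos)))
    return [anchor_id] + neighbors[:3]   # four sources stay lit
```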
  • the processing unit 30 can track each light spot and calibrate the corresponding ID.
  • The method is as follows: in spatial positioning, the sampling interval of each frame is short enough, generally 30 ms, so the position difference between each light spot in the previous frame and the corresponding light spot in the current frame is small.
  • The processing unit 30 combines the known historical information of the previous frame and applies a slight translation to the light spots of the previous frame image so that they correspond to the light spots of the current frame image; from this correspondence and the previous frame's historical information it determines the ID of each matched light spot in the current frame image.
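A minimal sketch of this ID propagation by nearest-neighbour matching; the max_shift tolerance is an assumed parameter, not a value given in the patent:

```python
import numpy as np

def propagate_ids(prev_spots, prev_ids, curr_spots, max_shift=20.0):
    """With ~30 ms between frames each spot moves only slightly, so
    matching each current spot to the nearest previous spot within a
    small radius carries the previous frame's IDs forward."""
    prev = np.asarray(prev_spots, dtype=np.float32)
    curr_ids = {}
    for j, spot in enumerate(np.asarray(curr_spots, dtype=np.float32)):
        dists = np.linalg.norm(prev - spot, axis=1)
        i = int(np.argmin(dists))
        if dists[i] <= max_shift:
            curr_ids[j] = prev_ids[i]   # inherit the matched spot's ID
    return curr_ids  # unmatched spots fall back to full ID recognition
```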
  • the processing unit 30 directly calls the PnP algorithm to obtain the spatial positioning position of the virtual reality helmet 10.
  • When movement of the virtual reality helmet 10 causes the number of spots in the image 41 to fall below the number required by the PnP algorithm, the above method is re-executed to select new infrared point light sources 13 to be lit.
  • Referring to FIG. 5, there are seven light spots on the captured image 41.
  • The processing unit 30 calculates the coordinates of each light spot from its position on the captured image 41 and measures the distance between every pair of light spots, from which the maximum distance d' is selected.
  • When d' ≤ d/2, the light spots occupy only a small range of the image 41, yet keeping at least four of them still meets the needs of the PnP algorithm.
  • The processing unit 30 first performs ID identification on each spot in the image to find the ID corresponding to every spot, and then determines the relative positions of the infrared point light sources 13 corresponding to those IDs. It keeps at least four infrared point light sources 13 lying on the outside of these positions in a lit state and turns off the other infrared point light sources 13. This ensures that the light spots on the image 41 are not so dense as to reduce measurement accuracy.
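A sketch of this outer-source selection, using distance from the spot cluster's centroid as the "outside" criterion (one plausible reading of the relative-position rule):

```python
import numpy as np

def keep_outer_sources(spots, ids, k=4):
    """Sketch of the d' <= d/2 branch: keep at least k sources whose
    spots lie outermost in the cluster, so the retained spots are as
    far apart as the concentrated layout allows."""
    pts = np.asarray(spots, dtype=np.float32)
    centroid = pts.mean(axis=0)
    order = np.argsort(-np.linalg.norm(pts - centroid, axis=1))  # far first
    return [ids[i] for i in order[:k]]
```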
  • the processing unit 30 directly calls the PnP algorithm to obtain the spatial positioning position of the virtual reality helmet 10.
  • If the number of spots again fails to satisfy the PnP algorithm, the above method is re-executed to select the new infrared point light sources 13 that need to be illuminated.
  • The processing unit 30 calls the PnP algorithm to obtain the spatial position of the helmet. The PnP algorithm belongs to the prior art and is not described again in the present invention.
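Since the description defers to the prior-art PnP solver, a minimal OpenCV call shows this final step; all numeric values are placeholders, not data from the patent:

```python
import cv2
import numpy as np

# Four known source positions on the helmet (helmet frame, e.g. cm) and
# their detected spot centroids in the image (pixels). Placeholder values.
object_points = np.float32([[0, 0, 0], [6, 0, 0], [6, 6, 0], [0, 6, 0]])
image_points = np.float32([[320, 240], [400, 238], [402, 312], [318, 310]])
camera_matrix = np.float32([[800, 0, 320],
                            [0, 800, 240],
                            [0, 0, 1]])
dist_coeffs = np.zeros(5)  # assume an undistorted camera

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix; tvec is translation
    print("rotation:\n", R, "\ntranslation:", tvec.ravel())
```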
  • The present invention increases positioning efficiency by turning off the infrared point light sources 13 that complicate the calculation, using the relative positions of the infrared point light sources 13 on the captured image 41 to screen the sources that need to be turned off.
  • The screening method of comparing the maximum distance between the light spots with the long-side length of the image 41, and turning off infrared point light sources 13 accordingly, is simple, practical, and highly operable.
  • Illuminating the infrared point light sources 13 corresponding to the four centrally selected light spots supports calculation by the PnP algorithm and also ensures that the light spots used for positioning do not quickly move out of the image 41, preventing time-consuming repeated ID recognition.
  • Illuminating the infrared point light sources 13 corresponding to at least four light spots selected from the outside likewise supports calculation by the PnP algorithm, while ensuring that the distances between the light spots are large enough that pixel quantization and similar effects do not cause large errors. Slightly translating the previous frame's light spots to establish correspondence with the current spots avoids repeated ID recognition and saves a lot of time.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a virtual reality feature point screening and spatial positioning method, comprising the following steps: S1: with all infrared point light sources in an on state, a processing unit controls an infrared camera to capture an image of a virtual reality helmet and calculates the coordinates of the light spot formed by each infrared point light source in the image; S2: the processing unit performs ID identification on each light spot in the captured image and finds the IDs corresponding to all the light spots; S3: the processing unit controls at least four infrared point light sources corresponding to the IDs to be in a lit state and turns off the other infrared point light sources, and the processing unit controls the infrared camera to capture an image of the virtual reality helmet and uses a perspective-n-point (PnP) algorithm to perform positioning from the image; and S4: when the number of light spots in the captured image does not satisfy the number required by the PnP algorithm, steps S1 to S3 are executed again. Compared with the existing technology, the method turns off infrared point light sources that complicate the calculation, thereby increasing positioning efficiency.
PCT/CN2017/109794 2016-12-22 2017-11-07 Virtual reality feature point screening and spatial positioning method Ceased WO2018113433A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611199871.6 2016-12-22
CN201611199871.6A CN106599929B (zh) 2016-12-22 2016-12-22 Virtual reality feature point screening spatial positioning method

Publications (1)

Publication Number Publication Date
WO2018113433A1 (fr) 2018-06-28

Family

ID=58601028

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/109794 Ceased WO2018113433A1 (fr) 2016-12-22 2017-11-07 Virtual reality feature point screening and spatial positioning method

Country Status (2)

Country Link
CN (1) CN106599929B (fr)
WO (1) WO2018113433A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113739803A (zh) * 2021-08-30 2021-12-03 中国电子科技集团公司第五十四研究所 Indoor and underground space positioning method based on infrared reference points

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599929B (zh) 2016-12-22 2021-03-19 深圳市虚拟现实技术有限公司 Virtual reality feature point screening spatial positioning method
CN107219963A (zh) * 2017-07-04 2017-09-29 深圳市虚拟现实科技有限公司 Virtual reality handle graphic spatial positioning method and system
CN107562189B (zh) * 2017-07-21 2020-12-11 广州励丰文化科技股份有限公司 Binocular camera-based spatial positioning method and service device
CN110555879B (zh) 2018-05-31 2023-09-08 京东方科技集团股份有限公司 Spatial positioning method, apparatus, system and computer-readable medium
US12062189B2 (en) * 2020-08-25 2024-08-13 Htc Corporation Object tracking method and object tracking device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016132371A1 (fr) * 2015-02-22 2016-08-25 Technion Research & Development Foundation Limited Gesture recognition using multi-sensory data
CN106019265A (zh) * 2016-05-27 2016-10-12 北京小鸟看看科技有限公司 Multi-target positioning method and system
CN106152937A (zh) * 2015-03-31 2016-11-23 深圳超多维光电子有限公司 Spatial positioning apparatus, system and method
CN106599929A (zh) * 2016-12-22 2017-04-26 深圳市虚拟现实技术有限公司 Virtual reality feature point screening spatial positioning method
CN106599930A (zh) * 2016-12-22 2017-04-26 深圳市虚拟现实技术有限公司 Virtual reality spatial positioning feature point screening method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016132371A1 (fr) * 2015-02-22 2016-08-25 Technion Research & Development Foundation Limited Gesture recognition using multi-sensory data
CN106152937A (zh) * 2015-03-31 2016-11-23 深圳超多维光电子有限公司 Spatial positioning apparatus, system and method
CN106019265A (zh) * 2016-05-27 2016-10-12 北京小鸟看看科技有限公司 Multi-target positioning method and system
CN106599929A (zh) * 2016-12-22 2017-04-26 深圳市虚拟现实技术有限公司 Virtual reality feature point screening spatial positioning method
CN106599930A (zh) * 2016-12-22 2017-04-26 深圳市虚拟现实技术有限公司 Virtual reality spatial positioning feature point screening method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113739803A (zh) * 2021-08-30 2021-12-03 中国电子科技集团公司第五十四研究所 Indoor and underground space positioning method based on infrared reference points
CN113739803B (zh) * 2021-08-30 2023-11-21 中国电子科技集团公司第五十四研究所 Indoor and underground space positioning method based on infrared reference points

Also Published As

Publication number Publication date
CN106599929B (zh) 2021-03-19
CN106599929A (zh) 2017-04-26

Similar Documents

Publication Publication Date Title
WO2018113433A1 (fr) Virtual reality feature point screening and spatial positioning method
US9268412B2 (en) Input apparatus having an input recognition unit and input recognition method by using the same
TWI734024B (zh) 指向判斷系統以及指向判斷方法
CN103677274B (zh) 一种基于主动视觉的互动投影方法及系统
JP6075122B2 (ja) システム、画像投影装置、情報処理装置、情報処理方法およびプログラム
WO2018107923A1 (fr) Positioning feature point identification method for use in a virtual reality space
CN104717422B (zh) 显示设备以及显示方法
JP6104143B2 (ja) 機器制御システム、および、機器制御方法
JP6870474B2 (ja) 視線検出用コンピュータプログラム、視線検出装置及び視線検出方法
JP2016050972A (ja) 制御装置、制御方法、及びプログラム
CN115104134A (zh) 联合的红外及可见光视觉惯性对象跟踪
US20180300579A1 (en) Image processing apparatus, image processing method, and non-transitory computer-readable storage medium
CN104658462B (zh) 投影机以及投影机的控制方法
JP2016532217A (ja) グリントにより眼を検出する方法および装置
TWI526879B (zh) 互動系統、遙控器及其運作方法
JP2014086420A (ja) Ledライトの照明制御システム及びその方法
JP2016184362A (ja) 入力装置、入力操作検出方法及び入力操作検出用コンピュータプログラム
JP5336325B2 (ja) 画像処理方法
US9285927B2 (en) Exposure mechanism of optical touch system and optical touch system using the same
CN106599930B (zh) Virtual reality spatial positioning feature point screening method
CN104376323A (zh) 一种确定目标距离的方法及装置
JP2014098625A (ja) 計測装置、方法及びプログラム
WO2013104313A1 (fr) Procédé et système destinés à être utilisés pour détecter des informations de position tridimensionnelles d'un dispositif d'entrée
CN103186233B (zh) 眼神定位全景互动控制方法
Choi et al. Improving the usability of remote eye gaze tracking for human-device interaction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17883630

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 14.10.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17883630

Country of ref document: EP

Kind code of ref document: A1