WO2018036113A1 - Augmented reality method and system - Google Patents
Augmented reality method and system
- Publication number
- WO2018036113A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- display
- target object
- eyeball
- augmented reality
- real image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
Definitions
- the present invention relates to the field of virtual reality technologies, and in particular, to an augmented reality method and system.
- Augmented reality technology uses computer graphics and visualization techniques to generate virtual objects that do not exist in the real environment, accurately "embeds" those virtual objects into the real environment, and integrates the virtual objects with the real environment by means of a display device. By applying virtual information to the real world, it presents the user with a new environment of realistic sensory effect, thereby enhancing reality.
- an augmented reality system implementing this technology must analyze a large amount of positioning data and scene information to ensure that a computer-generated virtual object can be accurately positioned in the real scene. In the known augmented reality technology, the implementation process may therefore include: acquiring real scene information; analyzing the acquired real scene information and camera position information; generating a virtual object; drawing the virtual object on the visible plane according to the camera position information; and displaying the virtual object together with the real scene information.
- when the overall features of an image are used to match and recognize objects, the corresponding processing is computationally expensive.
- if the target captured by the camera appears too small in the image, too few features are detected in the target image to satisfy a reasonable number of feature matches; the target object then cannot be detected, and the virtual object cannot be superimposed onto the video.
- hardware devices that implement augmented reality are typically tablets with on-screen displays or similar handheld electronic devices.
- the requirements for a low-latency interactive experience and high-speed graphics rendering differ from those of general handheld electronic products, and therefore impose additional design requirements on the devices.
- a wearer's impulse of interest toward an object (a commodity, a building, etc.) in the external real scene may be short-lived; how to display, within that brief window of interest, a virtual information frame or virtual object in the head-mounted display superimposed on the position of the displayed real object is a technical problem the design needs to solve.
- the present invention provides an augmented reality method and system that reduce the computational load of the head-mounted device, better obtain and analyze information about the real-image object the user is attending to at the moment, and superimpose virtual information on real-world information on the display.
- the technical solution adopted by the present invention is: providing an augmented reality method, including the following steps,
- S1 divides the real image displayed in the display of the head mounted device into a plurality of display areas;
- S2 pre-tests and records the instantaneous rotation state of the eyeball on any display area on the display screen of the wearable device and the specific pixel display position in the area;
- S3 forms a mapping relationship between the instantaneous rotation state of the eyeball pre-tested and recorded, and any display area on the display screen and a specific pixel display position in the area;
- S4 determines the gaze area of the eyeball on the display screen according to the acquired instantaneous rotation state of the eyeball;
- S5 collects a real image displayed by the gaze area, and uploads it to a cloud database for processing, to obtain association information of each target object in the real image;
- S6 determines the location of each target object, and determines the display position of the associated information of each target object on the display screen according to the location information of each target object;
- S7 superimposes the association information of each target object onto the corresponding target object in the real image on the display screen.
- step S5 further includes:
- S51 detects the overall/local features of the acquired real image, and identifies each target object.
- step S51 further includes:
- S511 matches the overall/local features of the real image against the overall/local features of candidate target objects, retains the reasonable matches according to the geometric relationships among the overall/local feature positions, and identifies each target object according to those reasonable matches.
- the association information in step S5 includes product information directly correlated with the attributes of the target object.
- the present invention also provides an augmented reality system, including a head mounted device, further comprising:
- an infrared camera for performing real-time infrared photography of the eyeball of the wearer of the head mounted device and obtaining pictures of the instantaneous rotation state of the eyeball;
- an eyeball state determining unit configured to analyze a picture of the instantaneous eyeball rotation state and determine the gaze region of the eyeball;
- an eyeball recognition unit configured to determine the real image the wearer is attending to at the moment according to the gaze region of the eyeball, by consulting the key-value mapping table;
- a screen control unit configured to control display content on the display screen according to a realistic image that the wearer is currently paying attention to
- the key-value mapping table is a mapping-relationship table formed between the pre-tested and recorded instantaneous rotation states of the eyeball and the display areas on the display screen together with the specific pixel display positions within each area.
- the screen control unit is further configured to upload the real image that the wearer pays attention to to the cloud database for processing, and obtain association information of each target object in the real image.
- the screen control unit is further configured to determine a location of each target object, and determine a display position of the associated information of each target object on the display screen according to the location information of each target object.
- the screen control unit is further configured to superimpose and display the associated information of each target object on each of the target objects in the real image on the display screen.
- the augmented reality method and system provided by the present invention establish a mapping relationship between the instantaneous rotation state of the wearer's eyeball and the display areas on the display screen, so that the display area on which the wearer's attention is currently focused can be located accurately.
- the target object the user is attending to and its related information can then be searched for and responded to quickly and displayed on the display screen in combination with the real image, which reduces the searching and display of irrelevant information and enhances the interactive experience of effective information.
- because the head-mounted device processes a smaller amount of data, the configuration or the size of the device can be greatly improved.
- FIG. 1 is a flowchart of an augmented reality method according to an embodiment of the present invention.
- FIG. 2 is a flowchart of a sub-step of step S5 in the augmented reality method according to an embodiment of the present invention.
- FIG. 1 is a flowchart of an augmented reality method according to an embodiment of the present invention. As shown in FIG. 1, the embodiment provides an augmented reality method, including the following steps.
- S1 divides the real image displayed in the display of the head mounted device into a plurality of display areas.
- the head mounted device may be any one of an external headset, an integrated headset, and a mobile headset.
- the real image shown in the display may be presented directly by providing a transparent display on the head-mounted device, through which the external environment is viewed.
- alternatively, the external environment may be photographed by an external camera of the head-mounted device and the captured real image then shown on the display screen; in this way, real-time capture is achieved.
- the display area in this embodiment is divided into four areas (upper left, upper right, lower left, and lower right) according to the real image displayed by the display screen. It can be understood that the number and form of the display-area divisions of the present invention are not limited thereto. Regardless of how the head rotates, the real images presented in the field of view in this embodiment are displayed with zero delay.
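- as a minimal illustration of this four-quadrant division (the function and coordinate conventions are hypothetical, not part of the patent), a pixel coordinate can be assigned to a display area as follows:

```python
# Minimal sketch: assign a pixel coordinate to one of four display areas
# (upper-left, upper-right, lower-left, lower-right). Names are illustrative.

def display_area(x: int, y: int, width: int, height: int) -> str:
    """Return the quadrant of the display that pixel (x, y) falls into."""
    horizontal = "left" if x < width // 2 else "right"
    vertical = "upper" if y < height // 2 else "lower"
    return f"{vertical}-{horizontal}"

# Example: on a 1920x1080 display, pixel (1500, 200) is in the upper-right area.
print(display_area(1500, 200, 1920, 1080))  # -> "upper-right"
```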
- S2 pre-tests and records the instantaneous rotation state of the eyeball for every display area on the display screen of the wearable device and for the specific pixel display positions within each area.
- before the wearer formally uses the device, this embodiment accurately measures the field of view visible to the eyeball, recording the instantaneous rotation state of both eyes as they gaze at each display area on the display screen and at the specific pixel display positions within each area. This links the position or state of the wearer's eyeball rotation to the display areas on the screen with pixel-level accuracy, ensuring higher accuracy for later recognition.
- S3 forms a mapping relationship between the instantaneous rotation state of the eyeball that is previously tested and recorded, and any display area on the display screen and a specific pixel display position in the area.
- This step establishes a one-to-one relationship between the instantaneous rotation state of the wearer's eyeball and the display areas on the display screen, so that the mapping can be queried with a fast response during later detection or recognition, reducing latency and improving the user experience.
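- a hedged sketch of such a key-value mapping table built during calibration (the yaw/pitch representation of the rotation state and the sample values are assumptions for illustration):

```python
# Sketch of the calibration steps (S2/S3): each recorded eyeball rotation
# state (here simplified to a yaw/pitch pair in degrees, quantized) is
# mapped to a display area and a specific pixel position within it.

def quantize(yaw: float, pitch: float, step: float = 0.5) -> tuple:
    """Quantize a rotation state so it can serve as a dictionary key."""
    return (round(yaw / step) * step, round(pitch / step) * step)

key_value_map: dict = {}

def record_calibration_sample(yaw, pitch, area, pixel):
    """Store one pre-tested sample: rotation state -> (area, pixel)."""
    key_value_map[quantize(yaw, pitch)] = (area, pixel)

# Hypothetical calibration samples
record_calibration_sample(-12.0, 8.0, "upper-left", (240, 180))
record_calibration_sample(11.5, 7.5, "upper-right", (1680, 200))
```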
- S4 determines the gaze area of the gaze on the display screen according to the instantaneous rotation state of the acquired eyeball.
- an infrared camera is used to photograph the eyeball, from which the instantaneous rotation state of the eyeball is obtained. Through the mapping relationship formed in step S3, the specific display area the eye is watching on the display screen can be determined with a fast response.
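- step S4 then reduces to a table lookup; a sketch reusing the `quantize` helper and `key_value_map` above, falling back to the nearest recorded state when there is no exact match:

```python
def gaze_area(yaw: float, pitch: float):
    """Resolve the current rotation state to a display area and pixel.

    Falls back to the nearest recorded calibration key when the
    quantized state was not recorded exactly.
    """
    key = quantize(yaw, pitch)
    if key in key_value_map:
        return key_value_map[key]
    # Nearest-neighbour fallback over the recorded calibration keys.
    nearest = min(
        key_value_map,
        key=lambda k: (k[0] - key[0]) ** 2 + (k[1] - key[1]) ** 2,
    )
    return key_value_map[nearest]

area, pixel = gaze_area(11.7, 7.4)  # -> ("upper-right", (1680, 200))
```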
- S5 collects the real image displayed by the gaze area, and uploads it to the cloud database for processing, and obtains association information of each target object in the real image.
- the wearer's gaze area on the display screen corresponds to the real scene the wearer is paying attention to at present.
- the device parses and processes only the real scene the wearer is attending to at the moment; that is, only the real image presented in the gaze area is searched and recognized.
- the user's true focus of attention is thus served by information retrieval and highlighted display, meeting the user's needs while reducing the display of invalid information the user does not need. Compared with parsing and processing all the real images seen on the display screen, this greatly reduces the data processing load and response latency of the device and enhances the interactive experience of effective information.
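- as a hedged sketch of this selective processing (the array layout and quadrant naming are assumptions carried over from the earlier examples, not specified by the patent), the gaze area can be cropped from the displayed frame before anything is uploaded:

```python
import numpy as np

def crop_gaze_area(frame: np.ndarray, area: str) -> np.ndarray:
    """Crop the quadrant of the frame corresponding to the gaze area."""
    h, w = frame.shape[:2]
    rows = slice(0, h // 2) if area.startswith("upper") else slice(h // 2, h)
    cols = slice(0, w // 2) if area.endswith("left") else slice(w // 2, w)
    return frame[rows, cols]

# Only this crop, not the whole frame, would be sent to the cloud
# database for recognition, reducing upload size and processing load.
```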
- the cloud database has an image recognition function and an information search function: it can identify the target objects of interest to the user in the real image and search for their related information.
- the related information includes product information directly related to the attributes of the target object, such as product specifications, product function descriptions, company profiles, price information, and contact details.
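- such related information might be carried as a simple record; a hypothetical sketch (field names are illustrative, not defined by the patent):

```python
from dataclasses import dataclass

@dataclass
class AssociationInfo:
    """Illustrative record of product information tied to a target object."""
    specifications: str
    function_description: str
    company_profile: str
    price: str
    contact: str
```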
- step S5 includes
- S51 detects the overall/local features of the acquired real image, and identifies each target object.
- This step detects the overall/local features of the real image, matches them against feature maps in the cloud database, and thereby identifies and locates each target object.
- step S51 includes
- S511 matches the overall/local features of the real image against the overall/local features of candidate target objects; according to the geometric relationships among the overall/local feature positions, the reasonable matches are retained, and each target object is identified from those reasonable matches.
- the method of this step screens out the important objects and reduces feature matching against irrelevant objects in the real image, thereby reducing the amount of calculation and obtaining the matching result quickly.
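- a hedged sketch of feature matching with such a geometric-consistency check, using ORB features and a RANSAC homography (the patent names no specific feature algorithm; OpenCV and these parameters are assumptions):

```python
import cv2
import numpy as np

def match_target(scene_gray, template_gray, min_matches=10):
    """Match a candidate target template against the gaze-area image.

    Returns the homography if enough geometrically consistent matches
    survive, else None (target not detected).
    """
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(template_gray, None)
    kp2, des2 = orb.detectAndCompute(scene_gray, None)
    if des1 is None or des2 is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < min_matches:
        return None  # too few features, as the patent background warns

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC keeps only matches consistent with one geometric transform,
    # i.e. the "reasonable matches" retained by their relative positions.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None or int(mask.sum()) < min_matches:
        return None
    return H
```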
- S6 determines the location of each target object, and determines the display position of the association information of each target object on the display screen according to the location information of each target object.
- the target objects identified through the processing of the cloud database are highlighted on the display screen; at the same time, the device calculates the position of each target object on the screen and, according to that position information, determines where the associated information of each target object is displayed on the display.
- S7 superimposes the association information of each target object onto the corresponding target object in the real image on the display screen.
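- a minimal sketch of this superimposition step (bounding boxes and OpenCV drawing calls are illustrative assumptions; the patent does not prescribe how the overlay is rendered):

```python
import cv2

def overlay_info(frame, target_box, text):
    """Draw a target's associated information next to its screen position.

    target_box is (x, y, w, h) in display coordinates; text is a short
    line of the association information (both illustrative).
    """
    x, y, w, h = target_box
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # Place the label just above the target, clamped to the screen edge.
    cv2.putText(frame, text, (x, max(y - 10, 20)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame
```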
- Embodiments of the present invention further provide an augmented reality system, including a head mounted device, further comprising an infrared camera mounted on the head mounted device, an eyeball state determining unit, an eyeball identifying unit, and a screen control unit.
- The infrared camera is used to perform real-time infrared photography of the eyeball of the wearer of the head-mounted device and to obtain pictures of the instantaneous rotation state of the eyeball; the eyeball state determining unit is used to analyze a picture of the instantaneous eyeball rotation state and determine the gaze area of the eyeball;
- the eyeball recognition unit is configured to determine the real image the wearer is attending to at the moment according to the gaze area of the eyeball, by consulting the key-value mapping table; the key-value mapping table is a mapping-relationship table formed between the pre-tested and recorded instantaneous rotation states of the eyeball and the display areas on the display screen together with the specific pixel display positions within each area; the screen control unit is configured to control the display content on the display screen according to the real image the wearer is currently attending to.
- the screen control unit includes a network transmission module and an information matching module.
- the network transmission module is used to upload the real image the wearer is attending to to the cloud database and then to download the target objects and related information processed by the cloud database to the head mounted device;
- the information matching module is used to match the target objects against the real image on the display screen, determine the position of each target object, determine the display position of the related information of each target object on the display screen according to the position information of each target object, and finally superimpose the related information of each target object on the corresponding target object in the real image on the display screen.
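- read as software components, the four units could be composed as follows (class and method names are assumptions; the patent describes the units functionally, not as code):

```python
class AugmentedRealitySystem:
    """Illustrative composition of the four units described above."""

    def __init__(self, infrared_camera, eye_state_unit,
                 eye_recognition_unit, screen_control_unit):
        self.camera = infrared_camera                  # photographs the eyeball
        self.state_unit = eye_state_unit               # picture -> rotation state
        self.recognition_unit = eye_recognition_unit   # state -> gaze area
        self.screen_unit = screen_control_unit         # controls display content

    def update(self, frame):
        """One cycle: eye photo -> gaze area -> focused image -> display."""
        photo = self.camera.capture()
        state = self.state_unit.analyze(photo)
        gaze = self.recognition_unit.lookup(state)   # via the key-value map
        return self.screen_unit.render(frame, gaze)  # upload, match, overlay
```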
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention relates to an augmented reality method and system in the technical field of virtual reality. The augmented reality method comprises the following steps:
S1: dividing a real image into a plurality of display areas;
S2: pre-testing and recording each display area and the corresponding instantaneous eyeball rotation state;
S3: forming a mapping relationship between the instantaneous eyeball rotation state and the display area;
S4: determining a gaze region;
S5: acquiring real images to obtain the associated information of target objects;
S6: determining the display position on a display screen of the associated information of the target objects; and
S7: superimposing the associated information onto the target objects in the real image and displaying them. The method and system of the invention make it possible to locate accurately the display area containing the wearer's current focus of attention, and to respond quickly with the target object on which the user is focusing and its associated information, combining these with the real image for real-time display on a display screen; the searching and display of unrelated information are reduced, and the interactive experience of effective information is improved.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610708864.8A CN107765842A (zh) | 2016-08-23 | 2016-08-23 | 一种增强现实方法及系统 |
| CN201610708864.8 | 2016-08-23 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018036113A1 true WO2018036113A1 (fr) | 2018-03-01 |
Family
ID=61246375
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/073980 Ceased WO2018036113A1 (fr) | 2016-08-23 | 2017-02-17 | Procédé et système de réalité augmentée |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN107765842A (fr) |
| WO (1) | WO2018036113A1 (fr) |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108769517B (zh) * | 2018-05-29 | 2021-04-16 | 亮风台(上海)信息科技有限公司 | 一种基于增强现实进行远程辅助的方法与设备 |
| CN109298780A (zh) * | 2018-08-24 | 2019-02-01 | 百度在线网络技术(北京)有限公司 | 基于ar的信息处理方法、装置、ar设备及存储介质 |
| CN109697918B (zh) * | 2018-12-29 | 2021-04-27 | 深圳市掌网科技股份有限公司 | 一种基于增强现实的打击乐器体验系统 |
| CN109714583B (zh) * | 2019-01-22 | 2022-07-19 | 京东方科技集团股份有限公司 | 增强现实的显示方法及增强现实的显示系统 |
| CN111563432A (zh) * | 2020-04-27 | 2020-08-21 | 歌尔科技有限公司 | 一种显示方法及增强现实显示设备 |
| CN112633273A (zh) * | 2020-12-18 | 2021-04-09 | 上海影创信息科技有限公司 | 基于余光区域的用户偏好处理方法和系统 |
| CN114972692B (zh) * | 2022-05-12 | 2023-04-18 | 北京领为军融科技有限公司 | 基于ai识别和混合现实的目标定位方法 |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8408706B2 (en) * | 2010-12-13 | 2013-04-02 | Microsoft Corporation | 3D gaze tracker |
| US20160054795A1 (en) * | 2013-05-29 | 2016-02-25 | Mitsubishi Electric Corporation | Information display device |
- 2016-08-23 CN CN201610708864.8A patent/CN107765842A/zh active Pending
- 2017-02-17 WO PCT/CN2017/073980 patent/WO2018036113A1/fr not_active Ceased
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101067716A (zh) * | 2007-05-29 | 2007-11-07 | 南京航空航天大学 | 具有视线跟踪功能的增强现实自然交互式头盔 |
| CN103051942A (zh) * | 2011-10-14 | 2013-04-17 | 中国科学院计算技术研究所 | 基于遥控器的智能电视人机交互方法、装置和系统 |
| CN102981616A (zh) * | 2012-11-06 | 2013-03-20 | 中兴通讯股份有限公司 | 增强现实中对象的识别方法及系统和计算机 |
| CN104823152A (zh) * | 2012-12-19 | 2015-08-05 | 高通股份有限公司 | 使用视线追踪实现扩增实境 |
| US20150212576A1 (en) * | 2014-01-28 | 2015-07-30 | Anthony J. Ambrus | Radial selection by vestibulo-ocular reflex fixation |
| CN105323539A (zh) * | 2014-07-17 | 2016-02-10 | 原相科技股份有限公司 | 车用安全系统及其操作方法 |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11393198B1 (en) | 2020-06-02 | 2022-07-19 | State Farm Mutual Automobile Insurance Company | Interactive insurance inventory and claim generation |
| US11436828B1 (en) | 2020-06-02 | 2022-09-06 | State Farm Mutual Automobile Insurance Company | Insurance inventory and claim generation |
| US12142039B1 (en) | 2020-06-02 | 2024-11-12 | State Farm Mutual Automobile Insurance Company | Interactive insurance inventory and claim generation |
| US12423972B2 (en) | 2020-06-02 | 2025-09-23 | State Farm Mutual Automobile Insurance Company | Insurance inventory and claim generation |
| US12450662B1 (en) | 2020-06-02 | 2025-10-21 | State Farm Mutual Automobile Insurance Company | Insurance claim generation |
| US11861137B2 (en) | 2020-09-09 | 2024-01-02 | State Farm Mutual Automobile Insurance Company | Vehicular incident reenactment using three-dimensional (3D) representations |
| US12229383B2 (en) | 2020-09-09 | 2025-02-18 | State Farm Mutual Automobile Insurance Company | Vehicular incident reenactment using three-dimensional (3D) representations |
| CN116090478A (zh) * | 2022-12-15 | 2023-05-09 | 新线科技有限公司 | 基于无线耳机的翻译方法、耳机收纳装置及存储介质 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN107765842A (zh) | 2018-03-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018036113A1 (fr) | Procédé et système de réalité augmentée | |
| US10228763B2 (en) | Gaze direction mapping | |
| JP6747504B2 (ja) | 情報処理装置、情報処理方法、及びプログラム | |
| US10685496B2 (en) | Saving augmented realities | |
| US10331209B2 (en) | Gaze direction mapping | |
| CN103777757B (zh) | 一种结合显著性检测的在增强现实中放置虚拟对象的系统 | |
| US20170213085A1 (en) | See-through smart glasses and see-through method thereof | |
| RU2602386C1 (ru) | Способ визуализации объекта | |
| WO2017204581A1 (fr) | Système de réalité virtuelle utilisant la réalité mixte, et procédé de mise en œuvre associé | |
| CN113467619B (zh) | 画面显示方法、装置和存储介质及电子设备 | |
| WO2014128751A1 (fr) | Appareil, programme et procédé visiocasque | |
| WO2020215960A1 (fr) | Procédé et dispositif pour déterminer une zone de regard et dispositif portable | |
| Rocca et al. | Head pose estimation by perspective-n-point solution based on 2d markerless face tracking | |
| CN115797439A (zh) | 基于双目视觉的火焰空间定位系统及方法 | |
| US11842453B2 (en) | Information processing device, information processing method, and program | |
| US11269405B2 (en) | Gaze direction mapping | |
| WO2017147826A1 (fr) | Procédé de traitement d'image destiné à être utilisé dans un dispositif intelligent, et dispositif | |
| CN109934930B (zh) | 一种基于用户精准位置的现实增强方法和系统 | |
| WO2018170678A1 (fr) | Dispositif visiocasque et procédé de reconnaissance gestuelle associé | |
| CN115690363A (zh) | 虚拟物体显示方法、装置和头戴式显示装置 | |
| WO2025176072A1 (fr) | Système et procédé de test d'effet d'imagerie tridimensionnelle, dispositif et support de stockage | |
| CN120014110A (zh) | 图像生成方法、装置及电子设备 | |
| Ning et al. | Markerless client-server augmented reality system with natural features | |
| CN118585060A (zh) | 提升微光夜视镜下人眼感知能力的方法及装置 | |
| CN120928974A (zh) | 显示对象交互方法、显示设备及电子设备 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17842539; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17842539; Country of ref document: EP; Kind code of ref document: A1 |