WO2018036113A1 - Augmented reality method and system - Google Patents
- Publication number
- WO2018036113A1 (PCT/CN2017/073980)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- display
- target object
- eyeball
- augmented reality
- real image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
  - G06—COMPUTING OR CALCULATING; COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
        - G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
          - G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
          - G06F3/013—Eye tracking input arrangements
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T19/00—Manipulating 3D models or images for computer graphics
Definitions
- the present invention relates to the field of virtual reality technologies, and in particular, to an augmented reality method and system.
- Augmented reality technology uses computer graphics and visualization techniques to generate virtual objects that do not exist in the real environment, accurately "embeds" those virtual objects into the real environment, and merges the virtual and the real by means of a display device, applying virtual information to the real world and presenting the user with a new environment of realistic sensory effect that enhances reality.
- an augmented reality system implementing this technology must analyze a large amount of positioning data and scene information to ensure that a computer-generated virtual object can be accurately positioned in the real scene. In known augmented reality techniques, the implementation typically includes: acquiring real-scene information; analyzing the acquired real-scene information together with camera position information; generating a virtual object; drawing the virtual object on the visible plane according to the camera position; and displaying the virtual object together with the real-scene information.
- because the overall features of the image are used to match the recognized object, the corresponding processing is computationally expensive.
- if the target is far from the camera, the captured target image is too small; the detected image then contains too few features to reach the required number of feature matches, the target object cannot be detected, and the virtual object cannot be superimposed into the video.
- hardware devices that implement augmented reality are typically tablets with screen displays, or similar handheld electronic devices.
- for emerging head-mounted displays, however, the requirements for low-latency interaction and high-speed graphics rendering differ from those of general handheld electronics, placing additional design demands on the device.
- the wearer's spontaneous interest in an object (a commodity, a building, etc.) in the external real scene may be short-lived; how to superimpose a virtual information frame or virtual object at the displayed position of the real object within that brief window of interest is a technical problem the design must solve.
- the present invention provides an augmented reality method and system that reduce the computational load of the head-mounted device while ensuring that the real-image object the user is attending to at the moment is analyzed for information, with the resulting virtual information superimposed on the real-world image on the display.
- the technical solution adopted by the present invention is to provide an augmented reality method including the following steps:
- S1 divides the real image presented on the display of the head-mounted device into several display areas;
- S2 pre-tests and records the instantaneous eyeball rotation state as the wearer gazes at any display area on the screen of the head-mounted device and at specific pixel positions within that area;
- S3 forms a mapping relationship between the pre-tested, recorded instantaneous eyeball rotation states and the display areas and specific pixel positions on the screen;
- S4 determines the gaze area on the screen from the acquired instantaneous eyeball rotation state;
- S5 collects the real image presented in the gaze area and uploads it to a cloud database for processing, obtaining associated information for each target object in the real image;
- S6 determines the position of each target object and, from that position information, determines where each object's associated information is displayed on the screen;
- S7 superimposes, on the display screen, the associated information of each target object onto the corresponding target object in the real image.
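Steps S4-S7 can be sketched as a minimal gaze-driven loop. This is an illustrative sketch only: the function and field names below (`ar_pipeline`, `recognize`, `region`, `pos`) are hypothetical stand-ins rather than terms from the patent, and the cloud database of S5 is replaced by a plain callback.

```python
def ar_pipeline(objects, eye_state, gaze_map, recognize):
    """Sketch of steps S4-S7. 'gaze_map' stands for the pre-tested mapping
    of S2-S3 (eyeball state -> display area); 'recognize' stands in for the
    cloud database lookup of S5. All names are illustrative assumptions."""
    region = gaze_map[eye_state]                              # S4: gaze area
    focused = [o for o in objects if o["region"] == region]   # S5: only that area
    overlays = []
    for obj in focused:
        info = recognize(obj["label"])                        # S5: cloud lookup
        x, y = obj["pos"]
        overlays.append((obj["label"], info, (x + 8, y)))     # S6/S7: place beside object
    return overlays


gaze_map = {"state-a": "upper-left"}
objects = [
    {"label": "cup", "region": "upper-left", "pos": (20, 30)},
    {"label": "car", "region": "lower-right", "pos": (300, 400)},
]
out = ar_pipeline(objects, "state-a", gaze_map, lambda name: "info:" + name)
```

Note that only the object inside the gaze area is looked up at all; the object in the other quadrant never reaches the (stubbed) cloud, which is the load reduction the method claims.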
- step S5 further includes:
- S51 detects the overall/local features of the acquired real image, and identifies each target object.
- step S51 further includes:
- S511 matches the overall/local features of the real image against the overall/local features of candidate target objects, retains the reasonable matches according to the geometric relative relationships of the feature positions, and identifies each target object from those reasonable matches.
- the associated information in step S5 includes product information directly related to the attributes of the target object.
- the present invention also provides an augmented reality system, including a head mounted device, further comprising:
- an infrared camera for taking real-time infrared photographs of the eyeball of the wearer of the head-mounted device, obtaining pictures of the instantaneous eyeball rotation state;
- an eyeball state determining unit configured to analyze the pictures of the instantaneous eyeball rotation state and determine the gaze region of the eyeball;
- an eyeball recognition unit configured to determine, from the gaze region of the eyeball and a key-value mapping table, the real image the wearer is attending to at the moment;
- a screen control unit configured to control the content on the display screen according to the real image the wearer is currently attending to.
- the key-value mapping table is a table of mappings, formed in advance by testing and recording, between instantaneous eyeball rotation states and the display areas and specific pixel positions on the screen.
- the screen control unit is further configured to upload the real image the wearer is attending to into the cloud database for processing, obtaining the associated information of each target object in the real image.
- the screen control unit is further configured to determine a location of each target object, and determine a display position of the associated information of each target object on the display screen according to the location information of each target object.
- the screen control unit is further configured to superimpose and display the associated information of each target object on each of the target objects in the real image on the display screen.
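The units listed above can be sketched as one class wiring them together. All class and method names are illustrative assumptions, and the camera and cloud are stubbed out for demonstration.

```python
class ARSystem:
    """Sketch of the system's units: infrared camera, eyeball state
    determination, eyeball recognition (via the key-value mapping table),
    and screen control. Names are illustrative, not taken from the patent."""

    def __init__(self, camera, gaze_map, cloud):
        self.camera = camera      # infrared camera (anything with .capture())
        self.gaze_map = gaze_map  # key-value mapping table (S2-S3)
        self.cloud = cloud        # cloud database (anything with .lookup())

    def gaze_region(self):
        # eyeball state determination + recognition: photo -> state -> area
        state = self.camera.capture()
        return self.gaze_map[state]

    def update_screen(self, real_image):
        # screen control: process only the area the wearer is attending to
        region = self.gaze_region()
        return self.cloud.lookup(real_image, region)


class FakeCamera:
    def capture(self):
        return "state-a"


class FakeCloud:
    def lookup(self, real_image, region):
        return {"region": region, "objects": real_image.get(region, [])}


system = ARSystem(FakeCamera(), {"state-a": "upper-left"}, FakeCloud())
result = system.update_screen({"upper-left": ["cup"], "lower-right": ["car"]})
```

The stubs make the division of labor concrete: the camera only produces eyeball states, the mapping table only resolves them to screen areas, and the screen control unit is the only component that talks to the cloud.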
- by establishing a mapping between the instantaneous rotation state of the wearer's eyeball and the display areas on the screen, the augmented reality method and system provided by the present invention can precisely search the display area the wearer is currently focused on.
- the target objects and associated information the user attends to are retrieved with a fast response and displayed on the screen in combination with the real image, reducing the search and display of irrelevant information and enhancing the experience of effective information interaction.
- because not every real image shown on the display must be analyzed and processed, the head-mounted device handles less data, and the device's configuration and physical size can be improved considerably.
- FIG. 1 is a flowchart of an augmented reality method according to an embodiment of the present invention.
- FIG. 2 is a flowchart of a sub-step of step S5 in the augmented reality method according to an embodiment of the present invention.
- FIG. 1 is a flowchart of an augmented reality method according to an embodiment of the present invention. As shown in FIG. 1, the embodiment provides an augmented reality method, including the following steps.
- S1 divides the real image displayed in the display of the head mounted device into a plurality of display areas.
- the head mounted device may be any one of an external headset, an integrated headset, and a mobile headset.
- the real image presented on the display can be shown directly by equipping the head-mounted device with a transparent display that reveals the external environment; alternatively, an external camera on the head-mounted device can photograph the surroundings, and the captured real image is then shown on the screen, achieving instant capture-and-display.
- the display area in this embodiment is divided into four regions (upper-left, upper-right, lower-left, lower-right) according to the real image presented on the screen. It will be understood that the number and form of the display-area divisions in the present invention are not limited to this. However the head rotates, the real images presented through the display in this embodiment are zero-delay.
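For the four-quadrant division used in this embodiment, mapping a screen pixel to its display area takes two comparisons. A minimal sketch; the function name and the coordinate convention (origin at the upper-left corner) are assumptions:

```python
def display_region(x, y, width, height):
    """Map a screen pixel to one of the embodiment's four display areas.
    Assumes the origin (0, 0) is the upper-left corner of the screen."""
    horiz = "left" if x < width / 2 else "right"
    vert = "upper" if y < height / 2 else "lower"
    return f"{vert}-{horiz}"
```

A finer division (e.g. a grid of more cells) would only change the two threshold comparisons into integer divisions by the cell size.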
- S2 pre-tests and records the instantaneous eyeball rotation state as the wearer gazes at any display area on the screen of the head-mounted device and at specific pixel positions within that area.
- because each user's eye shape and visual range differ, to improve the accuracy of the augmented reality this embodiment verifies, before the wearer's formal use, the field of view the eyeballs can see and the instantaneous eyeball rotation state as both eyes gaze at each display area and at specific pixel positions within it. This links the position or state of the wearer's eyeball rotation to the display areas on the screen, accurate to the pixel level, ensuring high accuracy for later recognition.
- S3 forms a mapping relationship between the instantaneous rotation state of the eyeball that is previously tested and recorded, and any display area on the display screen and a specific pixel display position in the area.
- This step establishes a one-to-one correspondence between the instantaneous rotation state of the wearer's eyeball and the display areas on the screen, so that later detection or recognition can respond quickly, reducing latency and improving the user experience.
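The calibration of S2-S3 and the lookup of S4 can be sketched together. The representation of an eyeball rotation state as a (yaw, pitch) pair and the nearest-neighbour matching rule are illustrative assumptions; the patent specifies neither.

```python
def build_gaze_map(calibration_samples):
    """S2-S3: build the key-value mapping table from pre-test recordings.
    Each sample pairs a measured eyeball rotation state (assumed here to be
    a (yaw, pitch) tuple) with the display area and pixel the wearer fixated."""
    return {state: (region, pixel) for state, region, pixel in calibration_samples}


def lookup_gaze(gaze_map, state):
    """S4: resolve a measured state to the nearest recorded one (simple
    nearest-neighbour in (yaw, pitch); one possible matching rule)."""
    key = min(gaze_map, key=lambda s: (s[0] - state[0]) ** 2 + (s[1] - state[1]) ** 2)
    return gaze_map[key]


samples = [
    ((0.0, 0.0), "upper-left", (160, 120)),
    ((1.0, 0.0), "upper-right", (480, 120)),
]
gaze_map = build_gaze_map(samples)
```

Because the table is built once per wearer, the per-frame cost of S4 is a lookup rather than any image analysis, which is the latency reduction this step aims at.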
- S4 determines the gaze area of the gaze on the display screen according to the instantaneous rotation state of the acquired eyeball.
- real-time photographs of the eyeball taken by the infrared camera yield the instantaneous eyeball rotation state; through the mapping formed in step S3, the specific display area the eyes are watching on the screen can be obtained with a fast response.
- S5 collects the real image displayed by the gaze area, and uploads it to the cloud database for processing, and obtains association information of each target object in the real image.
- the gaze area of the wearer on the display screen is the real scene the wearer is currently attending to.
- the device parses and processes only that scene; that is, only the real image presented in the gaze area is searched and recognized.
- the object of the user's real attention thus receives information retrieval and prominent display, meeting the user's needs while reducing the display of invalid information the user does not want.
- compared with parsing and processing every real image visible on the screen, this greatly reduces the device's data-processing load and response delay, and enhances the interactive experience of effective information.
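Processing only the gaze area can be sketched as a crop before upload; with the four-quadrant division, each upload then carries roughly a quarter of the frame data. The frame representation (a row-major list of pixel rows) is an assumption for illustration:

```python
def crop_gaze_region(frame, region):
    """S5: keep only the gaze area before uploading to the cloud.
    'frame' is a row-major list of pixel rows; 'region' is an
    (x0, y0, x1, y1) box in pixel coordinates."""
    x0, y0, x1, y1 = region
    return [row[x0:x1] for row in frame[y0:y1]]


# toy 8x8 frame whose "pixels" record their own coordinates
frame = [[(x, y) for x in range(8)] for y in range(8)]
crop = crop_gaze_region(frame, (0, 0, 4, 4))   # the upper-left quadrant
```

Here the cropped upload holds 16 of the original 64 pixels, i.e. one quarter of the data, which is the reduction in uplink traffic and cloud-side work that the method relies on.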
- the cloud database has image recognition and information search functions; it can identify the target objects of interest to the user in the real image and search for their related information.
- the related information includes product information directly related to the attributes of the target object, such as product specifications, function descriptions, company profile, price information, and contact information.
- step S5 includes
- S51 detects the overall/local features of the acquired real image, and identifies each target object.
- This step detects the overall/local features of the real image, matches them against feature templates in the cloud database, and thereby identifies and locates the target objects.
- step S51 includes
- S511 matches the overall/local features of the real image against the overall/local features of candidate target objects, retains the reasonable matches according to the geometric relative relationships of the feature positions, and identifies each target object from those reasonable matches.
- the method of this step screens out the important objects, reduces feature matching against irrelevant objects in the real image, and so reduces the amount of computation and obtains matching results quickly.
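One way to "retain reasonable matches according to the geometric relative relationships of feature positions" is a pairwise consistency vote: matched point pairs that preserve inter-feature distances support each other, and a match contradicted by most others is dropped. The patent does not specify the rule; the distance-ratio test, threshold, and vote count below are illustrative assumptions:

```python
import itertools
import math


def filter_matches(matches, tol=0.2):
    """S511 sketch: each match is ((x, y) in the real image, (x, y) in the
    template). Under a roughly rigid transform at similar scale, the distance
    between any two matched points in the image should equal the distance
    between their counterparts in the template; matches vote for each other
    when the ratio is near 1, and poorly supported matches are discarded."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    votes = [0] * len(matches)
    for (i, (pi, qi)), (j, (pj, qj)) in itertools.combinations(enumerate(matches), 2):
        d_img, d_tpl = dist(pi, pj), dist(qi, qj)
        if d_img == 0 or d_tpl == 0:
            continue
        if abs(d_img / d_tpl - 1.0) < tol:   # geometrically consistent pair
            votes[i] += 1
            votes[j] += 1
    need = max(1, (len(matches) - 1) // 2)   # require support from ~half the rest
    return [m for m, v in zip(matches, votes) if v >= need]


matches = [((0, 0), (0, 0)), ((10, 0), (10, 0)),
           ((0, 10), (0, 10)), ((50, 50), (5, 5))]
kept = filter_matches(matches)
```

The fourth match implies a 10x scale change relative to the other three and collects no votes, so it is discarded; production systems typically use a robust model fit (e.g. RANSAC over a homography) for the same purpose.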
- S6 determines the location of each target object, and determines the display position of the association information of each target object on the display screen according to the location information of each target object.
- the target objects obtained through cloud-database processing are highlighted on the display screen; the head-mounted device calculates the position of each target object on the screen and, from that position information, determines where each object's associated information is displayed.
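Determining the on-screen display position of the associated information (S6) can be sketched as placing a label box beside the object's bounding box, falling back when the box would run off-screen. The function name, box sizes, and placement rule are illustrative assumptions:

```python
def label_position(bbox, screen_w, screen_h, label_w=120, label_h=40):
    """S6 sketch: choose where to draw a target's associated-information box.
    'bbox' is the target's (x0, y0, x1, y1) bounding box on screen. Prefer
    the space just right of the object; fall back to its left when the label
    would leave the screen."""
    x0, y0, x1, y1 = bbox
    if x1 + label_w <= screen_w:
        return (x1, y0)                      # to the right of the object
    return (max(0, x0 - label_w), y0)        # fall back to the left
```

S7 then draws the information box at the returned coordinates over the live image, so the label tracks the object as its bounding box moves.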
- S7 superimposes, on the display screen, the associated information of each target object onto the corresponding target object in the real image.
- Embodiments of the present invention further provide an augmented reality system, including a head mounted device, further comprising an infrared camera mounted on the head mounted device, an eyeball state determining unit, an eyeball identifying unit, and a screen control unit.
- The infrared camera takes real-time infrared photographs of the eyeball of the wearer of the head-mounted device, obtaining pictures of the instantaneous eyeball rotation state; the eyeball state determining unit analyzes those pictures and determines the gaze area of the eyeball;
- the eyeball recognition unit is configured to determine, from the gaze area of the eyeball and the key-value mapping table, the real image the wearer is attending to at the moment, where the key-value mapping table is the table of mappings, formed in advance by testing and recording, between instantaneous eyeball rotation states and the display areas and specific pixel positions on the screen; the screen control unit is configured to control the display content on the screen according to the real image the wearer is currently attending to.
- the screen control unit includes a network transmission module and an information matching module.
- the network transmission module uploads the real image the wearer is attending to into the cloud database, then downloads the target objects and related information processed by the cloud database back to the head-mounted device;
- the information matching module matches the target objects against the real image on the display screen, determines the position of each target object, determines from that position information where each object's related information is displayed on the screen, and finally superimposes the related information of each target object onto the corresponding target object in the real image on the display.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
一种增强现实方法及系统 Augmented reality method and system
技术领域 Technical field
[0001] 本发明涉及虚拟现实技术领域, 具体而言, 涉及一种增强现实方法及系统。 [0001] The present invention relates to the field of virtual reality technologies, and in particular, to an augmented reality method and system.
背景技术 Background technique
[0002] 增强现实技术是借助计算机图形技术和可视化技术产生现实环境中不存在的虚 拟对象, 并将虚拟对象准确地 "嵌入 "到真实环境中, 借助显示设备将虚拟对象与 真实环境融为一体, 将虚拟的信息应用到真实世界, 从而呈现给用户一个感官 效果真实的新环境, 以实现对现实的增强。 [0002] Augmented reality technology uses computer graphics technology and visualization technology to generate virtual objects that do not exist in the real environment, and accurately "embeds" virtual objects into the real environment, and integrates the virtual objects with the real environment by means of display devices. Applying virtual information to the real world, presenting the user with a new environment with realistic sensory effects to enhance the reality.
[0003] 用于实现增强现实技术的增强现实系统需要通过分析大量的定位数据和场景信 息来保证计算机生成的虚拟物体可以精确地定位在真实场景中。 因此, 在已知 的增强现实技术中, 其具体实现过程可以包括: 获取真实场景信息; 对获取的 真实场景信息和摄像头位置信息进行分析; 生成虚拟物体; 根据摄像头位置信 息在可视平面上绘制虚拟物体, 并将虚拟物体与真实场景信息一起显示。 在上 述技术呈现中, 由于使用图像的整体特征匹配识别对象, 因而相应的处理过程 中计算量较大。 而且, 对摄像头采集到的目标图像的尺寸也有一定的要求, 若 目标远离摄像头, 摄像头采集到的目标图像过小, 则将导致检测到的目标图像 中的特征过少, 而无法达到要求的合理匹配特征个数, 进而导致无法检测出目 标对象, 使得无法完成将虚拟物体叠加到视频中的处理。 [0003] An augmented reality system for implementing augmented reality technology requires that a large amount of positioning data and scene information be analyzed to ensure that a computer-generated virtual object can be accurately positioned in a real scene. Therefore, in the known augmented reality technology, the specific implementation process may include: acquiring real scene information; analyzing the obtained real scene information and camera position information; generating a virtual object; drawing on the visible plane according to the camera position information A virtual object, and the virtual object is displayed along with the real scene information. In the above-described technical presentation, since the overall feature of the image is used to match the recognition object, the amount of calculation in the corresponding processing is large. Moreover, there is a certain requirement on the size of the target image collected by the camera. If the target is far away from the camera, the target image captured by the camera is too small, which will result in too few features in the detected target image, and the reasonable requirements cannot be met. Matching the number of features, which in turn makes it impossible to detect the target object, making it impossible to superimpose the virtual object into the video.
[0004] 另外, 实现增强现实的硬件设备通常是具有屏幕显示的平板, 或类似手持电子 设备。 但针对新兴的头显类产品而言, 低延吋的交互体验与高速图形渲染处理 要求则有别于对一般手持类电子产品, 进而也对设备提出更多的设计要求。 例 如, 佩戴者对外界真实场景中的某一物体 (商品, 建筑物等) 的即兴情绪可能 是短暂的, 如何在短暂的临吋兴趣维持期间在头显中一并将虚拟信息框或虚拟 物体叠加于显示的实景物体位置处是设计需要解决的技术问题。 除此之外, 如 何在繁杂的外界物体信息中, 建立一种虚拟信息展现机制, 仅将佩戴者此刻关 注搜寻的现实中物体给与展现, 对于头戴设备而言, 无需不间断的拍摄前景、 减少数据信息处理量、 有效增强有效信息交互感体验等也是目前亟待解决的问 题。 In addition, hardware devices that implement augmented reality are typically tablets with on-screen displays, or similar handheld electronic devices. However, for emerging head-end products, the low-latency interactive experience and high-speed graphics rendering processing requirements are different from those for general handheld electronic products, and thus more design requirements for devices. For example, the wearer's impulsive mood on an object (commodity, building, etc.) in the real scene of the outside world may be short-lived, how to display the virtual information frame or virtual object in the head display during the short-lived interest maintenance period. Superimposed on the position of the real object displayed is a technical problem that the design needs to solve. In addition, how to create a virtual information display mechanism in the complicated external object information, only to show the real object that the wearer pays attention to at the moment, for the head-mounted device, there is no need for uninterrupted shooting prospects. , Reducing the amount of data information processing and effectively enhancing the effective information interaction experience are also urgent problems to be solved.
技术问题 technical problem
[0005] 为解决上述技术问题, 本发明提供一种增强现实方法及系统, 可减少头戴设备 的计算量, 同吋较好的使用户此刻关注的现实图像物体在头带设备中得到信息 分析, 并将虚拟信息和真实世界信息在显示屏上叠加显示。 [0005] In order to solve the above technical problem, the present invention provides an augmented reality method and system, which can reduce the calculation amount of the head-wearing device, and better obtain the information analysis of the real-time image object that the user pays attention at the moment in the headband device. , and superimpose virtual information and real world information on the display.
问题的解决方案 Problem solution
技术解决方案 Technical solution
[0006] 本发明采用的技术方案是: 提供一种增强现实方法, 包括以下步骤, [0006] The technical solution adopted by the present invention is: providing an augmented reality method, including the following steps,
[0007] S1将头戴式设备显示屏中展现的现实图像划分为若干个显示区域; [0007] S1 divides the real image displayed in the display of the head mounted device into a plurality of display areas;
[0008] S2预先测试并记录头戴式设备佩戴者双眼凝视显示屏上任一显示区域以及该区 域中具体像素显示位置上吋眼球的瞬间转动状态; [0008] S2 pre-tests and records the instantaneous rotation state of the eyeball on any display area on the display screen of the wearable device and the specific pixel display position in the area;
[0009] S3将预先测试并记录的眼球的瞬间转动状态与显示屏上任一显示区域以及该区 域中具体像素显示位置之间构成映射关系; [0009] S3 forms a mapping relationship between the instantaneous rotation state of the eyeball pre-tested and recorded, and any display area on the display screen and a specific pixel display position in the area;
[0010] S4依据获取的眼球的瞬间转动状态判断目光在显示屏上的凝视区域; [0010] S4 determines the gaze area of the gaze on the display screen according to the instantaneous rotation state of the acquired eyeball;
[0011] S5采集该凝视区域展现的现实图像, 并将其上传至云端数据库处理, 得到现实 图像中各目标对象的关联信息; [0011] S5 collects a real image displayed by the gaze area, and uploads it to a cloud database for processing, to obtain association information of each target object in the real image;
[0012] S6确定各目标对象的位置, 依据各目标对象的位置信息, 确定各目标对象的关 联信息在显示屏上的显示位置; [0012] S6 determines the location of each target object, and determines the display position of the associated information of each target object on the display screen according to the location information of each target object;
[0013] S7在显示屏上将各目标对象的关联信息分别与现实图像中的各目标对象叠加显 示。 [0013] S7 superimposes the association information of each target object on each screen with each target object in the real image.
[0014] 本发明所述的增强现实方法, 步骤 S5还包括: [0014] The augmented reality method of the present invention, step S5 further includes:
[0015] S51检测所采集的现实图像的整体 /局部特征, 识别出各目标对象。 [0015] S51 detects the overall/local features of the acquired real image, and identifies each target object.
[0016] 本发明所述的增强现实方法, 步骤 S51还包括: [0016] The augmented reality method of the present invention, step S51 further includes:
[0017] S511将现实图像的整体 /局部特征与可能匹配的各目标对象的整体 /局部特征进 行匹配, 根据整体 /局部特征位置的几何相对特征关系, 保留合理的匹配, 根据 合理的匹配识别出各目标对象。 [0017] S511 matches the global/local features of the real image with the overall/local features of the target objects that may be matched, and retains a reasonable match according to the geometric relative feature relationship of the global/local feature positions, and identifies according to a reasonable match. Each target object.
[0018] 本发明所述的增强现实方法, 步骤 S5中的关联信息包括与目标对象属性直接相 关的产品信息。 [0018] According to the augmented reality method of the present invention, the association information in step S5 includes direct correlation with the target object attribute. Product information.
[0019] 本发明还提供一种增强现实系统, 包括头戴式设备, 还包括: [0019] The present invention also provides an augmented reality system, including a head mounted device, further comprising:
[0020] 红外摄像头, 用于对头戴式设备佩戴者的眼球进行实吋红外拍照, 得到眼球瞬 间转动状态的图片; [0020] an infrared camera for performing a real infrared photograph on the eyeball of the wearer of the head mounted device, and obtaining a picture of the instantaneous rotation state of the eyeball;
[0021] 眼球状态判定单元, 用于分析瞬间眼球转动状态的图片, 判断眼球的凝视区域 [0021] an eyeball state determining unit, configured to analyze a picture of an instantaneous eyeball rotation state, and determine a gaze region of the eyeball
[0022] 眼球识别单元, 用于根据眼球的凝视区域对照键值映射表, 确定佩戴者此刻关 注的现实图像; [0022] an eyeball recognition unit, configured to determine a realistic image that the wearer is paying attention at the moment according to the gaze region of the eyeball against the key value mapping table;
[0023] 屏幕控制单元, 用于根据佩戴者此刻关注的现实图像控制显示屏上的显示内容 [0023] a screen control unit, configured to control display content on the display screen according to a realistic image that the wearer is currently paying attention to
[0024] 本发明所述的增强现实系统, 所述键值映射表为预先测试并记录的眼球的瞬间 转动状态与显示屏上任一显示区域以及该区域中具体像素显示位置之间构成的 映射关系表。 [0024] In the augmented reality system of the present invention, the key value mapping table is a mapping relationship between an instantaneous rotation state of an eyeball that is pre-tested and recorded, and any display area on the display screen and a specific pixel display position in the area. table.
[0025] 本发明所述的增强现实系统, 所述屏幕控制单元还用于将佩戴者此刻关注的现 实图像上传至云端数据库处理, 得到现实图像中各目标对象的关联信息。 According to the augmented reality system of the present invention, the screen control unit is further configured to upload the real image that the wearer pays attention to to the cloud database for processing, and obtain association information of each target object in the real image.
[0026] 本发明所述的增强现实系统, 所述屏幕控制单元还用于确定各目标对象的位置 , 依据各目标对象的位置信息, 确定各目标对象的关联信息在显示屏上的显示 位置。 In the augmented reality system of the present invention, the screen control unit is further configured to determine a location of each target object, and determine a display position of the associated information of each target object on the display screen according to the location information of each target object.
[0027] 本发明所述的增强现实系统, 所述屏幕控制单元还用于在显示屏上将各目标对 象的关联信息分别与现实图像中的各目标对象叠加显示。 [0027] In the augmented reality system of the present invention, the screen control unit is further configured to superimpose and display the associated information of each target object on each of the target objects in the real image on the display screen.
发明的有益效果 Advantageous effects of the invention
有益效果 Beneficial effect
[0028] 与现有技术相比, 本发明提供的增强现实方法及系统通过将佩戴者的眼球瞬间 转动状态和显示屏上的显示区域建立映射关系, 能针对佩戴者此刻关注焦点所 在的显示区域进行精准检索, 快速响应得到用户关注的目标对象及关联信息并 实吋在显示屏上与现实图像结合显示, 可减少无关信息的搜索和展示, 增强有 效信息交互感体验。 同吋, 由于无需对显示屏所展现的所有现实图像进行分析 和处理, 使头戴式设备减少了数据信息处理量, 设备的配置或体型大小方面都 能得到较大的改善。 Compared with the prior art, the augmented reality method and system provided by the present invention can map the display area where the focus is focused on the wearer at present by establishing a mapping relationship between the instantaneous rotation state of the wearer's eyeball and the display area on the display screen. Accurately search, quickly respond to the target object and related information that the user pays attention to and display it on the display screen in combination with the real image, which can reduce the search and display of irrelevant information and enhance the effective information interaction experience. At the same time, since there is no need to analyze and process all the real images displayed on the display, the head-mounted device reduces the amount of data information processing, the configuration of the device or the size of the device. Can get a big improvement.
对附图的简要说明 Brief description of the drawing
附图说明 DRAWINGS
[0029] 下面将结合附图及实施例对本发明作进一步说明, 附图中: [0029] The present invention will be further described below in conjunction with the accompanying drawings and embodiments, in which:
[0030] 图 1为本发明实施例所提供的增强现实方法的流程图; 1 is a flowchart of an augmented reality method according to an embodiment of the present invention;
[0031] 图 2为本发明实施例所提供的增强现实方法中步骤 S5的子步骤流程图。 FIG. 2 is a flowchart of a sub-step of step S5 in the augmented reality method according to an embodiment of the present invention.
本发明的实施方式 Embodiments of the invention
[0032] 为了使本发明的目的、 技术方案及优点更加清楚明白, 以下结合附图及实施例 , 对本发明进行进一步详细说明。 应当理解, 此处所描述的具体实施例仅仅用 以解释本发明, 并不用于限定本发明。 The present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It is understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
[0033] 图 1示出了本发明实施例所提供的增强现实方法的流程图, 如图 1所示, 本实施 例提供一种增强现实方法, 包括以下步骤, [0033] FIG. 1 is a flowchart of an augmented reality method according to an embodiment of the present invention. As shown in FIG. 1, the embodiment provides an augmented reality method, including the following steps.
[0034] S1将头戴式设备显示屏中展现的现实图像划分为若干个显示区域。 [0034] S1 divides the real image displayed in the display of the head mounted device into a plurality of display areas.
[0035] 在本步骤中, 头戴式设备可以是外接式头戴设备、 一体式头戴设备、 移动端头 戴设备中的任何一种。 显示屏中展现的现实图像可以通过在头戴式设备上配备 透明显示屏, 直接显示和展现外部环境的现实图像, 亦可由头戴式设备外部额 外配备摄像头对外界显示环境进行拍摄, 再通过显示屏对拍摄的现实图像进行 显示, 通过此方式可实现即拍即所得。 本实施例中的显示区域根据显示屏展现 的现实图像划分为左上、 右上、 左下、 右下四个区域。 可以理解的是, 本发明 对于显示区域的空间划分数量与划分形态并不局限于此。 无论头部如何转动, 本实施例中透视展现的现实图像均是零延吋的。 [0035] In this step, the head mounted device may be any one of an external headset, an integrated headset, and a mobile headset. The realistic image displayed in the display can be directly displayed and displayed in the external environment by providing a transparent display on the head-mounted device. The external display environment can be additionally photographed by the external device of the head-mounted device, and then displayed through the display. The screen displays the actual image taken, and in this way, instant shooting is achieved. The display area in this embodiment is divided into four areas of upper left, upper right, lower left, and lower right according to the real image displayed by the display screen. It can be understood that the number of spatial divisions and the division form of the display area of the present invention are not limited thereto. Regardless of how the head rotates, the realistic images presented in perspective in this embodiment are all zero-delayed.
[0036] S2: Test and record in advance the instantaneous rotation state of the wearer's eyeballs when both eyes gaze at any display area on the display, and at a specific pixel position within that area.
[0037] Since each user's eye shape and visual range differ, in order to improve the accuracy of the augmented reality, this embodiment precisely verifies, before the wearer formally uses the device, the field of view visible to the wearer's eyes and the instantaneous rotation state of the eyeballs when both eyes gaze at any display area on the display and at a specific pixel position within that area. This establishes a link, accurate to the pixel level, between the rotation position or state of the wearer's eyeballs and the display areas on the display, thereby ensuring high accuracy in later recognition.
[0038] S3: Construct a mapping relationship between the pre-tested and recorded instantaneous rotation states of the eyeballs and each display area on the display, as well as the specific pixel positions within each area.
[0039] This step establishes a one-to-one correspondence between the instantaneous rotation states of the wearer's eyeballs and the display areas on the display, so that the later verification or recognition stage can respond quickly, reducing latency and improving the user experience.
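One way to realize the calibration of steps S2/S3 is to record, during a calibration pass, the measured eye-rotation state for each probed display position and freeze the samples into a key-value table. Representing an eye state as a (yaw, pitch) pair is a simplifying assumption for illustration only:

```python
class GazeCalibration:
    """Key-value map from recorded eye-rotation states to display positions."""

    def __init__(self):
        self.samples = []  # list of ((yaw, pitch), (region, x, y))

    def record(self, eye_state, region, x, y):
        # S2: store one calibration sample linking an eye state to a pixel.
        self.samples.append((eye_state, (region, x, y)))

    def build_table(self):
        # S3: freeze the samples into a lookup table (eye state -> position).
        return {state: pos for state, pos in self.samples}

cal = GazeCalibration()
cal.record((0.10, 0.05), "upper-left", 480, 270)
cal.record((-0.10, 0.05), "upper-right", 1440, 270)
table = cal.build_table()
```

This corresponds to the key-value mapping table that the system embodiment later consults at recognition time.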
[0040] S4: Determine the gaze area on the display according to the acquired instantaneous rotation state of the eyeballs.
[0041] By photographing the eyeballs in real time with an infrared camera, the instantaneous rotation state of the eyeballs is obtained. Through the mapping relationship constructed in step S3, the specific display area the eyes are looking at on the display, that is, the gaze area, can be obtained with a fast response.
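Since a live eye measurement will rarely match a recorded calibration state exactly, step S4 can be sketched as a nearest-neighbour lookup over the calibration table. The Euclidean distance on an assumed (yaw, pitch) representation is an illustrative choice, not a method specified by the patent:

```python
import math

def gaze_region(eye_state, table):
    """Return the display position whose recorded eye state is closest
    to the measured one (step S4)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    nearest = min(table, key=lambda recorded: dist(recorded, eye_state))
    return table[nearest]

table = {
    (0.10, 0.05): ("upper-left", 480, 270),
    (-0.10, 0.05): ("upper-right", 1440, 270),
}
# A measurement near the first calibration sample maps to the upper left.
pos = gaze_region((0.09, 0.06), table)
```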
[0042] S5: Capture the real image presented in the gaze area and upload it to a cloud database for processing, obtaining the associated information of each target object in the real image.
[0043] The wearer's gaze area on the display corresponds to the real scene the wearer is currently paying attention to. In this embodiment, the device parses and processes only that scene; that is, only the real image presented in the gaze area is searched and recognized. On the one hand, this method gives the user's true point of interest information retrieval and highlighted display, meeting the user's needs while reducing the display of unneeded, irrelevant information. On the other hand, compared with the prior art, which must parse and process every real image visible on the display, it greatly reduces the device's data-processing load and response latency, and enhances the interactive experience of useful information. The cloud database has image recognition and information search functions: it can identify the target objects the user is interested in within the real image and search for their associated information. The associated information includes product information directly related to the attributes of the target object, such as product specifications, feature descriptions, company profile, price information, and contact details.
[0044] Preferably, as shown in FIG. 2, step S5 comprises:
[0045] S51: Detect the global/local features of the captured real image and identify each target object.
[0046] This step detects the global/local features of the real image; if a feature-matched graphic exists in the cloud database, it is recognized, thereby locating the target object.
[0047] Preferably, as shown in FIG. 2, step S51 comprises:
[0048] S511: Match the global/local features of the real image against the global/local features of each candidate target object, retain the plausible matches according to the relative geometric relationships among the global/local feature positions, and identify each target object from the plausible matches.
[0049] The method of this step can filter out the important objects and reduce feature matching against irrelevant objects in the real image, thereby reducing the amount of computation and obtaining matching results quickly.
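The geometric filtering of step S511 can be sketched as follows: candidate feature matches are kept only if they agree with the dominant displacement between matched feature pairs, and matches that contradict that geometry are discarded. The pure-translation model and the tolerance below are illustrative assumptions; a production system would more likely estimate a full homography with RANSAC:

```python
from collections import Counter

def filter_matches(matches, tolerance=5):
    """Keep feature matches consistent with the dominant translation.

    matches: list of ((x1, y1), (x2, y2)) pairs, image point -> template point.
    """
    # Vote for the most common (quantized) displacement between matched points.
    offsets = Counter(
        (round((x2 - x1) / tolerance), round((y2 - y1) / tolerance))
        for (x1, y1), (x2, y2) in matches
    )
    best_offset, _ = offsets.most_common(1)[0]
    # Retain only the matches agreeing with that displacement.
    return [
        m for m in matches
        if (round((m[1][0] - m[0][0]) / tolerance),
            round((m[1][1] - m[0][1]) / tolerance)) == best_offset
    ]

good = [((10, 10), (110, 60)), ((40, 20), (140, 70)), ((70, 50), (170, 100))]
outlier = [((5, 5), (300, 400))]
kept = filter_matches(good + outlier)
```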
[0050] S6: Determine the position of each target object and, according to the position information of each target object, determine the display position of its associated information on the display.
[0051] The target objects obtained through cloud-database processing are highlighted on the display. Meanwhile, the device calculates the position of each target object on the screen and, according to that position information, determines the display position of each target object's associated information on the display.
[0052] S7: On the display, superimpose the associated information of each target object on the corresponding target object in the real image.
[0053] The associated information of each target object the wearer is paying attention to is displayed directly on the display, combined with the corresponding target object in the real image at the display position determined in step S6, thereby achieving the fusion and seamless connection of virtual information and real-world information.
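The placement of associated information in steps S6/S7 can be sketched as anchoring an information panel next to the target object's bounding box and clamping it so it stays on screen. The bounding-box format, anchor side, and margin are illustrative assumptions:

```python
def label_position(bbox, label_size, screen=(1920, 1080), margin=8):
    """Place an information panel just right of the target's bounding box (S6),
    clamped to the screen bounds, ready for overlay in S7.

    bbox: (x, y, w, h) of the target object on the display.
    label_size: (w, h) of the rendered associated-information panel.
    """
    x, y, w, h = bbox
    lw, lh = label_size
    # Preferred anchor: to the right of the object, top-aligned.
    lx, ly = x + w + margin, y
    # Clamp inside the screen so the panel is never cut off.
    lx = max(0, min(lx, screen[0] - lw))
    ly = max(0, min(ly, screen[1] - lh))
    return lx, ly

# A target near the right edge forces the panel back on screen.
pos = label_position((1800, 200, 100, 80), (200, 60))
```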
[0054] An embodiment of the present invention further provides an augmented reality system comprising a head-mounted device, as well as an infrared camera mounted on the head-mounted device, an eyeball state determination unit, an eyeball recognition unit, and a screen control unit. The infrared camera takes real-time infrared photographs of the eyeballs of the wearer of the head-mounted device, obtaining pictures of the instantaneous rotation state of the eyeballs. The eyeball state determination unit analyzes these pictures to determine the gaze area of the eyes. The eyeball recognition unit determines, from the gaze area and by consulting a key-value mapping table, the real image the wearer is currently paying attention to; the key-value mapping table is the pre-tested and recorded mapping between the instantaneous rotation states of the eyeballs and each display area on the display, together with the specific pixel positions within each area. The screen control unit controls the content on the display according to the real image the wearer is currently paying attention to. Specifically, the screen control unit comprises a network transmission module and an information matching module. The network transmission module uploads the real image the wearer is currently paying attention to to the cloud database for processing, and then downloads the target objects and their associated information obtained from that processing to the head-mounted device. The information matching module matches the target objects against the real image on the display, determines the position of each target object, determines the display position of each target object's associated information on the display according to that position information, and finally superimposes the associated information of each target object on the corresponding target object in the real image on the display.
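The data flow among the units described above can be sketched as a simple pipeline. All components here are stubs standing in for the real hardware and cloud services, purely to show how the units hand data to one another:

```python
class AugmentedRealitySystem:
    """Sketch of the unit wiring described above (all components are stubs)."""

    def __init__(self, camera, state_unit, recognition_unit, screen_unit):
        self.camera = camera                       # infrared camera
        self.state_unit = state_unit               # eyeball state determination
        self.recognition_unit = recognition_unit   # key-value table lookup
        self.screen_unit = screen_unit             # network + information matching

    def tick(self):
        photo = self.camera()                # real-time infrared photo
        gaze = self.state_unit(photo)        # -> gaze area
        image = self.recognition_unit(gaze)  # -> attended real image
        return self.screen_unit(image)       # -> overlaid display content

system = AugmentedRealitySystem(
    camera=lambda: "ir-frame",
    state_unit=lambda photo: "upper-left",
    recognition_unit=lambda gaze: f"image@{gaze}",
    screen_unit=lambda image: f"overlay({image})",
)
result = system.tick()
```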
The embodiments of the present invention have been described above with reference to the drawings, but the present invention is not limited to the specific embodiments described, which are merely illustrative and not restrictive. Under the teaching of the present invention, those of ordinary skill in the art may devise many other forms without departing from the spirit of the invention and the scope protected by the claims, all of which fall within the protection of the present invention.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610708864.8 | 2016-08-23 | ||
| CN201610708864.8A CN107765842A (en) | 2016-08-23 | 2016-08-23 | A kind of augmented reality method and system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018036113A1 true WO2018036113A1 (en) | 2018-03-01 |
Family
ID=61246375
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/073980 Ceased WO2018036113A1 (en) | 2016-08-23 | 2017-02-17 | Augmented reality method and system |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN107765842A (en) |
| WO (1) | WO2018036113A1 (en) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11393198B1 (en) | 2020-06-02 | 2022-07-19 | State Farm Mutual Automobile Insurance Company | Interactive insurance inventory and claim generation |
| US11436828B1 (en) | 2020-06-02 | 2022-09-06 | State Farm Mutual Automobile Insurance Company | Insurance inventory and claim generation |
| CN116090478A (en) * | 2022-12-15 | 2023-05-09 | 新线科技有限公司 | Translation method based on wireless earphone, earphone storage device and storage medium |
| US11861137B2 (en) | 2020-09-09 | 2024-01-02 | State Farm Mutual Automobile Insurance Company | Vehicular incident reenactment using three-dimensional (3D) representations |
| US12450662B1 (en) | 2020-06-02 | 2025-10-21 | State Farm Mutual Automobile Insurance Company | Insurance claim generation |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108769517B (en) * | 2018-05-29 | 2021-04-16 | 亮风台(上海)信息科技有限公司 | Method and equipment for remote assistance based on augmented reality |
| CN109298780A (en) * | 2018-08-24 | 2019-02-01 | 百度在线网络技术(北京)有限公司 | Information processing method, device, AR equipment and storage medium based on AR |
| CN109697918B (en) * | 2018-12-29 | 2021-04-27 | 深圳市掌网科技股份有限公司 | Percussion instrument experience system based on augmented reality |
| CN109714583B (en) * | 2019-01-22 | 2022-07-19 | 京东方科技集团股份有限公司 | Augmented reality display method and augmented reality display system |
| CN111563432A (en) * | 2020-04-27 | 2020-08-21 | 歌尔科技有限公司 | Display method and augmented reality display device |
| CN112633273A (en) * | 2020-12-18 | 2021-04-09 | 上海影创信息科技有限公司 | User preference processing method and system based on afterglow area |
| CN114972692B (en) * | 2022-05-12 | 2023-04-18 | 北京领为军融科技有限公司 | Target positioning method based on AI identification and mixed reality |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101067716A (en) * | 2007-05-29 | 2007-11-07 | 南京航空航天大学 | Augmented Reality Natural Interactive Helmet with Eye Tracking |
| CN102981616A (en) * | 2012-11-06 | 2013-03-20 | 中兴通讯股份有限公司 | Identification method and identification system and computer capable of enhancing reality objects |
| CN103051942A (en) * | 2011-10-14 | 2013-04-17 | 中国科学院计算技术研究所 | Smart television human-computer interaction method, device and system based on remote controller |
| US20150212576A1 (en) * | 2014-01-28 | 2015-07-30 | Anthony J. Ambrus | Radial selection by vestibulo-ocular reflex fixation |
| CN104823152A (en) * | 2012-12-19 | 2015-08-05 | 高通股份有限公司 | Enabling augmented reality using eye gaze tracking |
| CN105323539A (en) * | 2014-07-17 | 2016-02-10 | 原相科技股份有限公司 | Safety system for vehicle and operation method thereof |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8408706B2 (en) * | 2010-12-13 | 2013-04-02 | Microsoft Corporation | 3D gaze tracker |
| KR20160016907A (en) * | 2013-05-29 | 2016-02-15 | 미쓰비시덴키 가부시키가이샤 | Information display device |
- 2016-08-23: CN CN201610708864.8A patent/CN107765842A/en active Pending
- 2017-02-17: WO PCT/CN2017/073980 patent/WO2018036113A1/en not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| CN107765842A (en) | 2018-03-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018036113A1 (en) | Augmented reality method and system | |
| US10228763B2 (en) | Gaze direction mapping | |
| JP6747504B2 (en) | Information processing apparatus, information processing method, and program | |
| US10685496B2 (en) | Saving augmented realities | |
| US10331209B2 (en) | Gaze direction mapping | |
| CN103777757B (en) | A kind of place virtual objects in augmented reality the system of combination significance detection | |
| US20170213085A1 (en) | See-through smart glasses and see-through method thereof | |
| RU2602386C1 (en) | Method for imaging an object | |
| WO2017204581A1 (en) | Virtual reality system using mixed reality, and implementation method therefor | |
| CN113467619B (en) | Picture display method and device, storage medium and electronic equipment | |
| WO2017020489A1 (en) | Virtual reality display method and system | |
| WO2014128751A1 (en) | Head mount display apparatus, head mount display program, and head mount display method | |
| Rocca et al. | Head pose estimation by perspective-n-point solution based on 2d markerless face tracking | |
| CN115797439A (en) | Flame space positioning system and method based on binocular vision | |
| US11842453B2 (en) | Information processing device, information processing method, and program | |
| US11269405B2 (en) | Gaze direction mapping | |
| CN107105215B (en) | Method and display system for rendering images | |
| JP2019121991A (en) | Moving image manual preparing system | |
| CN109934930B (en) | A method and system for augmented reality based on precise user location | |
| WO2018170678A1 (en) | Head-mounted display device and gesture recognition method therefor | |
| CN115690363A (en) | Virtual object display method and device and head-mounted display device | |
| EP2887231A1 (en) | Saving augmented realities | |
| WO2025176072A1 (en) | Three-dimensional imaging effect testing system and method, device and storage medium | |
| CN120014110A (en) | Image generation method, device and electronic equipment | |
| Ning et al. | Markerless client-server augmented reality system with natural features |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17842539; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17842539; Country of ref document: EP; Kind code of ref document: A1 |