CN111813214A - Method, device, terminal device and storage medium for processing virtual content - Google Patents
Method, device, terminal device and storage medium for processing virtual content
- Publication number: CN111813214A (application number CN201910290641.8A)
- Authority: CN (China)
- Prior art keywords: virtual, content, information, target object, terminal device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Description
Technical Field

The present application relates to the field of display technology, and more particularly, to a method, apparatus, terminal device, and storage medium for processing virtual content.
Background

With the development of science and technology, machine intelligence and information intelligence have become increasingly widespread, and technologies that recognize user images through image acquisition devices such as machine vision or virtual vision to realize human-computer interaction are becoming ever more important. Augmented Reality (AR) uses computer graphics and visualization techniques to construct virtual content that does not exist in the real environment, accurately integrates that virtual content into the real environment through image recognition and positioning, fuses the virtual content with the real environment on a display device, and presents the user with a realistic sensory experience.

The primary technical problem that augmented reality must solve is how to accurately merge virtual content into the real world, that is, how to make the virtual content appear in the correct position of the real scene with the correct pose, thereby producing a strong sense of visual realism. In traditional approaches, when augmented reality or mixed reality is displayed by superimposing virtual content on an image of a real scene, the virtual content is usually simply displayed, and the user can only control its display state through a conventional controller such as a remote control, for example translating, rotating, or scaling the virtual content as a whole; the user cannot flexibly control a local part of the virtual content. The interactivity between the user and the displayed virtual content is therefore poor.
Summary of the Invention

The embodiments of the present application provide a method, apparatus, terminal device, and storage medium for processing virtual content, which can improve the interactivity between users and virtual content.
In a first aspect, an embodiment of the present application provides a method for processing virtual content, applied to a terminal device. The method includes: determining six degrees of freedom (6DoF) information of an interaction device according to a captured marker image, the marker image containing a marker provided on the interaction device; obtaining, based on the 6DoF information, a target region of a target object, the target region being the region selected by the interaction device; obtaining content data corresponding to the target region; generating virtual content according to the 6DoF information and the content data; displaying the virtual content; and receiving control data sent by the interaction device, generating a corresponding content processing instruction according to the control data, and processing the virtual content according to the content processing instruction.
In a second aspect, an embodiment of the present application provides an apparatus for processing virtual content. The apparatus includes an information determination module, a region determination module, a data acquisition module, a content generation module, a content display module, and a content processing module. The information determination module determines six degrees of freedom (6DoF) information of an interaction device according to a captured marker image, the marker image containing a marker provided on the interaction device; the region determination module obtains, based on the 6DoF information, a target region of a target object, the target region being the region selected by the interaction device; the data acquisition module obtains content data corresponding to the target region; the content generation module generates virtual content according to the 6DoF information and the content data; the content display module displays the virtual content; and the content processing module receives control data sent by the interaction device, generates a corresponding content processing instruction according to the control data, and processes the virtual content according to the content processing instruction.
In a third aspect, an embodiment of the present application provides a terminal device, including one or more processors, a memory, and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method for processing virtual content provided in the first aspect.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code, the program code being invocable by a processor to perform the method for processing virtual content provided in the first aspect.
In the method, apparatus, terminal device, and storage medium for processing virtual content provided by the embodiments of the present application, the target region of the target object is determined according to the 6DoF information of the interaction device, the content data of the target region is obtained, the corresponding virtual content is generated and displayed according to the 6DoF information and the content data, and finally a corresponding content processing instruction is generated from the control data sent by the interaction device and the virtual content is processed according to that instruction. The user can therefore select the virtual content to be processed through the 6DoF information of the interaction device and process that content directly through the interaction device, which is convenient and quick and improves the interactivity between the user and the displayed virtual content.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a virtual content processing system provided by an embodiment of the present application.

FIG. 2 is a schematic flowchart of a method for processing virtual content provided by an embodiment of the present application.

FIG. 3 is a schematic flowchart of another method for processing virtual content provided by an embodiment of the present application.

FIG. 4 is a schematic flowchart of superimposing and displaying a virtual object in the method shown in FIG. 3.

FIG. 5 is a schematic flowchart of another way of superimposing and displaying a virtual object in the method shown in FIG. 3.

FIG. 6 is a schematic diagram of one process of displaying a virtual object shown in FIG. 5.

FIG. 7 is a schematic diagram of another process of displaying a virtual object shown in FIG. 5.

FIG. 8 is a schematic flowchart of determining a target region in the method shown in FIG. 3.

FIG. 9 is a schematic diagram of determining a target region in the method shown in FIG. 8.

FIG. 10 is a schematic diagram of another way of determining a target region in the method shown in FIG. 3.

FIG. 11 is a schematic flowchart of determining a target region in the method shown in FIG. 10.

FIG. 12 is a schematic diagram of dividing a target region in the method shown in FIG. 3.

FIG. 13 is a schematic diagram of one content processing procedure in the method shown in FIG. 3.

FIG. 14 is a schematic diagram of another content processing procedure in the method shown in FIG. 3.

FIG. 15 is a schematic diagram of yet another content processing procedure in the method shown in FIG. 3.

FIG. 16 is a schematic diagram of superimposing processed content on a target object in the method shown in FIG. 3.

FIG. 17 is a schematic diagram of a further content processing procedure in the method shown in FIG. 3.

FIG. 18 is a structural block diagram of an apparatus for processing virtual content provided by an embodiment of the present application.

FIG. 19 is a structural block diagram of a terminal device provided by an embodiment of the present application.

FIG. 20 is a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
Detailed Description

To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings.
In recent years, with the development of augmented reality technology, AR-related electronic devices have gradually entered people's daily lives. AR is a technology that augments the user's perception of the real world with information provided by a computer system: it superimposes computer-generated content objects such as virtual objects, scenes, or system prompts onto the real scene, enhancing or modifying the perception of the real-world environment or of data representing it. Traditional AR, however, usually just displays the virtual content in a single fixed manner, and the user's interactivity with the displayed content is poor.

After research, the inventors propose the method, apparatus, terminal device, and storage medium for processing virtual content in the embodiments of the present application, which can improve the interactivity between the user and the displayed virtual content and enhance immersion.
Referring to FIG. 1, a virtual content processing system 10 provided by an embodiment of the present application is shown. The system 10 includes a terminal device 100, an interaction device 200, and a target object 300. The terminal device 100 and the interaction device 200 may be connected through communication methods such as Bluetooth, Wi-Fi (Wireless Fidelity), or ZigBee, or through a wired connection such as a data cable. Of course, the connection method between the terminal device 100 and the interaction device 200 is not limited in the embodiments of the present application.
In the embodiments of the present application, the terminal device 100 may be a head-mounted display device, or a mobile device such as a mobile phone or tablet. When the terminal device 100 is a head-mounted display device, it may be an integrated head-mounted display. The terminal device 100 may also be a smart terminal such as a mobile phone connected to an external head-mounted display; that is, the terminal device 100 may serve as the processing and storage device of the head-mounted display, plugged into or connected to the external head-mounted display to display virtual content.
In the embodiments of the present application, the interaction device 200 is a flat-panel electronic device provided with a marker 210. The specific shape and structure of the interaction device 200 are not limited and may take various forms, such as a square or a circle. There may be one or more markers 210 on the interaction device 200. In one implementation, the marker 210 is provided on a surface of the interaction device 200; in this case, the interaction device 200 is an electronic device bearing the marker 210.

When the interaction device 200 is in use, the marker 210 can be placed within the field of view of the terminal device 100 so that the terminal device 100 can capture an image containing the marker 210, recognize and track the marker 210, and thereby locate and track the interaction device 200. In some embodiments, the interaction device 200 may be held and operated by the user; the marker 210 may be integrated into the interaction device 200, attached to it, or shown on its display screen.
In some embodiments, the marker 210 may include at least one sub-marker having one or more feature points. When the marker 210 is within the field of view of the terminal device 100, the terminal device 100 may treat it as a target marker and capture an image containing it. By recognizing the captured image, the terminal device 100 can obtain spatial information such as the position and orientation of the target marker relative to the terminal device 100, as well as recognition results such as the identity of the target marker, and thereby obtain spatial information such as the position and orientation of the interaction device 200 relative to the terminal device 100, that is, the six degrees of freedom (6DoF) information of the interaction device 200, so as to locate and track the interaction device 200.

The 6DoF information of the interaction device 200 refers to six degrees of freedom in space: the freedom of translation along the three orthogonal coordinate axes X, Y, and Z, and the freedom of rotation about those three axes. In the embodiments of the present application, the 6DoF information of the interaction device 200 includes at least the movement direction, movement distance, rotation direction, and rotation angle of the interaction device 200 relative to the terminal device 100. Further, by obtaining the 6DoF information of the interaction device 200, the interaction device 200 can be located and tracked, its absolute position monitored, and its position in real space accurately followed, so that it can be mapped accurately into the virtual world, ensuring the accuracy of the interaction process.
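By way of illustration only, the 6DoF information described above can be held as three translation components plus three rotation components. The embodiments do not prescribe any concrete encoding; the field names, units, and the Euler-angle choice in this sketch are assumptions:

```python
from dataclasses import dataclass
import math

@dataclass
class Pose6DoF:
    """6DoF information of the interaction device relative to the terminal
    device: translation along the X/Y/Z axes (metres) and rotation about
    the three axes (here encoded as yaw/pitch/roll, in radians)."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0

    def distance(self) -> float:
        """Straight-line distance of the device from the terminal."""
        return math.sqrt(self.x ** 2 + self.y ** 2 + self.z ** 2)

# A device 0.3 m to the side and 0.4 m in front of the terminal, turned 15 degrees:
pose = Pose6DoF(x=0.3, z=0.4, yaw=math.radians(15))
print(round(pose.distance(), 2))  # 0.5
```

Any equivalent encoding (rotation matrix, quaternion) would serve equally well; the point is only that both the translational and the rotational degrees of freedom are tracked.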
Further, the terminal device 100 may display corresponding virtual content based on its relative spatial position relationship with the interaction device 200. It will be understood that the specific marker 210 is not limited in the embodiments of the present application; it only needs to be recognizable and trackable by the terminal device 100.
In the embodiments of the present application, the target object 300 is a third-party object; that is, the target object 300 may be any physical entity in the real world, or any virtual object displayed by the terminal device 100. When a certain region of the target object 300 is selected through the interaction device 200, the terminal device 100 may obtain the content data corresponding to the selected region, generate virtual content according to the 6DoF information of the interaction device 200 and the obtained content data, align the virtual content with the interaction device 200, and superimpose it on the interaction device 200 in an AR manner. In some embodiments, the interaction device 200 is provided with at least one control area on which the user can perform operation actions to process the virtual content, where the control area may include at least one of a button and a touch screen. Based on the operation action detected in the control area, the interaction device 200 may generate a content processing instruction corresponding to that action and send it to the terminal device 100. When the terminal device 100 receives the content processing instruction sent by the interaction device 200, it may process the displayed virtual content accordingly, for example by editing or marking the virtual content.

In some embodiments, the interaction device 200 may also be a mobile terminal with a touch screen, such as a smartphone or tablet, having a touch screen that can display images and accept operations. The marker 210 may be provided on the housing of the mobile terminal, displayed on the touch screen, or inserted into the mobile terminal as an accessory, for example through a USB port or headphone jack, but is not limited thereto.
Based on the above processing system, an embodiment of the present application provides a method for processing virtual content, applied to the terminal device and the interaction device of the above processing system. The specific method is introduced below.

Referring to FIG. 2, FIG. 2 shows a method for processing virtual content provided by an embodiment of the present application, which can be applied to the above terminal device. This method allows the user to process the displayed virtual content through the 6DoF information of the interaction device, thereby improving the interactivity between the user and the displayed virtual content. Specifically, the method may include steps S110 to S160.
Step S110: Determine six degrees of freedom (6DoF) information of the interaction device according to the captured marker image, where the marker image contains a marker provided on the interaction device.

In some embodiments, a marker image containing the marker may be captured by the image acquisition device of the terminal device, where the marker may be integrated into the interaction device, attached to it, or displayed on its screen.

In the embodiments of the present application, an image of the marker on the interaction device may be acquired through a vision device (for example, an image sensor). The terminal device then recognizes the marker in the image and, according to the recognition result, obtains the position and orientation information of the interaction device relative to the terminal device, thereby locating and tracking the spatial position of the interaction device, where the orientation information may include the rotation direction and rotation angle of the interaction device relative to the terminal device. In some embodiments, the recognition result includes at least the position information, rotation direction, and rotation angle of the marker relative to the terminal device, so that the terminal device can obtain the position and orientation information of the interaction device relative to the terminal device according to the set position of the marker on the interaction device, that is, the six degrees of freedom (6DoF) information of the interaction device, thereby locating and tracking the interaction device.
Step S120: Based on the 6DoF information, obtain a target region of the target object, where the target region is the region selected by the interaction device.

In the embodiments of the present application, the terminal device may obtain the relative positional relationship between the interaction device and the target object according to the obtained 6DoF information of the interaction device, where the target object may be a physical object in real space or a virtual object superimposed on real space in an AR manner by the terminal device. It should be noted that the relative positional relationship between the interaction device and the target object is the first relative positional relationship, which may be the relationship between the interaction device and a physical object or between the interaction device and a virtual object.

Specifically, a second relative positional relationship between the target object and the terminal device may be obtained; then, according to the position and orientation information of the interaction device relative to the terminal device, and using the terminal device as the reference, the first relative positional relationship between the interaction device and the target object can be derived, and from it the region selected by the interaction device, that is, the target region of the target object. The second relative positional relationship is the relationship between the target object and the terminal device; correspondingly, it may be the relationship between a virtual object and the terminal device or between a physical object and the terminal device. It should be noted that the first relative positional relationship may include, but is not limited to, the position information, rotation direction, and rotation angle of the interaction device relative to the target object, and the second relative positional relationship may include, but is not limited to, the position information, rotation direction, and rotation angle of the target object relative to the terminal device.
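Deriving the first relative positional relationship from the second, with the terminal device as the common reference, amounts to composing rigid transforms: the pose of the interaction device in the target object's frame is the inverse of the target's terminal-relative pose composed with the interaction device's terminal-relative pose. A minimal pure-Python sketch with illustrative numbers (the embodiments do not fix any representation):

```python
import math

# A rigid transform is a pair (R, t) with p_parent = R @ p_child + t.
def compose(a, b):
    """Compose two rigid transforms (apply b first, then a)."""
    Ra, ta = a
    Rb, tb = b
    R = [[sum(Ra[i][k] * Rb[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    t = [sum(Ra[i][k] * tb[k] for k in range(3)) + ta[i] for i in range(3)]
    return R, t

def invert(a):
    """Invert a rigid transform: (R, t) -> (R^T, -R^T t)."""
    R, t = a
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    ti = [-sum(Rt[i][k] * t[k] for k in range(3)) for i in range(3)]
    return Rt, ti

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

# Terminal-relative poses obtained from tracking (illustrative numbers):
T_interaction = (I3, [1.0, 0.0, 0.0])  # interaction device in the terminal frame
T_target = (I3, [0.0, 2.0, 0.0])       # target object in the terminal frame

# First relative positional relationship: interaction device in the target frame.
T_rel = compose(invert(T_target), T_interaction)
print([round(v, 3) for v in T_rel[1]])  # [1.0, -2.0, 0.0]
```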
Further, the target region may be a partial region of the target object, the region where part of its structure lies, or one or more of its components or structures. Optionally, the target region of the target object may be the partial region of the target object that is occluded by the interaction device, or the region of the target object at which the interaction device points. That is, through the first relative positional relationship between the interaction device and the target object, the region that the interaction device occludes or points to on the target object can be determined.
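The embodiments leave the selection geometry open; one plausible reading of "the region the interaction device points to" is to intersect the device's forward ray (taken from its 6DoF pose) with a plane of the target object. The following sketch is a hypothetical illustration, not the patented method:

```python
def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the point where the pointing ray meets the target's plane,
    or None if the ray is parallel to the plane or points away from it."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:          # ray parallel to the plane
        return None
    diff = [p - o for p, o in zip(plane_point, origin)]
    t = sum(d * n for d, n in zip(diff, plane_normal)) / denom
    if t < 0:                      # plane lies behind the device
        return None
    return [o + t * d for o, d in zip(origin, direction)]

# Device at the origin pointing along +Z; target plane 2 m ahead:
hit = ray_plane_intersection([0, 0, 0], [0, 0, 1], [0, 0, 2], [0, 0, 1])
print(hit)  # [0.0, 0.0, 2.0]
```

The returned hit point would then be mapped to whichever component or sub-region of the target object contains it.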
Step S130: Obtain content data corresponding to the target region.

In the embodiments of the present application, the content data corresponding to the target region refers to the information data of the target object in that region. The information data may be the content of the target object within the target region, such as the shape, appearance, and internal structure of the components there; it may also be data corresponding to that content, such as the parameters of the components within the target region, but is not limited thereto.

Further, the content data corresponding to the target region may be downloaded from a server by the terminal device according to the correspondence between the target region and the target object, obtained from another device according to that correspondence, or retrieved by the terminal device from local storage according to that correspondence.
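The three sources named above (server, other device, local storage) suggest a simple lookup keyed by the target object and the target region. The store layout, identifiers, and fallback order in this sketch are all assumptions for illustration:

```python
# Hypothetical local content store: (target object id, region id) -> content data.
LOCAL_STORE = {
    ("engine", "cylinder_head"): {"model": "cylinder_head.obj", "params": {"bore_mm": 86}},
}

def fetch_from_server(object_id, region_id):
    """Stand-in for a server download; the embodiments equally allow
    fetching from another device."""
    return {"model": f"{object_id}_{region_id}.obj", "params": {}}

def get_content_data(object_id, region_id):
    """Prefer local storage and fall back to the server, mirroring the
    retrieval options listed above."""
    key = (object_id, region_id)
    if key in LOCAL_STORE:
        return LOCAL_STORE[key]
    return fetch_from_server(object_id, region_id)

print(get_content_data("engine", "cylinder_head")["params"]["bore_mm"])  # 86
```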
Step S140: Generate virtual content according to the 6DoF information and the content data.

In the embodiments of the present application, the 6DoF information is the 6DoF information of the interaction device obtained by the terminal device, that is, the position and orientation information of the interaction device relative to the terminal device.

In some embodiments, the virtual content may be content such as the appearance or internal structure of the target object within the target region; that is, the virtual content may be generated based on the content data described above. Further, when virtual content is generated according to the 6DoF information of the interaction device and the obtained content data, the obtained content data may be rendered according to the 6DoF information to generate the virtual content, where the obtained content data comes from the target region determined according to the 6DoF information of the interaction device. Optionally, the virtual content may be a graphical user interface, a two-dimensional graphic, or a three-dimensional model.
Step S150: Display the virtual content.

In the embodiments of the present application, after the terminal device generates the virtual content, the virtual content may be displayed.

In some embodiments, if the interaction device has a display area, the virtual content generated by the terminal device may be displayed directly in that display area. The display area of the interaction device may be the area of a display screen provided on the interaction device; that is, the virtual content is displayed on the interaction device's screen, and the terminal device may send the generated virtual content to the interaction device for display there.

In other embodiments, the interaction device may have a corresponding virtual display region that has a third relative positional relationship with the interaction device (or with the marker of the interaction device). The virtual display region defines the area of real space onto which the virtual content is superimposed, which suits interaction devices without a display screen, or cases where the screen is unsuitable for displaying the current virtual content. After the terminal device obtains the 6DoF information of the interaction device and the third relative positional relationship between the virtual display region and the interaction device, it can compute the spatial position of the virtual display region relative to the terminal device from the two, and generate the virtual content based on that spatial position, achieving the visual effect of superimposing the virtual content onto the virtual display region. The third relative positional relationship can be set according to actual needs; for example, the plane of the virtual display region may be perpendicular to, coincident with, parallel to, or tilted relative to the touch plane of the interaction device, and it may also be adjusted according to the user's viewing habits for ease of use.

In this case, step S150 may include: determining the virtual display region based on the 6DoF information of the interaction device and the preset third relative positional relationship; re-determining the spatial relationship between the virtual display region and the interaction device according to the user's operation on the interaction device, and determining the position information of the virtual display region from that relationship; and then generating the virtual content according to the position information of the virtual display region and superimposing it onto the virtual display region. Through operations on the interaction device, the user can adjust the angle between the plane of the virtual display region and the interaction device, the size of the region, or its orientation relative to the interaction device. It should be noted that the size of the virtual display region may be preset, or adjusted according to the needs of the actual application.
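Computing the spatial position of the virtual display region from the device's 6DoF information and the preset third relative positional relationship can be sketched as applying a device-local offset plus a user-adjustable tilt. The function name, the yaw-only rotation, and the numbers below are simplifying assumptions, not the patented procedure:

```python
import math

def virtual_display_pose(device_pos, device_yaw, offset, tilt_deg):
    """Place the virtual display region at a preset offset from the
    interaction device, rotated by the device's yaw; tilt_deg is the
    user-adjustable angle between the display plane and the device's
    touch plane."""
    c, s = math.cos(device_yaw), math.sin(device_yaw)
    ox, oy, oz = offset
    world_offset = (c * ox - s * oy, s * ox + c * oy, oz)  # rotate offset into world frame
    pos = tuple(p + o for p, o in zip(device_pos, world_offset))
    return {"position": pos, "tilt_deg": tilt_deg}

# Display region 10 cm behind and 20 cm above the device, perpendicular to it:
area = virtual_display_pose((0.0, 0.0, 0.0), 0.0, (0.0, 0.1, 0.2), 90.0)
print(area["position"])  # (0.0, 0.1, 0.2)
```

When the user drags or tilts on the control area, only `offset` and `tilt_deg` change; the region then follows the device automatically as its 6DoF pose updates.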
步骤S160:接收交互装置发送的控制数据,根据控制数据生成对应的内容处理指令,根据内容处理指令对虚拟内容进行处理。Step S160: Receive the control data sent by the interaction device, generate a corresponding content processing instruction according to the control data, and process the virtual content according to the content processing instruction.
在一些实施方式中,交互装置发送的控制数据,是基于用户在交互装置输入的操作动作(如,触控动作、按压动作等等)而生成,该操作动作可以包括触控操作、按压按键操作等操作动作,其中,该触控操作的类型至少包括点击、滑动、多点触控中任一种或多种。终端设备能够根据交互装置发送的控制数据确定对应的内容处理指令,其中,该内容处理指令的种类至少包括缩放、修改、标记、移动、旋转等,但不限于此。In some embodiments, the control data sent by the interactive device is generated based on an operation action (eg, a touch action, a pressing action, etc.) input by the user on the interactive device, and the operational action may include a touch operation, a button pressing operation, etc. and other operation actions, wherein the type of the touch operation includes at least any one or more of click, slide, and multi-touch. The terminal device can determine the corresponding content processing instruction according to the control data sent by the interaction device, wherein the types of the content processing instruction at least include zooming, modifying, marking, moving, rotating, etc., but not limited thereto.
Further, processing the virtual content according to the content processing instruction may mean processing the currently displayed virtual content as a whole, or processing only part of it, such as a particular component or structure. For example, if the received content processing instruction is a zoom or rotate instruction, the terminal device zooms or rotates the currently displayed virtual content or the selected part. If it is a modify instruction, the terminal device modifies the parameters or data of the currently displayed virtual content or the selected part. If it is a mark instruction, the terminal device applies a text mark or a simple mark to the currently displayed virtual content or the selected part. If it is a move instruction, the currently displayed virtual content or the selected part is moved.
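The zoom, rotate, and move cases above are all rigid or similarity transforms of the rendered content, so one common way to implement them is to keep a 4x4 model matrix per virtual object (or per selected sub-part) and compose a new transform onto it per instruction. This is a minimal sketch under that assumption, not the patent's implementation:

```python
import numpy as np

def apply_instruction(model: np.ndarray, op: str, value) -> np.ndarray:
    """Return the updated 4x4 model matrix after one processing instruction."""
    m = np.eye(4)
    if op == "zoom":                  # uniform scale by factor `value`
        m[:3, :3] *= value
    elif op == "rotate":              # rotate `value` radians about the Y axis
        c, s = np.cos(value), np.sin(value)
        m[0, 0], m[0, 2], m[2, 0], m[2, 2] = c, s, -s, c
    elif op == "move":                # translate by the 3-vector `value`
        m[:3, 3] = value
    return m @ model                  # compose with the existing matrix

model = np.eye(4)
model = apply_instruction(model, "zoom", 2.0)
model = apply_instruction(model, "move", [0.1, 0.0, 0.0])
```

Marking and modifying are not spatial transforms and would instead edit the object's parameter data (e.g., annotations or vertex attributes) directly.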
In the method for processing virtual content provided by this embodiment of the present application, the target area of the target object is determined according to the 6DoF information of the interaction device and the content data of the target area is collected; the corresponding virtual content is then generated and displayed according to the 6DoF information and the content data; finally, control data sent by the interaction device is received, a corresponding content processing instruction is generated from the control data, and the virtual content is processed according to that instruction. The user can therefore select the virtual content to be processed via the 6DoF information of the interaction device and process it directly through the interaction device, which is convenient and quick, improves the interactivity between the user and the displayed virtual content, and enhances the sense of immersion.
Another embodiment of the present application provides a method for processing virtual content, applicable to the above-mentioned terminal device. This method enables the user to select the virtual content to be processed via the 6DoF information of the interaction device, process it directly through the interaction device, and at the same time superimpose and display the processed content in the target area, improving the interactivity between the user and the displayed virtual content.
Specifically, referring to FIG. 3, the method for processing virtual content may include steps S210 to S300.
Step S210: acquire the virtual object to be displayed, and display it at a designated display position.
In this embodiment, the target object may be a virtual object displayed by the terminal device. To display the virtual object, the terminal device needs to acquire the parameter data of the virtual object to be displayed. The parameter data may include model data of the virtual object, that is, the data used to render the virtual object and thereby display it. For example, the model data may include the color data, vertex coordinate data, and contour data used to build the virtual object. In addition, the model data of the virtual object to be displayed may be stored on the terminal device, or obtained from another electronic device such as the interaction device or a server.
In this embodiment, displaying the virtual object at a designated display position may mean superimposing the virtual object displayed by the terminal device onto the real space (for example, previewing the virtual object on the terminal device and then superimposing it in a specific area), so that when the superimposed virtual object is viewed through the terminal device, its structure can be seen clearly. The display position may be expressed as the spatial coordinates of the virtual object in the virtual space. Specifically, referring to FIG. 4, the display step includes steps S410 to S430.
Step S410: determine the superimposition position of the virtual object in the real environment.
The superimposition position is the position in the real space at which the virtual object is superimposed when displayed; it may be a designated position in the real space, and the virtual object superimposed there can be viewed through the terminal device. Referring to FIG. 5, in this embodiment determining the superimposition position further includes steps S411 to S413. Steps S411 to S413 are another implementation of step S410 and may optionally replace it.
Step S411: acquire an image containing a display marker.
In this embodiment, the display marker may be placed at any position in the real space. The display marker may include at least one sub-marker, and a sub-marker may be a pattern with a certain shape. In one embodiment, each sub-marker may have one or more feature points, whose shape is not limited and may be a rectangle, a dot, a ring, a triangle, or another shape. In one implementation, the outline of the display marker is a rectangle (other shapes are also possible and are not limited here), and the rectangular region together with the sub-markers inside it constitutes one display marker.
It should be noted that the virtual content processing system here includes at least two kinds of markers: one placed on the interaction device, used to obtain the 6DoF information of the interaction device; and one placed elsewhere, the display marker, used to determine the display area. The two kinds must therefore be distinguished so that each marker can be identified accurately and serve its corresponding function. Specifically, each marker may include at least one sub-marker, and the distribution of sub-markers differs between markers, so each marker can carry distinct identity information. By identifying the sub-markers contained in a marker, the terminal device can obtain the identity information corresponding to the marker, which is information, such as a code, that can uniquely identify the marker, but is not limited thereto. The identity information can indicate whether the marker is one used to obtain the 6DoF information of the interaction device or a display marker used to determine the display area.
As another implementation, both kinds of markers mentioned above may also be objects composed of light spots that emit light themselves. Since light-spot markers can emit light in different wavelength bands or colors, the terminal device can obtain the identity information corresponding to a marker by identifying the band or color of the emitted light, thereby distinguishing the two kinds. The specific shape, style, size, color, number of feature points, and distribution of a marker are not limited in this embodiment; it is only required that the marker can be identified and tracked by the terminal device.
Further, the image containing the display marker may be captured by the image acquisition device of the terminal device.
Step S412: identify the display marker in the image, and obtain the relative spatial positional relationship between the display marker and the terminal device.
In this embodiment, the display marker in the image is identified in order to obtain its spatial position information. From that information, taking the terminal device as the reference, the relative spatial positional relationship between the display marker and the terminal device can be obtained; this relationship may include the position and orientation of the display marker relative to the terminal device.
Step S413: determine the superimposition position based on the relative spatial positional relationship.
In this embodiment, a virtual space coordinate system is established with the terminal device as the origin, and the superimposition position of the virtual object in the real environment is determined based on the relative spatial positional relationship between the display marker and the terminal device; the display position of the virtual object in the virtual space can then be obtained from the superimposition position. The terminal device can obtain the coordinates of the display marker in the virtual space coordinate system, which may be a coordinate system established at the world origin of the virtual space, or one established at the virtual camera (which simulates the position of the human eye in the virtual space).
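If the poses involved are represented as 4x4 homogeneous transforms, step S413 reduces to chaining two of them: the marker pose relative to the terminal device (from step S412) and the preset relative relationship between the superimposition position and the marker. The matrix names below are illustrative assumptions:

```python
import numpy as np

def overlay_in_device_frame(T_device_marker: np.ndarray,
                            T_marker_overlay: np.ndarray) -> np.ndarray:
    # Chain the transforms: the overlay pose expressed in the device-origin
    # virtual space coordinate system.
    return T_device_marker @ T_marker_overlay

# Example: marker 1 m straight ahead of the device, overlay 0.2 m above it.
T_device_marker = np.eye(4); T_device_marker[2, 3] = 1.0
T_marker_overlay = np.eye(4); T_marker_overlay[1, 3] = 0.2
T = overlay_in_device_frame(T_device_marker, T_marker_overlay)
print(T[:3, 3])   # position of the superimposition point: [0.  0.2 1. ]
```

The same composition works whether the reference frame is the world origin or the virtual camera; only the left-hand matrix changes.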
In some implementations, the superimposition position has a certain relative relationship with the display marker, which may include a relative size ratio, a relative positional relationship, and so on. After acquiring the coordinates of the display marker in the virtual space coordinate system, the terminal device can determine the superimposition position in the real environment from the relative spatial positional relationship between the display marker and the terminal device together with the relative relationship between the superimposition position and the display marker. In some implementations, the size of the superimposition position may be adjusted according to the size of the virtual object to be displayed, or the model size of the virtual object may be adjusted to a fixed size of the superimposition position. The fixed size may be a preset value, or may bear a certain proportional relationship to the display marker, for example 2, 3, or 5 times the actual physical size of the display marker in the real space, or equal to it; this is not limited here.
Step S420: receive the first display instruction sent by the interaction device.
In this embodiment, the first display instruction corresponds to an operation the user inputs on the interaction device, which may be a slide, a tap, and the like. The first display instruction may include at least one of: selecting a virtual object, determining the zoom factor at which the virtual object is superimposed, and determining the dynamic effect with which the virtual object is superimposed.
In some implementations, the first display instruction sent by the interaction device may superimpose a designated virtual object at a superimposition position. When there are multiple superimposition positions, the first display instruction includes superimposing the selected virtual object at one designated superimposition position. As a specific implementation, the terminal device may determine the superimposition position corresponding to the direction in which the user slides on the interaction device, and re-render and superimpose the virtual object at that position, so that through the terminal device the user sees the virtual object superimposed there. Referring to FIG. 6, the virtual objects superimposed on the interaction device include virtual object A and virtual object B, and the superimposition positions include superimposition position C and superimposition position D. The user selects virtual object A on the interaction device and slides it off the interaction device in the direction of superimposition position C; this operation corresponds to an instruction to display virtual object A at superimposition position C.
In some implementations, the first display instruction sent by the interaction device may display multiple virtual objects at different superimposition positions. It will be appreciated that the first display instruction is directional: it can display a particular virtual object at a specific superimposition position, so that the user can browse different virtual objects at different positions, avoiding the poor viewing experience caused by displaying multiple virtual objects at the same position. Referring to FIG. 7, the virtual objects superimposed on the interaction device include virtual object A and virtual object B, and the superimposition positions include superimposition position C and superimposition position D. The user first selects virtual object A and slides it off the interaction device to the left, then selects virtual object B and slides it off to the right. These operations correspond to an instruction to superimpose virtual object A at superimposition position C and an instruction to superimpose virtual object B at superimposition position D.
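The slide-direction dispatch in FIG. 6 and FIG. 7 can be sketched by comparing the slide vector on the touch surface against the direction toward each candidate superimposition position and picking the best-aligned one. The 2-D direction encoding is an illustrative assumption:

```python
import math

def pick_overlay(slide_dir, candidates):
    """slide_dir: (dx, dy) of the slide; candidates: name -> (dx, dy) toward
    each superimposition position. Returns the best-aligned position name."""
    def cos_sim(a, b):
        dot = a[0] * b[0] + a[1] * b[1]
        return dot / (math.hypot(*a) * math.hypot(*b))
    return max(candidates, key=lambda k: cos_sim(slide_dir, candidates[k]))

# Position C lies to the left of the device, position D to the right.
positions = {"C": (-1.0, 0.0), "D": (1.0, 0.0)}
print(pick_overlay((-0.9, 0.1), positions))   # C  (slide to the left)
```

A real implementation would also threshold the similarity so that an ambiguous diagonal slide selects nothing.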
Step S430: based on the first display instruction, display the virtual object with a preset display effect, the display position of the virtual object corresponding to the superimposition position.
In this embodiment, the preset superimposed display effect includes the size of the virtual object when superimposed in the display area, that is, the scale at which the virtual object is zoomed. The magnification may be 1, 2, 3, or 4 times the size of the original virtual object, where the size of the original virtual object is its size when previewed on the terminal device.
In this embodiment, after acquiring the parameter data of the virtual object, the terminal device can generate the virtual object according to the parameter data and the superimposition position. As one implementation, the virtual content to be displayed may be constructed from the parameter data, the spatial coordinates of the superimposition position in the virtual space obtained, and the three-dimensional rendering coordinates of the virtual object in the virtual space determined from those spatial coordinates, so that the rendering position of the rendered three-dimensional virtual object in the virtual space coincides with the superimposition position; rendering at that position then superimposes the virtual object at the superimposition position.
In some implementations, since the terminal device has already obtained the superimposition position, a virtual space coordinate system can be established with the terminal device as the reference point and the rendering coordinates of the virtual object in the virtual space obtained, that is, the rendering position of the virtual object. The rendering position includes the rendering coordinates of the virtual object, and rendering at that position displays the virtual object. The rendering coordinates may be the three-dimensional coordinates of the virtual object in the virtual space with the virtual camera of the terminal device as the origin (which may also be regarded as the human eye as the origin).
It will be appreciated that after the terminal device determines the rendering coordinates for rendering the virtual object in the virtual space, it can construct the three-dimensional virtual content from the parameter data corresponding to the acquired virtual content and render the virtual object at those coordinates. The terminal device may obtain, from the parameter data, the RGB value and corresponding coordinates of each vertex of the three-dimensional virtual object to be displayed.
In some implementations, the preset display effect also includes an animation effect while the virtual object moves from the interaction device to the superimposition position. For example, referring to FIG. 7, the animation may be the original virtual object gradually growing to the size at which it is displayed at the superimposition position. Optionally, a transitional animation may also be added during this process, for example the virtual object "flying out" of the interaction device along a predetermined trajectory and landing at the superimposition position.
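The grow-while-moving effect can be sketched as a per-frame interpolation of position and scale from the interaction device to the superimposition position. Linear interpolation and the frame count are illustrative choices; a real renderer would typically use an easing curve or a predetermined trajectory:

```python
def lerp(a, b, t):
    """Linearly interpolate between two equal-length vectors."""
    return [ai + (bi - ai) * t for ai, bi in zip(a, b)]

def animate(start_pos, end_pos, start_scale, end_scale, frames=30):
    """Yield (position, scale) for each frame of the transition."""
    for i in range(frames + 1):
        t = i / frames
        yield lerp(start_pos, end_pos, t), \
              start_scale + (end_scale - start_scale) * t

# Object leaves the device at scale 1 and lands at the overlay at scale 3.
steps = list(animate([0.0, 0.0, 0.0], [0.0, 0.2, 1.0], 1.0, 3.0))
print(steps[-1])   # final frame: ([0.0, 0.2, 1.0], 3.0)
```

The retrieval animation mentioned below is the same interpolation run in reverse.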
In some implementations, corresponding to the process of displaying the virtual object from the interaction device to the superimposition position, there is also a process of retrieving the virtual object from the superimposition position. This may include receiving a retrieval instruction from the interaction device and retrieving the virtual object from the superimposition position, where the retrieval effect may follow the preset display effect and is not repeated here.
Therefore, when the target object is a virtual object displayed by the terminal device, the display steps above can be used to enlarge the target object into a displayed virtual object so that, when the target object is processed, the content data of the target area can be acquired and processed more accurately, broadening the applicability of this method for processing virtual content.
Step S220: determine the six-degree-of-freedom (6DoF) information of the interaction device according to a captured marker image, the marker image containing the marker.
In some implementations, the interaction device may carry multiple markers. As one approach, the position information, rotation direction, and rotation angle of each of the markers relative to the terminal device can be identified, and the position and orientation information of the interaction device relative to the terminal device obtained from them. For example, the markers of the interaction device recognized by the terminal device may include a first marker and a second marker distinct from it; the terminal device can separately compute the relative positional and rotational relationship between each marker and the terminal device to determine the position and orientation of the interaction device relative to the terminal device, making the acquired position and orientation information more accurate.
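One simple way to see why multiple markers improve accuracy: each recognized marker yields its own estimate of the device position, and the estimates can be combined. The sketch below averages position estimates only; robust orientation averaging needs quaternions and is omitted. All values are illustrative:

```python
import numpy as np

def fuse_positions(estimates) -> np.ndarray:
    """Average several position estimates of the interaction device, each
    derived from one recognized marker, to reduce per-marker noise."""
    return np.mean(np.stack(estimates), axis=0)

p_first = np.array([0.10, 0.00, 0.52])    # estimate from the first marker
p_second = np.array([0.12, 0.02, 0.50])   # estimate from the second marker
print(fuse_positions([p_first, p_second]))   # [0.11 0.01 0.51]
```

A weighted mean (weighting markers seen at a less oblique angle more heavily) would be a natural refinement.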
In some implementations, to identify the marker of the interaction device, the terminal device may first capture an image containing the marker with its image acquisition device and then identify the marker in the image. Before the image is captured, the spatial position of the terminal device in the real space, or that of the interaction device, can be adjusted so that the marker of the interaction device falls within the field of view of the terminal device's image acquisition device, enabling the terminal device to capture and recognize the marker. The field of view of the image acquisition device may be determined by the orientation and size of its field-of-view angle.
In some embodiments, the interaction device may carry multiple markers, and the terminal device may treat the markers of the interaction device within its field of view as target markers; the terminal device then captures an image containing the target markers. If all markers of the interaction device are within the field of view of the image acquisition device, the target markers in the captured image may be all of the interaction device's markers; if only some of them are, the target markers may be that subset.
In other implementations, when identifying the marker of the interaction device, the terminal device may first capture an image containing the marker with another sensor device and then identify the marker in that image. Such a sensor device can capture images of the marker and may be, for example, a light sensor (e.g., an infrared receiver that receives infrared light reflected by a marker capable of reflecting it). Likewise, the spatial position of the terminal device or of the interaction device in the real space may be adjusted so that the marker falls within the sensing range of the sensor device, enabling capture and recognition; the sensing range may be determined by the sensor's sensitivity. Further, when multiple markers are placed on the interaction device, the terminal device may treat the markers within the sensing range of the sensor device as target markers, which may be all or some of the interaction device's markers.
In other implementations, the interaction device may further include an inertial measurement sensor comprising an inertial measurement unit (IMU). The IMU may detect the six-degree-of-freedom information of the interaction device, or only its three-degree-of-freedom information. The three-degree-of-freedom information may include the rotational degrees of freedom of the interaction device about the three orthogonal coordinate axes (X, Y, Z) in space, and the six-degree-of-freedom information may include both the translational and rotational degrees of freedom along and about those axes; the translational degrees of freedom constitute the position information of the interaction device, and the rotational degrees of freedom constitute its orientation information. The terminal device can therefore obtain the orientation information, or the position and orientation information, of the interaction device detected by the IMU by receiving the sensing data of the inertial measurement sensor sent by the interaction device, and thereby obtain the relative spatial positional relationship between the interaction device and the terminal device.
Further, to obtain the position and orientation of the interaction device precisely, the terminal device may acquire both an image containing the interaction device and the sensing data of the inertial measurement sensor, obtaining the position and orientation information (i.e., the 6DoF information) of the interaction device from the image recognition data and the IMU detection data together.
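A common way to combine the two sources, sketched here as an assumption rather than the patent's method, is complementary-filter-style blending: take position from the drift-free (but lower-rate) marker recognition, and blend the fast IMU orientation with the vision orientation so that short-term responsiveness comes from the IMU and long-term correctness from vision. The blend factor and single-angle orientation are simplifications:

```python
import numpy as np

def fuse_6dof(pos_vision, yaw_vision, yaw_imu, alpha=0.98):
    """Blend vision and IMU estimates into one 6DoF-style result.
    alpha close to 1 trusts the IMU for fast orientation changes while the
    vision term slowly corrects accumulated gyro drift."""
    yaw = alpha * yaw_imu + (1 - alpha) * yaw_vision
    return np.asarray(pos_vision, dtype=float), yaw

pos, yaw = fuse_6dof([0.10, 0.00, 0.52], yaw_vision=0.00, yaw_imu=0.05)
# blended yaw is ~0.049 rad: mostly the IMU value, nudged toward vision
```

A production tracker would use a full orientation representation (quaternions) and typically a Kalman filter rather than a fixed blend.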
Step S230: based on the 6DoF information, acquire the target area of the target object, the target area being the area selected by the interaction device.
In this embodiment, the target object may include at least one of a virtual object displayed by the terminal device and a physical object in the real space. The displayed virtual object may be a three-dimensional virtual object projected to the human eye through the lenses of the terminal device; the user then sees, through the lenses, both the virtual object and the real-space scene in front of them, that is, a virtual object such as a virtual human body, virtual animal, or virtual house superimposed on the real space. Alternatively, the virtual object may be displayed by the terminal device using a mixed-reality display technology (such as holographic projection), in which case the virtual object superimposed on the real space, such as a virtual human body, virtual animal, or virtual house, can be viewed through the terminal device. A physical object in the real space can be any real physical entity such as a vehicle, a book, a poster, a mobile terminal, a person, or an animal.
In some embodiments, the 6DoF information of the interaction device may be acquired in real time, and the selected target area of the target object determined according to the display position of the target object, thereby realizing selection of the target area according to the 6DoF information of the interaction device. The target area is the area of the target object pointed to by the interaction device when the interaction device faces the target object, and corresponds to the 6DoF information of the interaction device. The target area can be determined at the user's will; that is, the user can determine which area of the target object is selected by changing the position and attitude of the interaction device. For the manner in which the terminal device acquires the 6DoF information of the interaction device, reference may be made to the method described above, which is not repeated here.
It can be understood that the angle between the direction in which the interaction device points and the plane in which the interaction device lies is fixed. When the terminal device obtains the 6DoF information of the interaction device, the spatial position of the interaction device is known; therefore, from the fixed angular relationship between the pointing direction and the interaction device, together with the spatial position of the interaction device, the direction in which the interaction device currently points can be obtained. The selected target area of the target object can then be determined from the display position of the target object and the current pointing direction of the interaction device.
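The derivation above — device attitude plus a fixed angular offset yields the current pointing direction — can be sketched as follows (an illustrative simplification to a single yaw angle; the offset value and names are assumptions, not taken from the patent):

```python
import math

# Sketch: deriving the current pointing direction from the interaction
# device's attitude, given the fixed angle between the pointing direction
# and the device plane. Reduced to 2D (yaw only) for illustration.

def pointing_direction(yaw_deg, fixed_offset_deg=45.0):
    """Return a unit vector for the direction the device currently points,
    as the device's yaw plus the fixed angular offset to its plane."""
    angle = math.radians(yaw_deg + fixed_offset_deg)
    return (math.cos(angle), math.sin(angle))

# Device facing along +X with the assumed 45-degree offset to its plane
dx, dy = pointing_direction(0.0)
```

In full 3D the same idea applies: a fixed local direction vector is rotated by the device's 6DoF orientation to obtain the world-space pointing direction.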
In some embodiments, referring to FIG. 8, step S230 may include steps S231a to S232a.
Step S231a: Generate a virtual path according to the 6DoF information and the first spatial position relationship.
The first spatial position relationship is the spatial position relationship between the virtual path and the interaction device, and the virtual path is used to represent the pointing direction of the interaction device. Therefore, the terminal device can acquire the selected target area of the target object by means of the virtual path of the interaction device. Specifically, the terminal device can generate and display a virtual path according to the first spatial position relationship and the 6DoF information of the interaction device, then determine, from the displayed virtual path and the display position of the target object, the intersection area of the target object that intersects the virtual path, and take the intersection area as the selected target area of the target object.
As one embodiment, the first spatial position relationship may be preset spatial position information used to determine the designated position of the virtual path relative to the interaction device, for example, the position of the interaction device relative to the virtual path. In this case, the direction in which the interaction device points is the guiding direction of the virtual path. In this embodiment of the present application, the virtual path may be a virtual ray emitted from the interaction device: the virtual ray takes a point on the interaction device as its emission point, and its guiding direction toward the target object follows the preset first spatial position relationship. The emission point may be the point at which the virtual ray and the interaction device are displayed superimposed. In some embodiments, the guiding direction of the virtual path may be parallel to the plane of the interaction device, or may form a certain angle with it, such as 45° obliquely upward, which is not limited here. Optionally, the virtual path may also be a virtual curve, whose guiding direction varies with the direction and magnitude of the curve's curvature. In other embodiments, the virtual path may also be a virtual arrow, a virtual indicator mark, or the like; the form of the virtual path is not limited in the embodiments of the present application, as long as the movement of the virtual path corresponds to changes in the 6DoF information of the interaction device.
In some embodiments, the direction or shape of the virtual path can be adjusted according to a touch instruction received by the interaction device. For example, the touch instruction received by the interaction device may be acquired, the first spatial position relationship adjusted according to the touch instruction, and the virtual path generated according to the adjusted first spatial position relationship and the 6DoF information. Specifically, the touch instruction can change the position of the point at which the virtual path is superimposed on the interaction device, and can also change the guiding direction of the virtual path. For example, referring to FIG. 9, the relative angle between the virtual path and the interaction device can be changed according to the user's operation, thereby changing the guiding direction of the virtual path and adjusting the virtual path with guiding direction A into the virtual path with guiding direction B.
In some embodiments, using the 6DoF information of the interaction device, a virtual space coordinate system can be established with the terminal device as the reference point, the spatial position coordinates of the interaction device in the virtual space coordinate system obtained, and the virtual path rendered according to those coordinates. After the terminal device renders the virtual path, the virtual path can be displayed. Through the display lenses of the head-mounted display device, the user sees the virtual path superimposed on the interaction device in the real world, realizing an augmented-reality effect.
The terminal device can obtain, from the display position of the target object and the displayed virtual path, the intersection area of the target object that intersects the virtual path, and take the intersection area as the selected target area of the target object. In some embodiments, obtaining the intersection area may mean obtaining the area of coordinate points of the target object whose coordinates coincide with those of the virtual path; this coordinate-point area may be taken directly as the intersection area, or the area of the corresponding target object may be obtained from the coordinate-point area and taken as the intersection area, that is, as the selected target area of the target object. Obtaining the corresponding target object's area from the coordinate-point area may mean obtaining the area of the target object closest to the coordinate-point area. In this way, the user changes the display position of the virtual path so that it intersects the desired target area, allowing the target area to be selected accurately within the target object.
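The intersection test described above can be sketched as follows (an illustrative coarse ray-sampling approach; the region geometry, names, and sampling method are all assumptions — a real implementation would use an analytic ray-box or ray-mesh intersection):

```python
# Sketch: determining which region of the target object the virtual path
# (here a straight ray) intersects. Regions are modeled as axis-aligned
# boxes; all geometry below is an illustrative assumption.

def ray_hits_box(origin, direction, box_min, box_max, t_max=100.0):
    """Coarsely sample points along the ray and report whether any
    sampled point falls inside the box."""
    steps = 1000
    for i in range(steps):
        t = t_max * i / steps
        p = [origin[k] + t * direction[k] for k in range(3)]
        if all(box_min[k] <= p[k] <= box_max[k] for k in range(3)):
            return True
    return False

def select_region(origin, direction, regions):
    """Return the name of the first region the ray intersects, else None."""
    for name, (bmin, bmax) in regions.items():
        if ray_hits_box(origin, direction, bmin, bmax):
            return name
    return None

# Hypothetical regions of a target object, keyed by region name
regions = {
    "front_wheel": ((4.0, -1.0, -1.0), (5.0, 1.0, 1.0)),
    "headlight":   ((4.0,  2.0, -1.0), (5.0, 4.0, 1.0)),
}
hit = select_region((0, 0, 0), (1, 0, 0), regions)  # ray along +X
```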
Step S232a: Determine the area of the target object pointed to by the virtual path as the target area.
In this embodiment of the present application, the target object is divided into multiple areas. The division may be made by spatial position, for example into upper, lower, left, and right areas, or according to the structure of the target object itself. For example, referring to FIG. 9, when the virtual object is a car 400, the areas into which the car 400 is divided may include the front wheels 430, rear wheels 450, rearview mirrors 470, headlights 490, and so on; this is given only by way of example.
Further, the guiding direction of the virtual path points from the interaction device to the target object and intersects the target object. If the virtual path intersects the surface of the target object at a point, the area in which that point lies is determined to be the target area of the target object.
In other embodiments, referring to FIG. 10, the target area is the area occluded by the interaction device; that is, according to the 6DoF information, the area of the target object occluded by the interaction device is determined as the target area. Referring to FIG. 11, specifically, step S230 may further include steps S231b to S233b. In this case, steps S231a to S232a may be omitted; that is, steps S231b to S233b may serve as an alternative to steps S231a to S232a.
Step S231b: Based on the 6DoF information, obtain the first relative positional relationship between the interaction device and the target object.
In this embodiment of the present application, the spatial coordinates of the interaction device and of the target object may be obtained within the same spatial coordinate system — taking the virtual space coordinate system as an example — and the first relative positional relationship between the interaction device and the target object then determined from those coordinates. The first relative positional relationship may be the relative positional relationship between the interaction device and a physical object, or between the interaction device and a virtual object.
Specifically, by recognizing the marker of the interaction device, the terminal device can obtain the 6DoF information of the interaction device relative to the terminal device. The terminal device can therefore obtain the spatial position coordinates of the interaction device in real space and convert them into spatial coordinates in the virtual space coordinate system. In some embodiments, the spatial coordinates of the interaction device may be its three-dimensional coordinates in a virtual space coordinate system whose origin is a designated point (for example, the virtual camera, which can also be regarded as the point where the human eye is located). Similarly, the terminal device obtains the spatial coordinates of the target object in the virtual space coordinate system from the positional relationship between the target object and the terminal device. It can be understood that the origin of the virtual space coordinate system may also be any point in the virtual space: a spatial coordinate system is established based on that point, and the three-dimensional coordinates of the interaction device and the target object are obtained within that same system. The coordinate system may also be a coordinate system of real space, or the like, which is not limited here.
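The conversion of real-space coordinates into the virtual space coordinate system can be sketched as follows (an illustrative simplification assuming the two frames share axes and differ only by a translation; a full implementation would apply a rotation as well):

```python
# Sketch: expressing the interaction device's real-space position (relative
# to the terminal device) in a virtual space coordinate system whose origin
# is the virtual camera. The pure-translation assumption is illustrative.

def to_virtual_space(point_real, virtual_origin_real):
    """Express a real-space point in virtual coordinates by subtracting the
    real-space position of the virtual coordinate system's origin."""
    return tuple(p - o for p, o in zip(point_real, virtual_origin_real))

# Device at (2, 1, 3) in real space; virtual camera origin at (0, 0, 0.5)
device_virtual = to_virtual_space((2.0, 1.0, 3.0), (0.0, 0.0, 0.5))
```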
In some embodiments, if the target object is a virtual object that the terminal device has already displayed in the virtual space coordinate system, the terminal device can directly obtain the second relative positional relationship between the virtual object and the terminal device in the virtual space coordinate system. The terminal device can obtain the real-space position coordinates of the interaction device from its position and attitude relative to the terminal device, and convert them into coordinates in the virtual space coordinate system. Then, taking the terminal device as a reference, the terminal device can obtain the first relative positional relationship between the interaction device and the virtual object in the virtual space coordinate system from the second relative positional relationship and the interaction device's coordinates in that system.
In some embodiments, if the target object is a physical object in real space, the terminal device needs to acquire the spatial position information of the physical object to obtain the first relative positional relationship between the interaction device and the physical object. Specifically, the terminal device can obtain a recognition result by recognizing the physical object; the recognition result includes at least the shape and size of the physical object and its spatial positional relationship relative to the terminal device, so that the terminal device can obtain the second relative positional relationship between the physical object and the terminal device. The second relative positional relationship may include the position, rotation direction, rotation angle, and so on of the physical object relative to the terminal device.
Optionally, when the terminal device acquires the positional relationship of the physical object relative to the terminal device, an image containing the physical object may be captured by an image acquisition device (such as a stereo camera, for example a depth camera), and the physical object in the image then recognized. In one embodiment, the terminal device captures an image containing the physical object and uploads it to a server; after the server recognizes the physical object in the image, it returns the recognition result to the terminal device.
Alternatively, the terminal device may obtain the positional relationship of the physical object relative to the terminal device by recognizing a marker provided on the physical object (for example, a marker pasted or printed on it). Further, after recognizing the physical object, the terminal device may also obtain its detailed information (such as name, category, color, and pattern); that is, after recognizing the physical object or an image containing it, the terminal device can obtain both the positional relationship of the physical object relative to the terminal device and the object's detailed information.
Since the recognition result obtained by the terminal device includes the positional relationship of the physical object relative to the terminal device, the terminal device can combine the position and attitude of the interaction device relative to the terminal device with the positional relationship of the physical object relative to the terminal device to obtain the first relative positional relationship between the interaction device and the physical object. Specifically, for example, a spatial coordinate system is established with the terminal device as the origin; the spatial coordinates of the interaction device are obtained from its position relative to the terminal device, and the spatial coordinates of the physical object from its position relative to the terminal device, yielding the first relative positional relationship between the interaction device and the physical object in the virtual space coordinate system. The first relative positional relationship may be the relative positional relationship between the interaction device and the physical object in the virtual space coordinate system, or the relative position of the interaction device and the physical object in the real world as seen by the user through the head-mounted display device.
Step S232b: Based on the first relative positional relationship, determine the occlusion area of the target object occluded by the interaction device.
Since step S231b has already placed the interaction device and the target object in the same spatial coordinate system, the spatial coordinates of the points of the interaction device are compared with those of the points of the target object to find points of the target object at which at least two coordinate values coincide with the interaction device's, and the spatial coordinates of such points are placed in an occlusion coordinate set. Further, if an occlusion coordinate set exists, it can be determined that occlusion exists between the interaction device and the target object, and the area corresponding to the occlusion coordinate set is determined as the occlusion area. For example, suppose the spatial coordinate system established with the terminal device as origin is an XYZ coordinate system in which the Z axis represents depth. It is determined whether the occlusion coordinate set contains points whose X and Y values coincide; if so, it can be determined that occlusion exists between the interaction device and the target object, that is, the interaction device and the target object overlap in the X-Y plane. In this case, the direction in which the interaction device points is the direction in which the occlusion area lies.
Further, the specific occlusion relationship between the interaction device and the target object is determined from the depth relationship between them within the occlusion coordinate set — for example, whether the interaction device occludes the target object or the target object occludes the interaction device. For example, taking the Z axis as the depth value, the points in the occlusion coordinate set at which the interaction device and the target object have equal X and Y values are obtained, and the Z values of the coordinates at those points are compared to determine the depth relationship between the occlusion area and the target object. For example, suppose point A (X1, Y1, Z1) of the interaction device and point B (X2, Y2, Z2) of the target object are both in the occlusion coordinate set. If X1 = X2 and Y1 = Y2, points A and B are considered to have an occlusion relationship along the depth direction of the Z axis; by comparing Z1 and Z2, the specific occlusion relationship can then be determined: if Z2 is greater than Z1, the interaction device occludes the target object; if Z2 is less than Z1, the target object occludes the interaction device; and so on.
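The occlusion test described above can be sketched as follows (an illustrative implementation of the X/Y-coincidence and Z-comparison rule; the point sets and names are assumed example data):

```python
# Sketch of the occlusion test: point pairs of the interaction device and
# the target object whose X and Y coordinates coincide form the occlusion
# set, and their Z (depth) values decide which one occludes the other
# (larger Z = farther from the viewer, per the rule stated above).

def occlusion_relation(device_points, object_points):
    """Return 'device_occludes', 'object_occludes', or None, based on the
    first X/Y-coincident point pair found."""
    for ax, ay, az in device_points:
        for bx, by, bz in object_points:
            if ax == bx and ay == by:        # X and Y coincide: occlusion
                if bz > az:
                    return "device_occludes"  # object lies deeper (Z2 > Z1)
                elif bz < az:
                    return "object_occludes"  # device lies deeper (Z2 < Z1)
    return None

# Device point A(1, 2, 3) in front of object point B(1, 2, 5)
rel = occlusion_relation([(1, 2, 3.0)], [(1, 2, 5.0)])
```

A real system would sample many surface points (or use mesh-level depth testing) rather than exact coordinate equality, but the decision rule is the one the paragraph describes.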
Further, the terminal device may also obtain the depth value of the interaction device from the position and attitude of the interaction device relative to the terminal device, where the depth value is the distance in depth between an object and the terminal device — in other words, how near or far the object is from the terminal device. The terminal device may likewise obtain the depth value of the target object from the positional relationship between the target object and the terminal device, and then compare the two depth values to obtain the depth relationship between the interaction device and the target object. The terminal device can thus determine the occlusion relationship between the interaction device and the target object from the depth relationship; in general, a farther object is easily occluded by a nearer one. In some embodiments, when the depth value of the target object is greater than that of the interaction device, the interaction device can be considered to occlude the target object; similarly, if the depth value of the interaction device is greater than that of the target object, the target object can be considered to occlude the interaction device. Of course, the mutual occlusion relationship between objects may also be computed in other ways, for example by cross-checking or depth measurement, which are not limited here.
Step S233b: Determine the occlusion area as the target area.
In this embodiment of the present application, after the terminal device obtains the occlusion area of the target object occluded by the interaction device, it may determine the occlusion area as the target area.
It should be noted that, in some embodiments, step S230 may include both steps S231a to S232a and steps S231b to S233b, so that the user can select a suitable scheme for acquiring the target area of the target object. Providing multiple possible ways of acquiring the target area improves the interactivity between the user and the displayed content.
Step S240: Determine content data corresponding to the target area.
In this embodiment, the content data corresponding to the target area may include three-dimensional structure information of the target object corresponding to the target area, where the three-dimensional structure information includes at least the external structure information and the internal structure information of the target object within the target area.
Specifically, the external structure information may be the appearance information of the target object within the target area, such as lines, shape, color, and size, and the internal structure information may be the internal structure of the target object within the target area, such as filler material or internal framework. For example, if the target object is the car 400 shown in FIG. 12 and its target area is the target area 410 (the dashed-box portion), the external structure information obtained here is the outer contour of the car 400 within the target area 410 (such as the front wheel, headlights, and hood), and the internal structure information is the internal structure of the car 400 within the target area 410, such as the engine, transmission, bumper, and other mechanical components.
Step S250: Generate virtual content according to the 6DoF information and the content data.
In this embodiment of the present application, the virtual content comprises content data (such as three-dimensional structure information) organized into different levels of information. The different levels may be layered according to the relative positional relationship between the content data and the interaction device, that is, according to the correspondence between the acquired content data and the 6DoF information.
Specifically, since the terminal device has already acquired the 6DoF information of the interaction device — that is, its position and attitude relative to the terminal device — a virtual space coordinate system is established with the terminal device as the reference: the terminal device obtains the real-space position coordinates of the interaction device and converts them into coordinates in the virtual space coordinate system. In the virtual space coordinate system, with the virtual camera as the origin, the spatial position of each structural part within the target area (such as the external structure and the internal structure) relative to the virtual camera can be obtained from the positional relationship between those parts and the interaction device, thereby yielding the spatial position coordinates of each part within the target area in the virtual space coordinate system. Finally, from the spatial position coordinates of the interaction device and of each structural part within the target area, the relative positional relationship between the interaction device and each part can be obtained.
Further, the content data is layered according to the relative positional relationship (such as depth information) between each structural part within the target area and the interaction device, where the depth information indicates how near or far each part within the target area is from the interaction device. For example, according to the depth information, the acquired content data of the parts within the target area can be divided into at least two levels of information — external structure information and internal structure information — one level representing the external structure and the other the internal structure. In some embodiments, the external and internal structure information may also be represented by more than two levels of information.
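The depth-based layering described above can be sketched as follows (an illustrative split into two levels by a depth threshold; the part names, depths, and threshold are assumed example data):

```python
# Sketch: layering the content data of the structural parts within the
# target area by their depth relative to the interaction device, e.g. into
# an "external" level (nearer parts) and an "internal" level (farther
# parts). Threshold and data are illustrative assumptions.

def layer_by_depth(parts, threshold):
    """Split (name, depth) pairs into two levels of information."""
    external = [n for n, d in parts if d <= threshold]
    internal = [n for n, d in parts if d > threshold]
    return {"external": external, "internal": internal}

layers = layer_by_depth(
    [("hood", 0.2), ("headlight", 0.3), ("engine", 0.9), ("gearbox", 1.1)],
    threshold=0.5,
)
```

More than two levels would follow the same pattern with several thresholds, or by sorting the parts by depth and binning them.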
Step S260: Display the virtual content.
Further, if the different levels of information of the virtual content are displayed in a three-dimensional model interface, they are mainly presented by disassembling the three-dimensional model, for example splitting it into an outer contour and an inner structure. In other embodiments, if the different levels of information are displayed in a two-dimensional image interface, they are mainly presented by switching between two-dimensional images, for example displaying the outer contour and inner structure of the target object within the target area through different image interfaces.
As one embodiment, if a virtual display area is generated based on the 6DoF information of the interaction device, the virtual content generated by the terminal device is displayed in that virtual display area. Further, the second relative positional relationship between the virtual display area and the interaction device can be adjusted according to the user's viewing habits for ease of use. In this case, step S260 may include: generating the virtual display area based on the 6DoF information of the interaction device, determining the spatial relative positional relationship between the virtual display area and the interaction device according to a user adjustment instruction, presenting the virtual display area, and displaying the virtual content in it.
In this embodiment of the present application, since the virtual content includes information at different levels, the user can gain a clearer understanding of the target object in the target area by browsing the displayed levels. Therefore, in order to present the different levels of information, step S261 may follow the display of the virtual content.
Step S261: receive a second display instruction sent by the interaction device, and display the corresponding level of information according to the second display instruction.
The second display instruction corresponds to the user's operation action and includes, but is not limited to, an instruction to switch to the next level of information, an instruction to switch to the previous level, and an instruction to switch to an arbitrary level. Taking the arbitrary-level instruction as an example, if the current display interface has an area marking the number of the currently displayed level, the user can enter the number of the level to be switched to, thereby sending an instruction that switches the current display interface to an interface containing the requested level of information.
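The level-switching behaviour of the second display instruction could be sketched as follows; the instruction names and the 0-based level numbering are hypothetical choices, not part of the embodiment.

```python
# Hypothetical sketch of the second display instruction: switch to
# the next, previous, or an arbitrary numbered level of information.

def switch_level(current, total, instruction, target=None):
    """current: 0-based index of the displayed level; total: level count.

    instruction: 'next' | 'prev' | 'goto' (with a target index).
    Out-of-range requests keep the current level displayed.
    """
    if instruction == "next":
        return min(current + 1, total - 1)
    if instruction == "prev":
        return max(current - 1, 0)
    if instruction == "goto" and target is not None and 0 <= target < total:
        return target
    return current  # unrecognised or invalid instruction
```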
Step S270: receive an operation instruction sent by the interaction device.
In this embodiment of the present application, the operation instruction sent by the interaction device is based on an operation action input by the user on the interaction device, such as clicking, sliding, or multi-touch. That is, the interaction device can detect different operation action parameters on the interaction device (such as a touch position parameter or a touch count parameter) and send different operation instructions accordingly. The interaction device converts the user's operation action information into an operation instruction and sends the operation instruction to the terminal device. Step S280: determine the corresponding content processing instruction based on the operation instruction.
After receiving the operation instruction sent by the interaction device, the terminal device determines the corresponding content processing instruction through predetermined data analysis and data processing. The types of content processing instruction include at least scaling, modifying, marking, moving, and rotating.
Further, in some embodiments, the same operation instruction may correspond to different content processing instructions depending on the virtual content. After receiving the operation instruction sent by the interaction device, the terminal device generates a content processing instruction corresponding to it according to the currently displayed virtual content and the operation instruction. For example, for the same select-and-slide operation instruction, when the selected object is a car, the content processing instruction is to drag the car; when the selected object is a car lamp, the content processing instruction is to adjust the brightness of the lamp.
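The context-dependent mapping from one operation instruction to different content processing instructions can be sketched as a lookup keyed on the selected object, following the car/lamp example above; the string identifiers are illustrative assumptions.

```python
# Hypothetical dispatch: the same "select and slide" operation maps to
# different content processing instructions depending on the currently
# selected virtual object.

DISPATCH = {
    ("car", "select_and_slide"): "drag",
    ("lamp", "select_and_slide"): "adjust_brightness",
}

def to_processing_instruction(selected_object, operation):
    # Fall back to "ignore" for combinations with no defined mapping.
    return DISPATCH.get((selected_object, operation), "ignore")
```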
Step S290: process the virtual content according to the content processing instruction, and generate the corresponding processed content.
Referring to FIG. 13, in some embodiments, when the terminal device is displaying virtual content and the operation instruction input by the user on the interaction device is a single-finger press-and-drag on the virtual image, a content processing instruction for moving the virtual content is generated; for example, the instruction controls the terminal device to move the currently displayed car to the left or right.
Referring to FIG. 14, in some embodiments, when the terminal device is displaying virtual content and the operation instruction input by the user on the interaction device is a two-finger pinch (the fingers moving closer together), a content processing instruction for shrinking the currently displayed virtual content is generated; this instruction controls the terminal device to shrink the currently displayed car relative to the user's viewing angle. If the operation instruction input by the user is a two-finger spread (the fingers moving apart), a content processing instruction for enlarging the currently displayed virtual content is generated; this instruction controls the terminal device to enlarge the currently displayed car relative to the user's viewing angle.
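The pinch and spread gestures above reduce to comparing the two-finger distance before and after the gesture. A sketch follows; the coordinates are illustrative, and a real implementation would track the fingers continuously rather than only at two instants.

```python
# Hypothetical sketch of pinch-to-zoom: the ratio of the final to the
# initial two-finger distance gives the scale factor applied to the
# displayed virtual content.

import math

def pinch_scale(p1_start, p2_start, p1_end, p2_end):
    """Each argument is a 2D touch point (x, y) on the interaction device."""
    d0 = math.dist(p1_start, p2_start)  # initial finger separation
    d1 = math.dist(p1_end, p2_end)      # final finger separation
    return d1 / d0  # < 1 shrinks the content, > 1 enlarges it

# Fingers move from 4 units apart to 2 units apart: content shrinks.
scale = pinch_scale((0, 0), (4, 0), (1, 0), (3, 0))
```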
Referring to FIG. 15, in some embodiments, when the terminal device is displaying virtual content and the operation instruction input by the user on the interaction device is a double click or multiple clicks on a particular piece of virtual content, a content processing instruction for modifying or marking that virtual content is generated; for example, the instruction controls the terminal device to generate a data box or text box around the selected virtual content, and then to further detect the user's operations so as to modify parameters in the data box or enter text in the text box. The data in the data box may be parameters related to the virtual content, such as production date, usage time, and manufacturer.
Step S300: obtain a second spatial positional relationship of the target area relative to the terminal device, and superimpose the processed content onto the target area according to the second spatial positional relationship.
If the target object is a virtual object, the second spatial positional relationship can be obtained directly from the virtual space coordinate system, and the processed content of the virtual content of the target area is superimposed onto the target area according to the second spatial positional relationship.
If the target object is a physical object, the second spatial positional relationship can be obtained by the terminal device recognizing the physical object. Optionally, the second relative positional relationship between the terminal device and the physical object can be acquired, and the second spatial positional relationship of the target area relative to the terminal device obtained from the position of the target area on the physical object. Further, the processed content of the virtual content of the target area is superimposed onto the target area of the physical object according to the second spatial positional relationship.
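Obtaining the second spatial positional relationship for a physical object amounts to composing the terminal-to-object relationship with the target area's position on the object. A translation-only sketch follows; omitting rotation is a simplifying assumption, not a property of the embodiment.

```python
# Hypothetical sketch: second spatial positional relationship of the
# target area to the terminal device, obtained by composing the
# object's position in terminal coordinates with the area's position
# on the object.  Rotation is omitted for brevity.

def target_area_position(object_in_terminal, area_in_object):
    """Both arguments are (x, y, z) translations.

    Returns the target area's position in terminal coordinates, where
    the processed content is then superimposed.
    """
    return tuple(o + a for o, a in zip(object_in_terminal, area_in_object))

pos = target_area_position((2.0, 0.0, 5.0), (0.1, 0.2, 0.0))
```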
In this embodiment of the present application, taking a content processing instruction for marking virtual content as an example, whether the target object is a virtual object or a physical object, the user can observe the content that marks the virtual content through the terminal device. Correspondingly, when the virtual content is processed according to instructions such as modifying, moving, or rotating, and the processed content is superimposed onto the target area according to the second spatial positional relationship, the modification, movement, rotation, and other changes of the virtual content in the target area can be observed through the terminal device. For example, referring to FIG. 14 and FIG. 16 together, when the content processing instruction is to shrink the currently displayed virtual content, the virtual content displayed on the interaction device is shrunk (as shown in FIG. 14) and, at the same time, the target object in the target area shrinks accordingly (as shown in FIG. 16).
In addition, since in the above steps the virtual content may include a user interface (UI) and the content processing instruction may be an instruction to modify UI information, the terminal device can display related data of the physical object (such as UI data) in augmented reality to enable interaction with real objects. Through the above control of the virtual content, the terminal device can further operate on the virtual content corresponding to the physical object, so as to realize further interaction with the real object.
In some embodiments, when the above physical object is a smart home device, the terminal device can also set the state of the smart home device through the interaction device, so that the user can interact with the smart home device via the interaction device. Specifically, with the smart home device as the target object, the terminal device can obtain the virtual content corresponding to the target area of the smart home device, generate an execution instruction according to the content processing performed on that virtual content, and transmit the execution instruction to the smart home device; the execution instruction instructs the smart home device to perform a setting operation.
In some embodiments, the terminal device can recognize the smart home device so as to display its virtual interactive interface (virtual UI) in the virtual space; the user can then select among the different pieces of virtual content in the virtual interactive interface through operation actions input on the interaction device, so as to set the state of the smart home device or control it. The virtual interactive interface can be superimposed on the interaction device or on the smart home device. In some embodiments, when the terminal device displays the virtual interactive interface of the smart home device, the interface can be operated through the interaction device, the virtual content being a part of the displayed virtual interactive interface. For example, referring to FIG. 17, the virtual content 900 displayed by the terminal device is the virtual control interface of a smart desk lamp, the target area 910 is the brightness option, and the virtual content corresponding to the target area 910 is a specific brightness value.
In some embodiments, the terminal device can adjust the state of the smart home device according to the processed content. Specifically, the terminal device can generate a corresponding execution instruction according to the processed content corresponding to the target area (for example, a modification to the UI information); the execution instruction is used to adjust the state of the smart home device to the state corresponding to the virtual content. For example, when the processed content is a brightness of 50, the terminal device can generate an execution instruction that sets the brightness of the smart desk lamp to 50. In some embodiments, the processed content can correspond to an execution instruction; that is, when the terminal device obtains the processed content from the interaction device, it can generate the corresponding execution instruction according to the correspondence between processed content and execution instructions. This correspondence can be stored in the terminal device or obtained from a server.
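The correspondence from processed content to an execution instruction could look like the following sketch; the instruction schema, the device identifier, and the field names are hypothetical, since the embodiment does not fix a wire format.

```python
# Hypothetical sketch: turn processed UI content (e.g. a brightness
# value the user modified) into an execution instruction for the
# smart home device, as in the desk-lamp example above.

def build_execution_instruction(device_id, processed_content):
    """processed_content: dict of UI fields changed by the user.

    Returns an instruction telling the device to adopt the state
    corresponding to the processed content.
    """
    return {
        "device": device_id,
        "action": "set_state",
        "params": processed_content,
    }

instr = build_execution_instruction("desk_lamp_01", {"brightness": 50})
```

The terminal device would then transmit `instr` to the smart home device, which applies the requested state.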
After generating the above execution instruction, the terminal device can transmit it to the smart home device, where the execution instruction instructs the smart home device to perform a setting operation. When the smart home device receives the execution instruction, it can perform the setting operation accordingly, adjusting its current state to the state set by the user, that is, the state corresponding to the processed content. This realizes interaction between the interaction device and the smart home device and improves the level of that interaction.
In the method for processing virtual content provided by the embodiments of the present application, the target area of the target object is determined according to the 6DoF information of the interaction device, the content data of the target area is collected, the corresponding virtual content is generated and displayed according to the 6DoF information and the content data, the virtual content is processed according to the received content processing instruction, and finally the second spatial positional relationship of the target area relative to the terminal device is obtained and the processed content is superimposed onto the target area according to it. The user can thus select the virtual content to be processed through the 6DoF information of the interaction device and process that content directly through the interaction device, while the processed content is superimposed onto the target area, which improves the interactivity between the user and the displayed virtual content and enhances immersion.
Referring to FIG. 18, which shows a structural block diagram of an apparatus 500 for processing virtual content provided by an embodiment of the present application, applied to a terminal device and used to perform the above method for processing virtual content. The apparatus 500 may include an information determination module 510, an area determination module 520, a data acquisition module 530, a content generation module 540, a content display module 550, and a content processing module 560. It can be understood that the above modules may be program modules running in a computer-readable storage medium; their functions are as follows. The information determination module 510 is configured to determine the six-degree-of-freedom (6DoF) information of the interaction device according to a collected marker image, the marker image containing a marker provided on the interaction device. The area determination module 520 is configured to obtain, based on the 6DoF information, the target area of the target object, the target area being the area selected by the interaction device. The data acquisition module 530 is configured to obtain the content data corresponding to the target area. The content generation module 540 is configured to generate virtual content according to the 6DoF information and the content data. The content display module 550 is configured to display the virtual content. The content processing module 560 is configured to receive control data sent by the interaction device, generate a corresponding content processing instruction according to the control data, and process the virtual content according to the content processing instruction.
In some embodiments, the area determination module 520 further includes a virtual path unit 521 and an occlusion judgment unit 523. The virtual path unit 521 is configured to generate a virtual path based on the 6DoF information, the virtual path pointing from the interaction device to the target object, and to determine the area of the target object at which the virtual path points as the target area. The occlusion judgment unit 523 is configured to obtain, based on the 6DoF information, a first relative positional relationship between the interaction device and the target object, determine, based on the first relative positional relationship, the occluded area of the target object that is blocked by the interaction device, and determine the occluded area as the target area.
In some embodiments, the content processing module 560 further includes an operation instruction receiving unit 561, a content processing instruction determination unit 563, a processed content generation unit 565, and a processed content superimposition unit 567. The operation instruction receiving unit 561 is configured to receive the operation instruction sent by the interaction device. The content processing instruction determination unit 563 is configured to determine the corresponding content processing instruction based on the operation instruction. The processed content generation unit 565 is configured to process the virtual content according to the content processing instruction and generate the corresponding processed content. The processed content superimposition unit 567 is configured to obtain the second spatial positional relationship of the target area relative to the terminal device and superimpose the processed content onto the target area according to it.
In other embodiments, if the target object is a virtual object, the processing apparatus 500 further includes an object display module 570 configured to display the virtual object in a display area. Specifically, the object display module 570 further includes a display area determination unit 571, a first display instruction receiving unit 573, and an instruction execution unit 575. The display area determination unit 571 is configured to determine the display area, which may include: acquiring an image containing a display marker; recognizing the display marker in the image and obtaining the relative spatial positional relationship between the display marker and the terminal device; and determining the display area based on that relative spatial positional relationship. The first display instruction receiving unit 573 is configured to receive a first display instruction sent by the interaction device. The instruction execution unit 575 is configured to display the virtual object in the display area according to a preset display effect, based on the first display instruction.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the apparatus and modules described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, the coupling, direct coupling, or communication connection between the modules shown or discussed may be implemented through interfaces, and the indirect coupling or communication connection between apparatuses or modules may be electrical, mechanical, or in other forms.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented either in hardware or as a software functional module.
To sum up, the method and apparatus for processing virtual content provided by the embodiments of the present application are applied to a terminal device. The target area of the target object is determined according to the 6DoF information of the interaction device, the content data of the target area is collected, the corresponding virtual content is generated and displayed according to the 6DoF information and the content data, the virtual content is processed according to the received content processing instruction, and finally the second spatial positional relationship of the target area relative to the terminal device is obtained and the processed content is superimposed onto the target area according to it. The user can thus process the displayed virtual content through the 6DoF information of the interaction device while the processed content is superimposed onto the target area, which improves the interactivity between the user and the displayed virtual content and enhances immersion.
Referring to FIG. 19, which shows a structural block diagram of a terminal device provided by an embodiment of the present application. The terminal device 100 may be a smartphone, tablet computer, head-mounted display, or other terminal device capable of running application programs. The terminal device 100 in this application may include one or more of the following components: a processor 110, a memory 120, an image acquisition apparatus 130, and one or more application programs, where the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs being configured to perform the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects the various parts of the entire terminal device 100 through various interfaces and lines, and performs the various functions of the terminal device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and invoking data stored in the memory 120. Optionally, the processor 110 may be implemented in at least one of the following hardware forms: digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 110 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 110 but instead be implemented by a separate communication chip.
The memory 120 may include random access memory (RAM) or read-only memory (ROM). The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or an image playback function), and instructions for implementing the following method embodiments. The data storage area may also store data created by the terminal device 100 during use.
In this embodiment of the present application, the image acquisition apparatus 130 is used to collect images of the target object and scene images of the target scene. The image acquisition apparatus 130 may be an infrared camera or a color camera; the specific camera type is not limited in this embodiment of the present application.
Referring to FIG. 20, which shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application. The computer-readable storage medium 800 stores program code that can be invoked by a processor to perform the methods described in the above method embodiments. The computer-readable storage medium 800 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. Optionally, the computer-readable storage medium 800 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 800 has storage space for program code 810 that performs any of the method steps in the above methods. The program code can be read from, or written into, one or more computer program products. The program code 810 may, for example, be compressed in an appropriate form.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some of the technical features therein, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (11)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910290641.8A CN111813214B (en) | 2019-04-11 | 2019-04-11 | Virtual content processing method, device, terminal device and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111813214A true CN111813214A (en) | 2020-10-23 |
| CN111813214B CN111813214B (en) | 2023-05-16 |
Family
ID=72843815
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910290641.8A Active CN111813214B (en) | 2019-04-11 | 2019-04-11 | Virtual content processing method, device, terminal device and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111813214B (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107646098A (en) * | 2015-07-07 | 2018-01-30 | 谷歌有限责任公司 | System for tracking portable equipment in virtual reality |
| US20180108147A1 (en) * | 2016-10-17 | 2018-04-19 | Samsung Electronics Co., Ltd. | Method and device for displaying virtual object |
| CN108074262A (en) * | 2016-11-15 | 2018-05-25 | 卡尔蔡司工业测量技术有限公司 | For determining the method and system of the six-degree-of-freedom posture of object in space |
| CN109491508A (en) * | 2018-11-27 | 2019-03-19 | 北京七鑫易维信息技术有限公司 | The method and apparatus that object is watched in a kind of determination attentively |
| CN109508093A (en) * | 2018-11-13 | 2019-03-22 | 宁波视睿迪光电有限公司 | A kind of virtual reality exchange method and device |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116034397A (en) * | 2020-12-21 | 2023-04-28 | 京东方科技集团股份有限公司 | Mixed reality display method, mixed reality device and storage medium |
| CN116034397B (en) * | 2020-12-21 | 2025-08-19 | 京东方科技集团股份有限公司 | Mixed reality display method, mixed reality device and storage medium |
| CN113204282A (en) * | 2021-04-12 | 2021-08-03 | 领悦数字信息技术有限公司 | Interactive apparatus, interactive method, computer-readable storage medium, and computer program product |
| CN113204282B (en) * | 2021-04-12 | 2024-04-05 | 领悦数字信息技术有限公司 | Interactive device, interactive method, computer readable storage medium and computer program product |
| CN113384901A (en) * | 2021-08-16 | 2021-09-14 | 北京蔚领时代科技有限公司 | Interactive program instance processing method and device, computer equipment and storage medium |
| CN113384901B (en) * | 2021-08-16 | 2022-01-18 | 北京蔚领时代科技有限公司 | Interactive program instance processing method and device, computer equipment and storage medium |
| WO2023174097A1 (en) * | 2022-03-15 | 2023-09-21 | 北京字跳网络技术有限公司 | Interaction method and apparatus, device and computer-readable storage medium |
| CN115494947A (en) * | 2022-09-29 | 2022-12-20 | 歌尔科技有限公司 | Interaction method, device, near-eye display device and readable storage medium |
| WO2024125021A1 (en) * | 2022-12-12 | 2024-06-20 | 华为技术有限公司 | Display device and related device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |