CN116124119A - A positioning method, positioning device and system - Google Patents
- Publication number
- CN116124119A (application number CN202111339331.4A)
- Authority
- CN
- China
- Prior art keywords
- light intensity
- virtual reality
- positioning device
- pose
- coordinate system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Automation & Control Theory (AREA)
- Human Computer Interaction (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
Description
Technical Field

The embodiments of this application relate to the field of electronic technologies, and in particular to a positioning method, a positioning device, and a system.

Background

Virtual reality (Virtual Reality, VR) technology is a computer simulation technology for creating and experiencing virtual scenes: a computer generates a virtual scene and immerses the user in it.

Virtual reality technology can take data from real life, turn it into electronic signals through computer technology, and combine those signals with various virtual reality devices (for example, head-mounted display devices and handheld controllers) to present a virtual scene with which the user can exchange information through the virtual reality device. To realize this interaction between the user and the virtual scene, the virtual reality device must recognize its own position and attitude in the real environment, so as to determine the virtual reality content corresponding to that position and attitude.

At present, because of the comfort and weight requirements of head-mounted display devices, they cannot carry many positioning components, so virtual reality devices are usually located by laser scanners or cameras fixed in the room. This approach greatly limits the user's range of movement while wearing the virtual reality device and stands in the way of improving the user experience.

How to enlarge the user's range of movement in the virtual scene and enhance the immersive experience is a current and future research direction.
Summary of the Invention

This application provides a positioning method, a positioning device, and a system. The positioning device can track a virtual reality device and determine the indoor pose of the virtual reality device from its own indoor pose and the relative pose between itself and the virtual reality device. The method can enlarge the user's range of movement in the virtual scene and enhance the immersive experience.
In a first aspect, an embodiment of this application provides a positioning method applied to a positioning device that includes a projector. The method includes:

the positioning device tracks the virtual reality device;

the positioning device projects coded images through the projector;

the positioning device receives multiple light intensity values from the virtual reality device, the multiple light intensity values being the intensities of the coded images received respectively by multiple light intensity sensors of the virtual reality device;

the positioning device determines the pose of the virtual reality device relative to the positioning device from the projected coded images, the multiple light intensity values, and the positions of the multiple light intensity sensors on the virtual reality device; the pose of the virtual reality device relative to the positioning device and the pose of the positioning device in the environment coordinate system are used to determine the pose of the virtual reality device in the environment coordinate system.

By implementing this embodiment, the positioning device can track the virtual reality device and determine the indoor pose of the virtual reality device from its own indoor pose and the relative pose between the two devices. The positioning device can follow the virtual reality device as it moves and determine its pose during the movement, which enlarges the user's range of movement and improves the user experience.
With reference to the first aspect, in a possible implementation, the method further includes:

the positioning device determines the pose of the virtual reality device in the environment coordinate system from the pose of the positioning device in the environment coordinate system and the pose of the virtual reality device relative to the positioning device;

the positioning device sends the pose of the virtual reality device in the environment coordinate system to the virtual reality device.
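This pose chaining can be sketched with 4x4 homogeneous transforms (the matrix representation, the variable names, and the numeric values below are illustrative assumptions, not taken from the application):

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Pose of the positioning device in the environment coordinate system.
T_env_dev = make_pose(np.eye(3), np.array([1.0, 2.0, 0.0]))
# Pose of the virtual reality device relative to the positioning device.
T_dev_vr = make_pose(np.eye(3), np.array([0.0, 0.5, 1.5]))
# Chaining the two yields the pose of the VR device in the environment frame.
T_env_vr = T_env_dev @ T_dev_vr
```

With identity rotations the composed translation is simply the sum of the two translations, (1.0, 2.5, 1.5); with real rotations the relative translation is first rotated into the environment frame by the matrix product.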
With reference to the first aspect, in a possible implementation, the coded images comprise multiple coded images, and determining the pose of the virtual reality device relative to the positioning device from the projected coded images, the multiple light intensity values, and the positions of the multiple light intensity sensors on the virtual reality device includes:

the positioning device generates a code for a light intensity sensor from the light intensity value that the sensor received for each coded image;

the positioning device determines the pixel coordinates of the multiple light intensity sensors from their codes;

the positioning device determines the pose of the virtual reality device relative to the projector from the pixel coordinates of the multiple light intensity sensors and the positions of those sensors on the virtual reality device;

the positioning device converts the pose of the virtual reality device relative to the projector into the pose of the virtual reality device relative to the positioning device, based on the pose of the projector relative to the positioning device.
With reference to the first aspect, in a possible implementation, the multiple coded images comprise M first images and N second images, where M and N are positive integers, each first image is a binary image whose pattern is stripes in a first direction, and each second image is a binary image whose pattern is stripes in a second direction. Generating a code for a light intensity sensor from the light intensity value the sensor received for each coded image includes:

the positioning device generates a first code for the sensor from the light intensity values of the first images received by that sensor;

the positioning device generates a second code for the sensor from the light intensity values of the second images received by that sensor.

Determining the pixel coordinates of the multiple light intensity sensors from their codes includes: the positioning device determines the first coordinate of a sensor in the first direction from the sensor's first code, and determines the second coordinate of the sensor in the second direction from the sensor's second code; the pixel coordinates of the sensor consist of its first coordinate and its second coordinate.

Here, a binary coded image is a binary image whose pattern distribution follows a coding rule; the coding rule of the coded pattern may be, for example, binary code or Gray code.
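A minimal sketch of this per-sensor decoding under Gray coding (the threshold, the helper names, and the sample values are assumptions for illustration): each stripe image contributes one bit, obtained by thresholding the intensity the sensor reported for that image, and the resulting Gray code is converted to a positional stripe index along one direction.

```python
def gray_to_binary(gray: int) -> int:
    """Convert a Gray-coded integer to its plain binary value."""
    binary = gray
    while gray:
        gray >>= 1
        binary ^= gray
    return binary

def decode_sensor(intensities, threshold=0.5):
    """Threshold one sensor's per-image intensities into bits (MSB first)
    and decode the resulting Gray code into a coordinate along one axis."""
    gray = 0
    for v in intensities:
        gray = (gray << 1) | (1 if v > threshold else 0)
    return gray_to_binary(gray)

# Intensities one sensor reported for four vertical-stripe images: bits 1101,
# i.e. Gray code 13, which decodes to column index 9.
u = decode_sensor([0.9, 0.8, 0.1, 0.7])
```

Running the same decoding over the horizontal-stripe images would yield the sensor's second coordinate, giving the full pixel coordinate (u, v).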
With reference to the first aspect, in a possible implementation, the method further includes:

the positioning device determines the pose of the projector relative to the positioning device from the pose of the projector relative to the camera and the pose of the positioning device relative to the camera.
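Using 4x4 homogeneous transforms as one possible representation (the variable names and calibration values are illustrative assumptions), the two camera-frame poses are combined by inverting one and chaining:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Calibrated poses expressed in the camera frame (illustrative values).
T_cam_proj = make_pose(np.eye(3), np.array([0.10, 0.00, 0.00]))  # projector
T_cam_dev = make_pose(np.eye(3), np.array([0.00, -0.20, 0.00]))  # device body
# Pose of the projector relative to the positioning device.
T_dev_proj = np.linalg.inv(T_cam_dev) @ T_cam_proj
```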
With reference to the first aspect, in a possible implementation, the method further includes:

the positioning device determines display content from the pose of the virtual reality device in the environment coordinate system, a three-dimensional map, and media resources;

the positioning device sends the display content to the virtual reality device, so that the virtual reality device displays it.
With reference to the first aspect, in a possible implementation, before the positioning device projects the coded images through the projector, the method includes:

the positioning device sends indication information to the virtual reality device, the indication information instructing the virtual reality device to sample the light intensities received by the multiple light intensity sensors to obtain the multiple light intensity values.
With reference to the first aspect, in a possible implementation, tracking the virtual reality device by the positioning device includes:

the positioning device locates the position of the virtual reality device;

the positioning device moves to a position at a preset distance from the virtual reality device;

the positioning device determines, from captured images, the position of the user wearing the virtual reality device relative to the positioning device;

based on that relative position, the positioning device moves to the direction the user's face is oriented.
In a second aspect, an embodiment of this application provides a positioning method applied to a virtual reality device that includes multiple light intensity sensors. The method includes:

the multiple light intensity sensors respectively receive the light intensities of the coded images projected by a positioning device, yielding multiple light intensity values, where the positioning device tracks the virtual reality device;

the virtual reality device sends the multiple light intensity values to the positioning device; the multiple light intensity values, the coded images, and the positions of the multiple light intensity sensors on the virtual reality device are used by the positioning device to determine the pose of the virtual reality device relative to the positioning device; the pose of the virtual reality device relative to the positioning device and the pose of the positioning device in the environment coordinate system are used to determine the pose of the virtual reality device in the environment coordinate system.
With reference to the second aspect, in a possible implementation, the method further includes:

the virtual reality device receives, from the positioning device, the pose of the virtual reality device relative to the positioning device and the pose of the positioning device in the environment coordinate system;

the virtual reality device determines its pose in the environment coordinate system from the pose of the positioning device in the environment coordinate system and its own pose relative to the positioning device.
With reference to the second aspect, in a possible implementation, the method further includes:

the virtual reality device determines display content from its pose in the environment coordinate system, a three-dimensional map, and media resources;

the virtual reality device displays the display content.
With reference to the second aspect, in a possible implementation, the method includes:

upon receiving the indication information sent by the positioning device, the virtual reality device samples the light intensities received by the multiple light intensity sensors to obtain the multiple light intensity values.
With reference to the second aspect, in a possible implementation, sending the physical orientation of the second electronic device to the first electronic device includes:

the second electronic device obtains its physical orientation through a sensor;

when the physical orientation changes, the second electronic device sends the physical orientation to the first electronic device.
In a third aspect, this application provides an electronic device. The electronic device may include a memory and a processor, where the memory may be used to store a computer program, and the processor may be used to invoke the computer program so that the electronic device performs the method of the first aspect or any possible implementation of the first aspect.

In a fourth aspect, this application provides an electronic device. The electronic device may include a memory and a processor, where the memory may be used to store a computer program, and the processor may be used to invoke the computer program so that the electronic device performs the method of the second aspect or any possible implementation of the second aspect.

In a fifth aspect, this application provides a computer program product containing instructions that, when run on an electronic device, cause the electronic device to perform the method of the first aspect or any possible implementation of the first aspect.

In a sixth aspect, this application provides a computer program product containing instructions that, when run on an electronic device, cause the electronic device to perform the method of the second aspect or any possible implementation of the second aspect.

In a seventh aspect, this application provides a computer-readable storage medium including instructions that, when run on an electronic device, cause the electronic device to perform the method of the first aspect or any possible implementation of the first aspect.

In an eighth aspect, this application provides a computer-readable storage medium including instructions that, when run on an electronic device, cause the electronic device to perform the method of the second aspect or any possible implementation of the second aspect.

In a ninth aspect, an embodiment of this application provides a positioning system that includes a first electronic device and a second electronic device, where the first electronic device is the electronic device described in the third aspect and the second electronic device is the electronic device described in the fourth aspect.

It can be understood that the electronic devices provided in the third and fourth aspects, the computer program products provided in the fifth and sixth aspects, and the computer-readable storage media provided in the seventh and eighth aspects are all used to perform the methods provided by the embodiments of this application. For the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods, which are not repeated here.
Brief Description of the Drawings

Fig. 1 is a schematic diagram of a pixel coordinate system provided by an embodiment of this application;

Fig. 2A is a system architecture diagram of a virtual reality system provided by an embodiment of this application;

Fig. 2B is a system architecture diagram of another virtual reality system provided by an embodiment of this application;

Fig. 3 is a schematic diagram of a head-mounted display device provided by an embodiment of this application;

Fig. 4 is a schematic diagram of a robot provided by an embodiment of this application;

Fig. 5 is a schematic diagram of the relative positional relationship between a head-mounted display device and a robot provided by an embodiment of this application;

Fig. 6 is a schematic diagram of a scenario provided by an embodiment of this application;

Fig. 7 is a flowchart of a positioning method provided by an embodiment of this application;

Fig. 8 is a schematic diagram of a coded image provided by an embodiment of this application;

Fig. 9 is a schematic diagram of two sets of coded images provided by an embodiment of this application;

Fig. 10 is a schematic diagram of light intensity values received by a light intensity sensor provided by an embodiment of this application;

Fig. 11 is a schematic structural diagram of a head-mounted display device provided by an embodiment of this application.
Detailed Description of the Embodiments

The technical solutions in the embodiments of this application are described clearly and in detail below with reference to the accompanying drawings. In the description of the embodiments of this application, unless otherwise stated, "/" means "or"; for example, A/B may mean A or B. "And/or" in this text merely describes an association between related objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of this application, "multiple" means two or more.

Hereinafter, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. A feature qualified by "first" or "second" may therefore explicitly or implicitly include one or more such features. In the description of the embodiments of this application, unless otherwise stated, "multiple" means two or more.

The term "user interface (UI)" in the following embodiments refers to the medium interface for interaction and information exchange between an application or operating system and the user; it converts between the internal form of information and a form acceptable to the user. A user interface is source code written in a specific computer language such as Java or Extensible Markup Language (XML); the interface source code is parsed and rendered on the electronic device and finally presented as content the user can recognize. A common form of user interface is the graphical user interface (GUI), which is a user interface related to computer operation that is displayed graphically. It may consist of visible interface elements displayed on the screen of the electronic device, such as text, icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
First, the technical terms involved in the embodiments of this application are described below.

I. Structured light system

A structured light system is a system that performs measurement using structured light. It may consist of an optical projector, a camera, and a computer processing system. The basic principle is as follows: the optical projector projects a specific light signal (also called structured light) onto the object surface and the background; the camera captures the projected light signal; and the position, depth, and other information of the object are then computed from the changes the object causes in the light signal. This position and depth information can be used to reconstruct the entire three-dimensional space. The optical projector may be a projector, a laser, or the like.

1. Structured light

Structured light is a set of projected rays whose spatial directions are known.

Based on the beam pattern, structured light can be divided into point structured light, line structured light, multi-line structured light, surface structured light, and so on. Correspondingly, according to the beam pattern of the structured light projected by the optical projector, structured light systems can be divided into point, line, multi-line, and surface structured light modes.

Point structured light is a beam that forms a single light spot on the object. In the point structured light mode, the beam emitted by a laser projects a spot onto the object; the spot is imaged through the camera lens onto the camera's image plane as a two-dimensional point. The camera's line of sight and the laser beam intersect in space at the light spot, forming a simple triangular geometric relationship. This triangular constraint can be obtained by calibration, and from it the spatial position of the light spot in a known world coordinate system can be uniquely determined. This method must scan the object point by point to measure it; capturing and processing the images takes time, and the time increases sharply with the volume of the measured object.
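The triangular relationship can be sketched as intersecting two rays whose origins and directions are known from calibration (the midpoint method below, the function name, and all numeric values are illustrative assumptions): the light spot lies where the laser beam and the camera's line of sight meet.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint of the closest points between rays o1 + t1*d1 and o2 + t2*d2,
    approximating the intersection of the laser beam and the camera sight line."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Least-squares solve for t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return ((o1 + t1 * d1) + (o2 + t2 * d2)) / 2

# Laser at the origin firing along +z; a camera 1 m to the right sees the spot
# along direction (-1, 0, 2): the two rays meet at (0, 0, 2).
spot = triangulate(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                   np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 2.0]))
```

With noisy calibration the two rays rarely intersect exactly, which is why the midpoint of the closest points is used rather than an exact intersection.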
Line structured light is a beam that forms a single light stripe on the object; in the line structured light mode, one beam is projected onto the object, forming one stripe on its surface. Multi-line structured light projects multiple stripes. The surface structured light mode projects a two-dimensional structured light image, whose pattern may be grating stripes.

2. Surface structured light mode

Surface structured light is a beam that projects a two-dimensional image onto the object surface. In the surface structured light mode, the optical projector projects a two-dimensional structured light pattern onto the object surface. This method can measure the three-dimensional profile without scanning, so measurement is fast; the most common surface structured light method is to project grating stripes onto the object surface.

3. Coded structured light measurement

Coded structured light is structured light that has been encoded: the pattern it forms when projected onto an object follows the coding rule of the corresponding coding method. Coding methods include Gray coding (Gray Encoding), binary coding, two-dimensional grid pattern coding, random pattern coding, color coding, grayscale coding, neighborhood coding, phase coding, hybrid coding, and so on. For example, the pattern on a binary coded image may be stripes whose arrangement follows Gray coding.
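As an illustration of such a stripe arrangement (the function names and image sizes are assumptions), the Gray-coded stripe images for one direction can be generated as one bit plane per projected image, so that adjacent pixel columns differ in exactly one image:

```python
import numpy as np

def binary_to_gray(n: int) -> int:
    """Gray code of a non-negative integer."""
    return n ^ (n >> 1)

def gray_stripe_rows(width: int, n_bits: int) -> np.ndarray:
    """One binary row per projected image: column u is bright in image k
    iff bit k (MSB first) of the Gray code of u is 1."""
    images = np.zeros((n_bits, width), dtype=np.uint8)
    for u in range(width):
        g = binary_to_gray(u)
        for k in range(n_bits):
            images[k, u] = (g >> (n_bits - 1 - k)) & 1
    return images

imgs = gray_stripe_rows(8, 3)
# Gray property: neighbouring columns differ in exactly one of the images.
diffs = [int(np.sum(imgs[:, u] != imgs[:, u + 1])) for u in range(7)]
```

The single-bit difference between neighbouring columns is what makes Gray coding robust here: a sensor sitting on a stripe boundary can be wrong by at most one column.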
通过编码结构光确定物体表面点与二维图像的像素点之间的对应关系,称为编码结构光测量法。编码结构光测量法可以分为空间编码方法和时间编码方法。由于编码结构光投射到物体表面形成带有编码图案的图像,为方便描述,以下将编码结构光称为编码图像。Determining the correspondence between the surface points of an object and the pixels of a two-dimensional image by coding structured light is called coded structured light measurement. Coded structured light measurement methods can be divided into spatial coding methods and temporal coding methods. Since the coded structured light is projected onto the surface of an object to form an image with a coded pattern, for the convenience of description, the coded structured light is referred to as a coded image below.
其中,时域编码方法是,光学投射器按时间顺序依次投影多个编码图像,其中,多个编码图像中每一个编码图像上的图案是不同的,每一个编码图像上的图案由多个子图案组成,每一个编码图像的子图案对应一个数值;相机在光学投射机投射一个编码图像时,拍摄一个投影图像,得到多张投影图像;计算机处理系统可以对多张投影图像上和子图案进行识别,得到投影图像上每一个像素对应的多张子图案,进而,基于子图案与数值的对应关系确定每一个像素对应的数字序列;通过识别每一个像素点上的物体,确定每一个像素点上的物体对应的数字序列,该数字序列用于指示物体的位置。Wherein, the time-domain encoding method is that the optical projector projects a plurality of encoded images sequentially in time sequence, wherein the patterns on each encoded image in the plurality of encoded images are different, and the patterns on each encoded image are composed of a plurality of sub-patterns The sub-pattern of each coded image corresponds to a value; when the camera projects a coded image on the optical projector, it takes a projected image to obtain multiple projected images; the computer processing system can identify the sub-patterns on multiple projected images, Obtain multiple sub-patterns corresponding to each pixel on the projection image, and then determine the digital sequence corresponding to each pixel based on the correspondence between the sub-pattern and the value; by identifying the object on each pixel, determine the number on each pixel A sequence of numbers corresponding to an object, which is used to indicate the location of the object.
4. Binary coded image
A binary coded image is a binary image whose pattern distribution follows the coding rule of the corresponding coding method. A binary image is an image in which each pixel takes only one of two possible values or gray levels.
Before introducing the encoding of binary images, the pixel coordinate system is introduced first.
The projector projects the binary coded image onto an object; through the coding, the position of the object in the projector coordinate system can be determined, and that position can be expressed in the pixel coordinate system. The pixel coordinate system is a two-dimensional coordinate system whose unit is the pixel and whose origin O is at the upper-left corner.
(A) in FIG. 1 exemplarily shows the pixel coordinate system of a first image. As shown in (A) of FIG. 1, the rectangle represents the first image; taking the upper-left corner of the first image as the origin O, the pixel coordinate system shown in the figure can be established, with both the abscissa u and the ordinate v measured in pixels. It can be understood that the abscissa u and the ordinate v of a pixel indicate, respectively, the column and the row in which the pixel is located.
As shown in (B) of FIG. 1, each column of pixels in the image is represented by a vertical stripe: the abscissa of the first column of pixels is 1, that of the second column is 2, that of the third column is 3, and so on. As shown in (C) of FIG. 1, each row of pixels in the image is represented by a horizontal stripe: the ordinate of the first row of pixels is 1, that of the second row is 2, that of the third row is 3, and so on.
For example, if a pixel has u = 1 and v = 2, that pixel is located where the first column in (B) of FIG. 1 intersects the second row in (C) of FIG. 1.
The encoding and decoding of images is introduced below, taking the five-bit Gray code as an example. The five-bit Gray code comprises 32 Gray codewords, each composed of five code bits.
When encoding binary coded images, the number of Gray-code bits and the number of binary coded images can first be determined from the size of the images. For example, if the width of the binary coded image is 32 pixel positions and the height is less than 32 pixel positions, the abscissa values of the image can be encoded with a five-bit Gray code, and the number of binary coded images is 5. For example, if the five-bit Gray code corresponding to a first pixel position is 00011, then the code projected at the first pixel position by the first binary coded image is 0; by the second binary coded image, 0; by the third, 0; by the fourth, 1; and by the fifth, 1. It can thus be seen that the five binary coded images together constitute the five-bit Gray code of a pixel position.
When encoding binary coded images, the images can be encoded according to the abscissa and ordinate values of the pixel positions. Taking the five-bit Gray code as an example, if the abscissa of a certain pixel position is 3 and the Gray code assigned to the value 3 is 00011, that pixel position is encoded as 00011; conversely, when the Gray code of a certain pixel position is determined to be 00011, its abscissa can be determined to be 3.
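The mapping between a column's abscissa and its Gray codeword can be sketched as follows. This is an illustrative sketch, not the patent's implementation: it assumes columns are numbered from 1 and that column u receives the binary-reflected Gray code of u - 1, a convention under which abscissa 3 maps to 00011 as in the example above.

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code of the integer n."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Invert the Gray code by cumulative XOR over the bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def encode_column(u: int, bits: int = 5) -> str:
    """Codeword projected onto the column with abscissa u (columns numbered from 1)."""
    return format(to_gray(u - 1), "0{}b".format(bits))

def decode_column(code: str) -> int:
    """Recover the abscissa from the codeword read off the projected images."""
    return from_gray(int(code, 2)) + 1
```

Under this convention, encode_column(3) yields "00011" and decoding 00011 gives abscissa 3; consecutive columns differ in exactly one bit, which is what limits decoding errors at stripe boundaries.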
In the embodiments of this application, photosensitive sensors are used in place of a camera to cooperate with the projector in acquiring the pixel coordinates of an object.
2. Pose
1. Definition of pose
The pose of an object is its position and orientation in a given coordinate system, and can be described by the relative pose of a coordinate system attached to the object.
For example, an object can be represented by a coordinate system B attached to it; the pose of the object relative to a coordinate system A is then equivalent to the pose of coordinate system B relative to coordinate system A. For example, if a robot coordinate system F2 is established with a fixed point on the robot as its origin, the pose of the robot relative to the environment coordinate system F0 is equivalent to the pose of the robot coordinate system F2 relative to the environment coordinate system F0.
The pose of coordinate system B relative to coordinate system A can be expressed by a rotation matrix R and a translation matrix T, and can be written as the homogeneous transform T(A←B) = [R, T; 0, 1]; the pose of the object relative to coordinate system A can then be represented by T(A←B). It should be noted that when coordinate system A is the environment coordinate system F0, the pose of the object can be represented by T(F0←B).
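The [R, T; 0, 1] representation can be sketched as a 4×4 homogeneous transformation matrix. A minimal sketch in Python with NumPy; the function names are illustrative, not from the patent:

```python
import numpy as np

def pose_matrix(R, T):
    """Pack rotation R (3x3) and translation T (3,) into a 4x4 homogeneous transform."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = T
    return M

def transform_point(M, p):
    """Express in frame A a point p given in frame B, where M is the pose of B in A."""
    return (M @ np.append(p, 1.0))[:3]

# Example: frame B is rotated 90 degrees about z and shifted by 1 along x of frame A.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
M = pose_matrix(R, np.array([1.0, 0.0, 0.0]))
```

Here transform_point(M, [1, 0, 0]) gives [1, 1, 0]: the point is first rotated by R and then shifted by T.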
In the embodiments of this application, if the indoor space is described by the environment coordinate system F0, the indoor positioning of the head-mounted display device is the pose of the head-mounted display device in the environment coordinate system F0. The head-mounted display device can be represented by the head-mounted display device coordinate system F3, so the pose of the head-mounted display device in the environment coordinate system F0 is the transform of F3 relative to F0, T(F0←F3).
2. Perspective-n-Point (PnP) algorithm
The PnP algorithm is a method for solving 3D-to-2D point-pair motion; it estimates the pose of an object from n 3D space points and the projected positions of those n points, where the n 3D space points are points on the object and n is a positive integer. The projected positions of the n 3D space points can be obtained with the structured light system and expressed as coordinates in the pixel coordinate system.
The PnP problem has many solution methods, for example P3P, which estimates the pose from three point pairs, direct linear transformation (DLT), and EPnP. In addition, nonlinear optimization can be used to construct a least-squares problem and solve it iteratively.
In the embodiments of this application, after obtaining the three-dimensional coordinates of L light intensity sensors in the head-mounted display device coordinate system and the coordinates of the L light intensity sensors in the pixel coordinate system, the robot can determine the pose of the head-mounted display device in the projector coordinate system based on the PnP algorithm, where L is a positive integer.
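For illustration, a minimal direct linear transform (DLT) solution of the PnP problem is sketched below. It assumes at least six non-degenerate 3D-to-2D correspondences in normalized (intrinsics-removed) image coordinates and noise-free data; it is one of the textbook solvers mentioned above, not the patent's specific method.

```python
import numpy as np

def pnp_dlt(pts3d, pts2d):
    """Estimate the pose (R, t) from n >= 6 object-frame 3D points and their
    projections in normalized image coordinates, via the direct linear transform."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The stacked projection equations A p = 0 are solved by the right singular
    # vector belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    P = Vt[-1].reshape(3, 4)
    # Fix the overall sign so the points end up in front of the camera.
    if np.mean([P[2] @ np.append(p, 1.0) for p in pts3d]) < 0:
        P = -P
    # The left 3x3 block equals R up to scale; project it back onto a rotation.
    U, S, Vt2 = np.linalg.svd(P[:, :3])
    R = U @ Vt2
    t = P[:, 3] / S.mean()
    return R, t
```

In the scenario of this application, pts3d would be the sensor coordinates in F3 and pts2d their decoded projector-pixel coordinates after normalization; with exact correspondences the recovered R and t match the true pose.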
3. Simultaneous Localization And Mapping (SLAM) algorithm
1. Classification of SLAM
The sensors currently used for SLAM fall mainly into two categories, giving lidar-based laser SLAM and vision-based visual SLAM (VSLAM). Laser SLAM is based on the point cloud information returned by a lidar, while visual SLAM is based on the image information returned by a camera.
In the embodiments of this application, the robot can use laser SLAM to realize map construction and localization. It should be understood that laser SLAM is mainly applied indoors, and the maps constructed by laser SLAM are more accurate than those constructed by visual SLAM. It should be noted that in other embodiments, visual SLAM may also be used to realize the robot's indoor map construction and localization functions.
2. Laser SLAM
Laser SLAM can use a two-dimensional (2D) lidar or a three-dimensional (3D) lidar. A 2D lidar is generally used on indoor robots (such as floor-sweeping robots), while a 3D lidar is generally used in the field of autonomous driving.
A lidar serves two main purposes. On one hand, it provides point cloud data for the mapping algorithm: once the mapping algorithm has acquired enough point cloud data, it can construct a local map centered on the robot with the lidar range as its radius. On the other hand, the predicted pose of a Bayesian filter can be corrected through the system observation model, improving the accuracy of the robot pose estimated by the filter. Once the best pose is known, the local map can be merged into the global map.
The map constructed by 2D laser SLAM is a two-dimensional grid map; the map constructed by 3D laser SLAM is generally a three-dimensional point cloud map, that is, a map composed of discrete 3D space points. Current 2D SLAM algorithms include the gmapping-SLAM algorithm and the Hector-SLAM algorithm. For example, the gmapping-SLAM algorithm uses particle filtering to convert the collected laser ranging data into a grid map. The core idea of particle filtering is to represent the spatial distribution by random state particles drawn from the posterior probability (observation equation). In simple terms, the particle filter method approximates the probability density function with a set of random samples propagating through the state space, replacing the state equation with the sample mean to obtain a minimum-variance estimate of the state distribution; these samples are the particles.
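The predict-weight-resample cycle of a particle filter, as used by gmapping-SLAM, can be illustrated on a toy one-dimensional state. This sketch is not gmapping itself, and all parameter values are illustrative:

```python
import math
import random

def particle_filter_step(particles, control, measurement,
                         motion_noise=0.1, meas_noise=0.5):
    """One predict-weight-resample cycle of a toy 1D particle filter.
    particles: scalar states; control: commanded displacement;
    measurement: noisy observation of the true position."""
    # 1. Predict: propagate every particle through the noisy motion model.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # 2. Weight: Gaussian likelihood of the measurement under each particle.
    weights = [math.exp(-0.5 * ((measurement - p) / meas_noise) ** 2) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3. Resample: redraw the particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

Repeating this step while feeding in odometry (control) and range measurements keeps the particle cloud, whose sample mean is the pose estimate, concentrated around the true pose.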
In the embodiments of this application, the robot can construct an indoor two-dimensional map through 2D laser SLAM and, combining it with the point cloud data detected by the structured light system, synthesize an indoor three-dimensional map; the robot can also construct the indoor three-dimensional map directly through 3D laser SLAM.
To introduce the positioning method based on a virtual reality system provided by the embodiments of this application more clearly and in detail, the virtual reality system provided by the embodiments of this application is introduced first.
Please refer to FIG. 2A, which is a system architecture diagram of a virtual reality system provided by an embodiment of this application. As shown in FIG. 2A, the virtual reality system includes a virtual reality device 100 and a positioning device 200, which can establish a communication connection with each other. Specifically:
The virtual reality device 100 may be a head-mounted display device, a handle, or the like. A head-mounted display device may also be called a head-worn glasses device, a glasses device, VR glasses, etc.; a handle may also be called an electromagnetic handle, a control handle, etc.
The positioning device 200 is used to position the virtual reality device 100. The positioning device is a device with mobility and positioning capability, such as a mobile robot or a flying robot.
In some embodiments, the user can wear the head-mounted display device and view a virtual immersive scene, where the head-mounted display device determines the displayed scene content from its position in space. That is, the head-mounted display device changes the output display content based on changes in its own position. This raises the problem of positioning the head-mounted display device: the device needs to obtain its own spatial position and determine the displayed scene content, thereby realizing the user's immersive experience and the obstacle avoidance function of the virtual reality system.
In the embodiments of this application, the positioning device 200 can position the virtual reality device 100 to obtain the position information of the virtual reality device, and then send the position information to the virtual reality device 100, so that the virtual reality device 100 displays the virtual scene picture based on the position information. For specific implementations, refer to the following embodiments, which are not detailed here.
The communication connection established between the virtual reality device 100 and the positioning device 200 may include, but is not limited to: a wireless fidelity direct (Wi-Fi direct, also known as wireless fidelity peer-to-peer, Wi-Fi P2P) communication connection, a Bluetooth communication connection, a near field communication (NFC) connection, and so on.
Exemplarily, an embodiment of this application provides a system architecture diagram of another virtual reality system.
Please refer to FIG. 2B, which shows a virtual reality system provided by an embodiment of this application. As shown in FIG. 2B, the virtual reality system includes a head-mounted display device 300 and a robot 400. The head-mounted display device 300 communicates with the robot 400 through a short-range wireless communication technology such as Bluetooth or Wi-Fi. Specifically:
When the user wearing the head-mounted display device 300 walks around indoors, the robot 400 can follow the user, position the head-mounted display device 300 in real time, and send the indoor pose of the head-mounted display device 300 to the head-mounted display device 300, so that the head-mounted display device 300 determines the displayed content based on its own pose.
Light intensity sensors may be arranged on the head-mounted display device 300; a light intensity sensor is used to receive light intensity. For example, a projector may be arranged on the robot 400 and project toward the head-mounted display device 300; the head-mounted display device 300 can then send the light intensity values received by its light intensity sensors to the robot 400, so that the robot 400 determines the pose of the head-mounted display device 300 based on those light intensity values.
A light intensity sensor is a sensitive device that responds to or converts external light signals or light radiation. Light intensity sensors include photocells, photomultiplier tubes, photoresistors, phototransistors, solar cells, infrared sensors, ultraviolet sensors, fiber-optic photoelectric sensors, color sensors, and the like.
Please refer to FIG. 3, which shows a possible distribution of light intensity sensors on the head-mounted display device 300. In FIG. 3, the black dots represent the light intensity sensors 310; FIG. 3 shows the distribution of 20 light intensity sensors.
Please refer to FIG. 4, which is a schematic diagram of a robot 400 provided by an embodiment of this application. As shown in FIG. 4, the robot 400 includes a structured light system 410 and a moving system 420. Specifically:
The structured light system 410 includes a camera 411 and a projector 412. The projector 412 is used to emit coded structured light, for example binary coded images; the camera 411 is used to capture images. As shown in FIG. 4, dashed lines 101 and 102 indicate the shooting range of the camera 411, and solid lines 201 and 202 indicate the projection range of the projector 412. It can be seen that the shooting range of the camera 411 overlaps the projection range of the projector 412, so that the camera 411 can capture the area onto which the projector 412 projects.
The projector 412 may be a wide-angle infrared digital light projector or another projector, which is not limited here.
The moving system 420 may include several wheels and a driving device, the driving device being used to drive the wheels to move the robot 400. The robot 400 can control the moving system 420 to keep itself at a preset distance and angle from the head-mounted display device 300, so that the head-mounted display device 300 stays within the working range of the camera 411 and the projector 412. For example, the relative position of the head-mounted display device 300 and the robot 400 may be as shown in FIG. 5: the user wears the head-mounted display device 300, and the robot 400 controls the moving system 420 to stay within a preset distance of the head-mounted display device 300, that is, the robot 400 follows the user as the user moves, so that the head-mounted display device 300 remains within the working range of the camera 411 and the projector 412.
In one implementation, the robot 400 keeps a preset distance and angle from the head-mounted display device 300 so that the head-mounted display device 300 is within the working area of the camera 411 and the projector 412; the robot 400 can then control the projector 412 to emit coded structured light toward the head-mounted display device 300, capture the head-mounted display device 300 with the camera 411, and decode the images captured by the camera 411 to determine the pose of the head-mounted display device 300.
In some embodiments, the camera 411 and the projector 412 are rotatable. The robot can control the camera 411 and the projector 412 to rotate and, from the rotation parameters, compute the transformation between the rotated camera 411 and the projector 412, or between the camera 411 and the robot 400, and so on.
Optionally, the robot may further include a laser SLAM system. The laser SLAM system is used by the robot 400 to detect the indoor environment and generate an indoor map. The robot 400 can then determine its own indoor position based on the map, and from its own indoor position determine the indoor pose of the head-mounted display device. The details of how the robot 400 determines the indoor pose of the head-mounted display device can be found in the following embodiments and are not detailed here.
Based on the system architecture of FIG. 2B, the head-mounted display device provided in FIG. 3, and the robot provided in FIG. 4, an embodiment of this application provides a scenario as shown in FIG. 6.
Please refer to FIG. 6: the user wears the head-mounted display device 300, and the user and the robot 400 are located in the same indoor environment. The indoor environment also contains several obstacles 500.
First, several indoor coordinate systems are introduced:
1. Environment coordinate system F0
The robot takes a certain indoor point as the coordinate origin and establishes the environment coordinate system F0 shown in FIG. 4. The environment coordinate system F0 is a three-dimensional coordinate system used to describe the three-dimensional coordinates of objects in the real world; every indoor position can be represented by a coordinate point in the environment coordinate system F0. In some embodiments, the environment coordinate system F0 may also be called the world coordinate system.
2. Coordinate systems of the structured light system
The coordinate systems of the structured light system include the camera coordinate system F11 and the projector coordinate system F12. The camera coordinate system F11 takes the optical center of the camera as its origin, with the optical axis coinciding with the z-axis; the projector coordinate system F12 takes the optical center of the projector as its origin, with the optical axis coinciding with the z-axis.
After the positions of the camera and the projector are determined, the camera and the projector can be calibrated to obtain the camera parameters, the projector parameters, and the transformation between the camera coordinate system F11 and the projector coordinate system F12. The camera parameters include the camera focal length and the coordinates of the camera center point. The transformation between the projector coordinate system F12 and the camera coordinate system F11 can be expressed by the matrix [R, T; 0, 1], where R represents the rotation matrix and T represents the translation matrix. The calibration can be performed with machine vision software (HALCON) or with other calibration methods, which are not limited here.
Specifically, the projector projects the coded structured light onto an object and the camera photographs the object; by decoding the images captured by the camera, the coordinates of the object in the pixel coordinate system can be determined. The coordinate origin of the pixel coordinate system is at the upper-left corner of the image, and its basic unit is the pixel. It can be understood that the camera coordinate system F11, the projector coordinate system F12, and the position of the object determine the object's coordinates in the pixel coordinate system, so the object's coordinates in the pixel coordinate system reflect the positional relationship between the object and the coordinate systems F11 and F12.
3. Robot coordinate system F2
The robot coordinate system F2 takes a fixed point on the robot as its coordinate origin; the position of this fixed point is fixed relative to the structured light coordinate systems. For example, the origin of the robot coordinate system F2 may be located as shown in FIG. 6.
As shown in FIG. 6, the camera 411 is mounted on the robot 400. After the positions of the camera 411 and the robot 400 are determined, the rotation matrix and translation matrix of the camera coordinate system F11 relative to the robot coordinate system F2 can be determined by hand-eye calibration; this transformation, denoted T(F2←F11), indicates the relationship of F11 relative to F2. It should be noted that the robot can also determine T(F2←F11) by other calibration methods, which are not limited here.
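Chaining such calibrated transforms is how an observation in the camera frame reaches the environment frame. A sketch assuming all transforms are 4×4 homogeneous matrices; the notation and function names are illustrative:

```python
import numpy as np

def compose(T_ab, T_bc):
    """Chain transforms: T(A<-B) @ T(B<-C) = T(A<-C)."""
    return T_ab @ T_bc

def invert(T_ab):
    """Invert a rigid transform using R^T rather than a general matrix inverse."""
    R, t = T_ab[:3, :3], T_ab[:3, 3]
    inv = np.eye(4)
    inv[:3, :3] = R.T
    inv[:3, 3] = -R.T @ t
    return inv

# A point observed in the camera frame F11 reaches the environment frame F0
# through the robot frame F2:  p_F0 = T(F0<-F2) @ T(F2<-F11) @ p_F11.
```

The same chaining applied in the opposite direction, via invert, expresses an environment-frame point in the camera frame.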
4. Head-mounted display device coordinate system F3
The head-mounted display device coordinate system F3 takes a fixed point on the head-mounted display device as its coordinate origin.
In the embodiments of this application, a plurality of light intensity sensors are arranged on the head-mounted display device, each corresponding to a serial number. After the head-mounted display device coordinate system F3 is determined, the coordinates of each light intensity sensor in the head-mounted display device coordinate system F3 can be determined.
In the embodiments of this application, the pose of the head-mounted display device is the pose of the head-mounted display device coordinate system F3 relative to the environment coordinate system F0, and can be represented by T(F0←F3).
Based on the scenario shown in FIG. 6, and in combination with the head-mounted display device provided in FIG. 3 and the robot provided in FIG. 4 of the embodiments of this application, the positioning method provided by the embodiments of this application is described in detail below.
FIG. 7 exemplarily shows the flow of the positioning method provided by an embodiment of this application. The positioning method may include some or all of the following steps:
S101. The robot establishes a communication connection with the head-mounted display device.
The communication connection established between the head-mounted display device and the robot may include, but is not limited to: a Wi-Fi P2P communication connection, a Bluetooth communication connection, an NFC connection, and so on.
S102. The robot determines its indoor pose based on the indoor three-dimensional map.
The indoor pose of the robot can be represented by T(F0←F2), which indicates the transformation of the robot coordinate system F2 relative to the environment coordinate system F0. It should be understood that the origin of the robot coordinate system F2 is located on the robot, and the pose of F2 relative to the environment coordinate system F0 is equivalent to the pose of the robot in F0, so T(F0←F2) can represent the indoor positioning of the robot.
In some embodiments, the robot can first obtain the indoor three-dimensional map, then detect the surrounding environment in real time based on the laser SLAM system, and finally obtain its indoor pose by comparing the surrounding environment with the indoor three-dimensional map.
The indoor three-dimensional map can be generated by the robot's own detection. For example, if the robot is equipped with a lidar, it can move slowly indoors and generate an indoor two-dimensional map with the laser SLAM system, then obtain indoor point cloud data based on the structured light system, and generate the indoor three-dimensional map from the two-dimensional map and the point cloud data. As another example, the robot can obtain the indoor three-dimensional map through the structured light system: specifically, the robot projects toward a target area with the projector, acquires images with the camera, and then analyzes the images to obtain the indoor three-dimensional map.
It should be noted that the robot may also obtain and save the indoor three-dimensional map from other devices, or obtain it by other methods, which are not limited here. It can be understood that, based on the indoor three-dimensional map, the robot can determine the drivable indoor area and thereby avoid obstacles itself.
S103. The robot projects binary coded images toward the head-mounted display device.
A binary coded image is a binary image whose pattern distribution follows a coding rule; the coding rule of the coded pattern may be binary code, Gray code, or the like.
In some embodiments, the robot can project two groups of binary coded images toward the head-mounted display device through the projector at a preset frequency, in real time or at preset time intervals. The patterns of the first group of images are stripes in a first direction, and the patterns of the second group of images are stripes in a second direction, the first and second directions being perpendicular. Each stripe corresponds to a pixel position: taking the first direction as vertical and the second direction as horizontal, each vertical stripe corresponds to a column of pixels and each horizontal stripe corresponds to a row of pixels. It should be noted that the order of each group of binary coded images is determined by the coding rule, and the robot projects the images in that order; details can be found in the following embodiments.
The number N of binary coded images is determined by the size of the binary coded images and the coding method. For example, if the coding method is Gray coding and the binary coded image has length height and width width, then the number N of binary coded images in the first or second direction satisfies N > M, where M = max(log2(width), log2(height)) and N is a positive integer. As another example, if the coding method of the binary coded images is sinusoidal, at least three binary coded images are needed.
Taking the five-bit Gray code as an example, a five-bit Gray code can encode 32 (2^5 = 32) pixel positions. Please refer to FIG. 8, which exemplarily shows a group of binary coded images whose patterns are stripes and whose coding rule is the five-bit Gray code.
Referring to FIG. 8, there are five binary coded images, each divided into multiple vertical stripe regions, and each stripe region represents a pixel position. A stripe region with hatching means that the code at that pixel position is 0, and a blank stripe region means that the code is 1. Each pixel position corresponds to one Gray code, and the five binary coded images respectively correspond to the five bits of the Gray code; from the codes of the same pixel position in the five binary coded images, the five-bit Gray code of that pixel position can be determined. Taking the third position as an example, its abscissa is 3 and its corresponding Gray code is 00011; the third position is therefore coded as 0 in the first binary coded image, 0 in the second, 0 in the third, 1 in the fourth, and 1 in the fifth. It should be noted that FIG. 8 only takes 8 pixel positions as an example; the binary coded images corresponding to a five-bit Gray code can include 32 pixel positions.
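A group of stripe images like those of FIG. 8 can be generated by writing bit k of each column's Gray codeword into the k-th image. A sketch with NumPy, assuming columns are numbered from 1 and carry the Gray code of the column index minus one, which reproduces the 00011 codeword of the third position above:

```python
import numpy as np

def gray_stripe_patterns(width=32, height=8, bits=5):
    """Return `bits` binary stripe images; image k holds bit k (most significant
    first) of the Gray codeword assigned to each pixel column."""
    codes = [(u - 1) ^ ((u - 1) >> 1) for u in range(1, width + 1)]
    patterns = []
    for k in range(bits):
        row = [(c >> (bits - 1 - k)) & 1 for c in codes]
        patterns.append(np.tile(np.array(row, dtype=np.uint8), (height, 1)))
    return patterns
```

Each image is constant along its columns, giving the vertical stripes of FIG. 8; a second group transposed to horizontal stripes encodes the ordinates in the same way.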
请参见图9,图9是本申请实施例提供的两组二值编码图像。第一组图像为图8中所示的二值编码图像,第一组图像的条纹为竖条纹;第二组图像的条纹为横条纹,第二组图像的详细内容可以参见图8的相关描述,此处不赘述。Please refer to FIG. 9, which shows two sets of binary coded images provided by the embodiment of the present application. The first group of images is the binary coded image shown in Figure 8, the stripes of the first group of images are vertical stripes; the stripes of the second group of images are horizontal stripes, and the details of the second group of images can be found in Fig. 8, the relevant description will not be repeated here.
在一些实施例中,机器人可以先向头戴式显示设备发送指示信息,再向头戴式显示设备发射二值编码图像,该指示信息用于指示头戴式显示设备对光强传感器接收到的光强进行采样。其中,机器人可以先向头戴式显示设备发送指示信息的方式可以包括以下两种实现:In some embodiments, the robot may first send instruction information to the head-mounted display device, and then transmit a binary coded image to the head-mounted display device. Light intensity is sampled. Among them, the way that the robot can first send instruction information to the head-mounted display device may include the following two implementations:
在一种实现中,机器人可以先向头戴式显示设备投影一张预设亮度的图像,该图像用于指示头戴式显示设备采样光强传感器接收到的光强,进而,以预设投影频率向头戴式显示设备投影二值编码图像。需要说明的是,由于预设投影频率高,所以即使用户佩戴头戴式显示设备移动,也可以实时定位头戴式显示设备。In one implementation, the robot may first project an image with a preset brightness to the head-mounted display device, and the image is used to instruct the head-mounted display device to sample the light intensity received by the light intensity sensor, and then project the image with the preset brightness. The frequency projects a binary coded image to the head-mounted display device. It should be noted that since the preset projection frequency is high, even if the user wears the head-mounted display device and moves, the head-mounted display device can be positioned in real time.
In another implementation, the robot may send a start message to the head-mounted display device through the communication connection established in step S101; the start message instructs the head-mounted display device to sample the light intensity received by the light intensity sensors. The robot then projects the binary coded images at the preset projection frequency.
In some embodiments, light intensity sensors are arranged on all sides of the head-mounted display device. When the robot keeps a preset distance from the head-mounted display device, the device is within the robot's projection range, and the robot can project the binary coded images onto it. For example, the robot may determine the position of the head-mounted display device through Bluetooth positioning and maintain the preset distance through a tracking algorithm, so that it can project the binary coded images onto the device. As another example, in the initial state the user wears the head-mounted display device within the working range of the projector and camera on the robot; as the user walks, the robot maintains a target distance from the device through the tracking algorithm so that the device stays within that working range. It should be noted that the head-mounted display device may be as shown in FIG. 3; as long as the robot is located around the head-mounted display device, it can capture images of the device.
In other embodiments, light intensity sensors are arranged on only part of the head-mounted display device. The robot can then move to a position facing that part of the device and keep the preset distance from the device, so that the head-mounted display device is within the robot's projection range.
The robot may track the head-mounted display device itself or track the user wearing it. For example, the robot may identify the user wearing the head-mounted display device and then track that user to maintain the preset distance, so that the robot can project onto the device.
Optionally, the robot may also determine the positional relationship between itself and the human body based on a trained neural network model, and then move to the front of the user to ensure the user is within the projector's working range. For example, the robot may capture an image with its camera and input the image into the trained neural network model to obtain the user's features; if the robot obtains all of the user's facial features from the image, it can determine that it is located in front of the user.
In one implementation, the robot determines the position of the head-mounted display device through Bluetooth positioning; it then moves to a position at the preset distance from the device; it then captures an image and, based on image recognition, determines the positional relationship between itself and the head-mounted display device; based on that positional relationship, it moves to a target position relative to the device, such as in front of it.
S104. While the robot projects the binary coded images toward the head-mounted display device, the device samples the light intensity values received by each light intensity sensor, obtaining a sequence of light intensity values for each sensor.
The head-mounted display device is provided with L light intensity sensors, where L is a positive integer greater than or equal to 4.
In some embodiments, while the robot projects the binary coded images, the head-mounted display device receives light intensity signals through its light intensity sensors, obtaining the light intensity value received by each sensor. When it determines that the projector has started projecting, the device samples the light intensity received by each sensor at a preset sampling frequency through a data acquisition unit (also called a data acquisition card), obtaining each sensor's sequence of light intensity values.
The preset sampling frequency is greater than or equal to twice the preset projection frequency. For example, the robot sends the preset projection frequency to the head-mounted display device through the communication connection established in step S101; correspondingly, the device receives the preset projection frequency and determines the preset sampling frequency as twice that value.
For example, the robot projects the binary coded images shown in FIG. 8 toward the head-mounted display device. Suppose the first light intensity sensor is located at the position corresponding to the third position, that is, at the position that receives the light intensity of the third position in the binary coded images; the light intensity signal received by the first sensor may then be as shown in FIG. 10. FIG. 10 shows the light intensity values of the signal received by the first light intensity sensor, where the abscissa is time and the ordinate is light intensity. In FIG. 10, the time at which the first sensor receives the first image is taken as the time origin 0, and T is the projection period corresponding to the preset projection frequency. In the first period the robot projects the first binary coded image, in which the third position has a gray value of 0, so the light intensity is weak; as shown in FIG. 10, the light intensity values received by the first sensor in the second and third periods are consistent with those of the first period. In the fourth period the robot projects the fourth binary coded image; the light intensity received by the first sensor corresponds to the third position of that image, whose gray value is 1, so the light intensity is strong; as shown in FIG. 10, the values received in the fifth period are consistent with those of the fourth. It should be noted that, because of unstable factors in the real environment such as other illumination, the light intensity may vary slightly within a period, so the values within one period in FIG. 10 may form a curve.
In some embodiments, the head-mounted display device may determine that the projector has started projecting, and thus start sampling the light intensity, from the indication information sent by the robot; it may instead start sampling when it detects a change in light intensity, or determine the start of projection by other methods, which is not limited here. For example, the indication information may be an image of preset brightness; on receiving that image, the device samples the light intensity received by each sensor at the preset sampling frequency, obtaining each sensor's sequence of light intensity values.
The head-mounted display device may store in advance, or receive from the robot, parameters such as the number of images or the duration of one projection pass of the robot; it then stops sampling based on those parameters and sends the sampled sequences of light intensity values to the robot. For example, if the device prestores the duration of one projection pass of the robot, it can start sampling the light intensity values based on the indication information, end sampling when the sampling time reaches that duration, obtain the sequences of light intensity values, and send them to the robot.
S105. Based on the above communication connection, the head-mounted display device sends to the robot the sequence of light intensity values of each light intensity sensor and the three-dimensional coordinates of each sensor in the head-mounted display device coordinate system F3.
In one implementation, the head-mounted display device generates data for each of the L light intensity sensors, obtaining L groups of data, and sends the L groups to the robot. Any one of the L groups includes the identifier of a light intensity sensor, that sensor's sequence of light intensity values, and that sensor's three-dimensional coordinates in the head-mounted display device coordinate system F3. The identifier is used to distinguish the L sensors and may be, for example, a serial number.
For example, the coordinates of each light intensity sensor in the head-mounted display device coordinate system F3 may be expressed as {id, x, y, z}, where id is the number of the sensor and x, y, z are its three-dimensional coordinate values in F3.
S106. Based on the above sequences of light intensity values, the robot determines the coordinates of each light intensity sensor in the pixel coordinate system.
Specifically, the robot processes the sequences of light intensity values of the L light intensity sensors to generate the coordinates of the L sensors in the pixel coordinate system. The coordinates of each sensor in the pixel coordinate system may be expressed as (u, v), where u is the sensor's abscissa and v is its ordinate.
In some embodiments, based on the projection period, the robot divides each sensor's sequence of light intensity values sent by the head-mounted display device into two subsequences; it processes the two subsequences of each sensor to obtain two codes for that sensor; based on those two codes, it obtains the sensor's abscissa and ordinate in the pixel coordinate system.
For example, with projection period T, the robot projects one binary coded image per period, projecting the first set of images shown in FIG. 9 in the first five periods and the second set shown in FIG. 9 in the last five periods. For each sensor, the robot takes the light intensity values of the first five periods as the first subsequence and those of the last five periods as the second subsequence; it obtains a first Gray code from the first subsequence and a second Gray code from the second; the binary code corresponding to the first Gray code is determined as the sensor's abscissa u, and the binary code corresponding to the second Gray code as its ordinate v, yielding the sensor's coordinates (u, v) in the pixel coordinate system.
The robot may obtain the first Gray code from the first subsequence as follows: it determines one target light intensity value from each period's portion of the subsequence, for example the average of that period's values or any single value in it, yielding five target values; it normalizes each target value to obtain five code values; finally, it sorts the five code values in time order to obtain the first Gray code.
The robot may normalize the target light intensity values to obtain the code values using the following formula:
Ī = (I − Imin) / (Imax − Imin), where I is any one of the five target light intensity values, Imax is the largest of the five values, Imin is the smallest, and Ī is the normalized light intensity value. If Ī is greater than a preset threshold (for example 0.5), the head-mounted display device determines that the corresponding code value is 1; otherwise, it determines that the corresponding code value is 0.
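A minimal sketch of this per-subsequence decoding, assuming one target intensity value per projection period and a 0.5 threshold on the normalized value (the threshold is an assumption for illustration):

```python
def decode_bits(target_values):
    """Normalize the per-period target intensities to [0, 1] using the
    minimum and maximum of the subsequence, then threshold at 0.5 to
    recover the Gray-code bit carried by each period."""
    i_min, i_max = min(target_values), max(target_values)
    span = i_max - i_min
    return [1 if (i - i_min) / span > 0.5 else 0 for i in target_values]
```

For instance, a sensor sitting at the "third position" of FIG. 8 would record weak intensities in the first three periods and strong ones in the last two, decoding to the bits 0, 0, 0, 1, 1.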
In one implementation, after obtaining the two Gray codes of each light intensity sensor, the robot converts the two Gray codes into binary codes to obtain the sensor's abscissa and ordinate in the pixel coordinate system. For example, if the robot obtains a first Gray code of 00011 and a second Gray code of 00111 through the above processing, it can compute that the binary code corresponding to the first Gray code is 3 and that corresponding to the second Gray code is 6, so the sensor's coordinates in the pixel coordinate system are (3, 6).
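The Gray-to-binary conversion can be sketched as follows. Note this is the standard binary-reflected decoding, under which 00011 and 00111 decode to 2 and 5; the embodiment's example values of 3 and 6 are consistent with numbering pixel positions from 1, which is an assumption about the embodiment's convention.

```python
def gray_to_binary(gray: str) -> int:
    """Decode a binary-reflected Gray code string (MSB first): each
    decoded bit is the previously decoded bit XOR the current Gray bit."""
    value = 0
    for bit in gray:
        value = (value << 1) | ((value & 1) ^ int(bit))
    return value
```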
It should be noted that, in some embodiments, the coordinates of each light intensity sensor may be computed by the head-mounted display device; after computing them, the device sends the coordinates of each sensor to the robot, and the following step S107 is performed.
S107. Based on each light intensity sensor's coordinates in the pixel coordinate system and its three-dimensional coordinates in the head-mounted display device coordinate system F3, the robot determines the pose of the head-mounted display device in the projector coordinate system F12.
In some embodiments, the robot obtains in step S105 the three-dimensional coordinates, in the head-mounted display device coordinate system F3, of the L light intensity sensors sent by the device, and in step S106 the coordinates of the L sensors in the pixel coordinate system. Based on the sensor identifiers, the robot can therefore match each sensor's coordinates in the pixel coordinate system with its three-dimensional coordinates in F3; for each sensor's pair of pixel coordinates and three-dimensional coordinates it generates one equation, obtaining L equations; solving the L equations yields the pose of the head-mounted display device in the projector coordinate system F12, which indicates the transformation relationship between the head-mounted display device coordinate system F3 and the projector coordinate system F12.
For example, based on the sensors' coordinates in the pixel coordinate system, the intrinsic parameters of the projector, and the sensors' three-dimensional coordinates in the head-mounted display device coordinate system F3, the robot can compute the pose of the head-mounted display device in the projector coordinate system F12 through a PnP (Perspective-n-Point) algorithm. The PnP algorithm may include the bundle adjustment (BA) algorithm, the direct linear transform (DLT) algorithm, and the like, which is not limited here.
The robot generates one equation for each light intensity sensor, obtaining a system of equations; by solving the system of equations, the pose of the head-mounted display device in the projector coordinate system F12 can be obtained.
The robot may generate each equation based on the following projection formula:

n·[u, v, 1]^T = K·(R·[x, y, z]^T + t), with K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]

where (u, v) are the coordinates of a light intensity sensor in the pixel coordinate system; (x, y, z) are the sensor's three-dimensional coordinate values in the head-mounted display device coordinate system F3; R and t are the rotation and translation of the pose to be solved; the intrinsic parameters of the projector include the focal lengths and the principal point coordinates, the projector's focal lengths being fx and fy and the principal point coordinates being (cx, cy); and n is a scale factor.
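The projection model above can be sketched numerically; the intrinsic values used below are illustrative assumptions, not the projector's actual calibration:

```python
import numpy as np

def project(point_f3, R, t, fx, fy, cx, cy):
    """Project a sensor's 3-D point (HMD coordinate system F3) into
    projector pixel coordinates via n * [u, v, 1]^T = K * (R * p + t)."""
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    p = R @ np.asarray(point_f3, dtype=float) + np.asarray(t, dtype=float)
    uvn = K @ p
    return uvn[:2] / uvn[2]  # divide out the scale factor n
```

Each sensor contributes one such equation with known (u, v) and (x, y, z); solving the L equations for R and t (for example by DLT) yields the pose.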
S108. Based on the transformation relationship between the projector coordinate system F12 and the camera coordinate system F11, the robot converts the pose of the head-mounted display device in the projector coordinate system F12 into the pose of the head-mounted display device in the camera coordinate system F11.
In one implementation, the robot stores the transformation of the projector coordinate system F12 relative to the camera coordinate system F11. Expressing poses and transformations as homogeneous matrices, the robot obtains the pose of the head-mounted display device in the camera coordinate system F11 by left-multiplying the device's pose in F12 by the stored transformation from F12 to F11.
It should be understood that this transformation is a parameter of the structured light system; once the positions of the camera and the projector are fixed, it can be determined through machine vision software or other calibration methods.
S109. Based on the robot's indoor pose, the transformation relationship between the robot coordinate system F2 and the camera coordinate system F11, and the pose of the head-mounted display device in the camera coordinate system F11, the robot computes the pose of the head-mounted display device in the environment coordinate system F0.
In one implementation, the robot obtains its indoor pose through step S102. It then obtains the pose of the head-mounted display device in the environment coordinate system F0 by multiplying, in sequence, the pose of the robot in the environment coordinate system F0, the calibrated relationship between the robot coordinate system F2 and the camera coordinate system F11, and the pose of the head-mounted display device in the camera coordinate system F11. It should be noted that the calibrated relationship between the robot coordinate system F2 and the camera coordinate system F11 can be determined through calibration methods such as the hand-eye calibration method; after the structured light system is mounted and fixed on the robot, the transformation relationship between the robot coordinate system F2 and the camera coordinate system F11 can be calibrated.
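Expressed with 4×4 homogeneous matrices, the chaining of steps S108–S109 can be sketched as follows; the variable names and matrix conventions are assumptions for illustration:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def hmd_pose_in_f0(T_robot_in_f0, T_cam_in_robot, T_hmd_in_cam):
    """Pose of the head-mounted display device in environment frame F0:
    the robot's pose in F0, composed with the robot-to-camera (F2/F11)
    calibration, composed with the HMD pose in the camera frame F11."""
    return T_robot_in_f0 @ T_cam_in_robot @ T_hmd_in_cam
```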
S110. Based on the above communication connection, the robot sends the pose of the head-mounted display device in the environment coordinate system F0 to the head-mounted display device.
In some embodiments, part of the processing in steps S107 to S109 may be performed by the head-mounted display device. For example, the head-mounted display device receives, from the robot, its own pose relative to the robot and the robot's pose in the environment coordinate system; the device then determines its pose in the environment coordinate system from the robot's pose in that system and its own pose relative to the robot.
S111. The head-mounted display device generates display content based on its pose in the environment coordinate system F0 and the three-dimensional map.
In one implementation, based on its pose in the environment coordinate system F0, the head-mounted display device determines a target field-of-view range within the three-dimensional map and, combining existing media resources, generates an image of that range. The target field-of-view range is the range displayed when the user wears the head-mounted display device.
S112. The head-mounted display device displays the display content.
Wearing the head-mounted display device, the user can see the above display content. For example, based on its indoor pose, the head-mounted display device determines that in front of the user wearing it there is no obstacle on the left and there is an obstacle on the right; the generated display content can then show a virtual obstacle at the position of the right-side obstacle and a virtual passage at the left-side position, so that after seeing the content the user can avoid the obstacle and move within the safe area. The virtual obstacle and the virtual passage are media resources stored in the media library of the head-mounted display device.
In other embodiments, the robot generates the display content based on the pose of the head-mounted display device in the environment coordinate system F0 and the three-dimensional map, and then sends the display content to the head-mounted display device.
In some embodiments, after step S107, the robot may determine the pose of the head-mounted display device in the robot coordinate system based on the device's pose in the projector coordinate system and the transformation relationship between the robot coordinate system and the projector coordinate system; it may then determine the device's pose in the environment coordinate system F0 based on the device's pose in the robot coordinate system and the robot's indoor pose. The transformation relationship between the robot coordinate system and the projector coordinate system can be calibrated in advance, for example by using the camera of steps S108 and S109 to calibrate the transformation between the robot coordinate system and the camera coordinate system and the transformation between the camera coordinate system and the projector coordinate system, and from these determining the transformation between the robot coordinate system and the projector coordinate system. It can be understood that, once the transformation between the robot coordinate system and the projector coordinate system has been determined, in some embodiments the robot need not be provided with a camera.
Referring to FIG. 11, FIG. 11 shows a schematic structural diagram of a head-mounted display device 300 provided by an embodiment of the present application. As shown in FIG. 11, the head-mounted display device 300 may include a processor 301, a memory 302, a communication module 303, a sensor module 304, a camera 305, a display apparatus 306, and an audio apparatus 307. These components may be coupled to and communicate with one another. It can be understood that the structure shown in FIG. 11 does not constitute a specific limitation on the head-mounted display device 300.
In other embodiments of the present application, the head-mounted display device 300 may include more or fewer components than shown. For example, it may further include physical keys such as a power key and volume keys, a USB interface, and the like.
The processor 301 may include one or more processing units; for example, it may include an AP, a modem processor, a GPU, an ISP, a controller, a video codec, a DSP, a baseband processor, and/or an NPU. Different processing units may be independent devices or may be integrated in one or more processors. The controller can generate operation control signals according to instruction opcodes and timing signals to control instruction fetching and execution, so that each component performs its corresponding function, such as human-computer interaction, motion tracking/prediction, rendering and display, and audio processing.
The memory 302 stores executable program code for performing the interaction method in the virtual reality scene provided by the embodiments of the present application; the executable program code includes instructions. The memory 302 may include a program storage area and a data storage area. The communication module 303 may include a mobile communication module and a wireless communication module. The mobile communication module can provide solutions for wireless communication, including 2G/3G/4G/5G, applied to the head-mounted display device 300; the wireless communication module can provide solutions for wireless communication applied to the head-mounted display device 300, including WLAN, BT, GNSS, FM, and IR. The communication module 303 can support communication between the head-mounted display device 300 and an electronic device. The sensor module 304 is used to collect motion state data of the user wearing the head-mounted display device 300 and may include an accelerometer, a compass, a gyroscope, a magnetometer, or other sensors for detecting motion.
In the embodiments of the present application, the memory 302 may store the three-dimensional coordinate values of each light intensity sensor on the head-mounted display device.
In some embodiments, the sensor module 304 may be an inertial measurement unit (IMU) arranged in the head-mounted display device 300. The sensor module 304 may be used to acquire motion data of the user's head, such as head position information, displacement, speed, shaking, and rotation. The sensor module 304 may further include an optical sensor used together with the camera 305 to track the user's eye position and capture eye movement data, for example to determine the user's interpupillary distance, the distance between the eyes, the 3D position of each eye relative to the head-mounted display device 300, the magnitude of torsion and rotation (i.e., roll, pitch, and yaw) of each eye, the gaze direction, and so on. The camera 305 may be used to capture static images or video, which may be outward-facing images or video of the user's surroundings, or inward-facing images or video.
In the embodiments of the present application, the sensor module 304 may include at least four light intensity sensors.
The camera 305 can track the movement of one or both of the user's eyes. The camera 305 includes but is not limited to a conventional color camera (RGB camera), a depth camera (RGB-D camera), a dynamic vision sensor (DVS) camera, and the like.
The head-mounted display device 300 presents or displays the VR scene through the GPU, the display apparatus 306, the application processor, and the like. The GPU is a microprocessor for image processing and connects the display apparatus 306 and the application processor. The processor 301 may include one or more GPUs that execute program instructions to generate or change display information. The GPU performs mathematical and geometric computation on data obtained from the electronic device and renders the 3D virtual scene using computer graphics and computer simulation techniques, to provide content for display on the display apparatus 306. The GPU is also used to add correction or pre-distortion to the rendering of the virtual scene, to compensate for or correct distortion caused by the optical components. The GPU may further adjust the content provided to the display apparatus 306 based on data from the sensor module 304; for example, it may add depth-of-field information to that content based on the 3D positions of the user's eyes, the interpupillary distance, and so on. In some embodiments of the present application, the display apparatus 306 receives content provided by the GPU of the head-mounted display device 300 and presents or displays the VR scene according to that content. In other embodiments of the present application, the display apparatus 306 receives data or content processed by the electronic device (for example, data rendered by the electronic device) and presents the VR scene according to that data or content. In some embodiments, the display apparatus 306 may present corresponding images separately for the user's left eye and right eye, thereby simulating binocular vision.
In the embodiments of the present application, the head-mounted display device 300 may receive display content sent by the robot and present the VR scene according to that data or content.
In some embodiments, the display apparatus 306 may include a display screen and optical components used with it. The display screen may include a display panel, which may be used to display virtual images and thereby present a stereoscopic virtual scene to the user. The display panel may use an LCD, OLED, AMOLED, FLED, Mini-LED, Micro-LED, Micro-OLED, QLED, or the like. There may be one display screen or multiple display screens. The optical components may include one or more optical elements, such as Fresnel lenses, convex lenses, concave lenses, and filters, and are used to guide light from the display screen to the exit pupil for perception by the user. In some implementations, one or more optical elements in the optical components may have one or more coatings, such as an anti-reflection coating. The magnification of the image light by the optical components allows the display screen to be physically smaller and lighter and to consume less power. In addition, the magnification of the image light can enlarge the field of view of the content displayed by the screen; for example, the optical components can make the field of view of the displayed content equal to the user's entire field of view. The optical components may also be used to correct one or more optical errors; examples of optical errors include barrel distortion, pincushion distortion, longitudinal chromatic aberration, lateral chromatic aberration, spherical aberration, comatic aberration, field curvature, and astigmatism. In some implementations, the content provided to the display screen is pre-distorted, and the optical components correct the distortion when receiving the image light generated based on that content from the display screen. In other embodiments, the display apparatus 306 may include a projection apparatus for projecting optical signals (for example, light beams) directly onto the user's retina. The projection apparatus may be a projector; it can receive content provided by the GPU, encode the content onto optical signals, and project the encoded optical signals onto the user's retina, so that the user perceives a stereoscopic VR scene. There may be one projection apparatus or multiple projection apparatuses. The audio apparatus 307 is used for audio capture and output and may include but is not limited to a microphone, a speaker, earphones, and the like.
In some embodiments, the system shown in FIG. 2B may also include a handheld device. The handheld device may connect to and communicate with the electronic device wirelessly through short-range transmission technologies such as BT, NFC, or ZigBee, or through a wired connection such as a USB interface, an HDMI interface, or a custom interface. The handheld device may be implemented as a controller handle, a mouse, a keyboard, a stylus, a wristband, and the like. The handheld device may be configured with various sensors, such as an acceleration sensor, a gyroscope sensor, a magnetic sensor, and a pressure sensor. The pressure sensor may be arranged under a confirmation button of the handheld device; the confirmation button may be a physical button or a virtual button. The sensors of the handheld device are used to collect corresponding data; for example, the acceleration sensor collects the acceleration of the handheld device, and the gyroscope sensor collects its angular velocity. The handheld device may send the data collected by each sensor to the electronic device for analysis. Based on the data collected by the sensors in the handheld device, the electronic device can determine the movement and state of the handheld device. The movement of the handheld device may include, but is not limited to: whether it is moving, and the direction, speed, distance, and trajectory of its movement. The state of the handheld device may include whether its confirmation button is pressed. According to the movement and/or state of the handheld device, the electronic device can adjust the image displayed on the head-mounted display device 300 and/or trigger a corresponding function, for example moving a cursor in the image, where the cursor's trajectory is determined by the movement of the handheld device.
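The mapping from handheld-device motion to a cursor trajectory can be sketched as follows. This is a minimal sketch under stated assumptions: gyroscope samples arrive as (yaw rate, pitch rate) pairs in rad/s at a fixed interval, and a hypothetical `gain` parameter converts rotation into pixels; none of these names come from the patent itself.

```python
def update_cursor(cursor, gyro_samples, dt, gain=100.0):
    """Integrate angular-velocity samples into a 2D cursor trajectory.

    cursor: (x, y) starting cursor position in pixels
    gyro_samples: iterable of (yaw_rate, pitch_rate) tuples in rad/s
    dt: sampling interval in seconds
    gain: pixels of cursor movement per radian of device rotation
    Returns the list of cursor positions, starting with the initial one.
    """
    x, y = cursor
    trajectory = [(x, y)]
    for yaw_rate, pitch_rate in gyro_samples:
        x += yaw_rate * dt * gain    # yaw turns the device -> horizontal motion
        y += pitch_rate * dt * gain  # pitch tilts the device -> vertical motion
        trajectory.append((x, y))
    return trajectory
```

In practice an electronic device would fuse the accelerometer and magnetometer readings with the gyroscope to limit drift; simple dead-reckoning integration like this is only meant to show how a cursor trajectory follows the device's motion.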
An embodiment of this application further provides an electronic device. The electronic device includes one or more processors and one or more memories, where the one or more memories are coupled to the one or more processors and are used to store computer program code. The computer program code includes computer instructions; when the one or more processors execute the computer instructions, the electronic device performs the methods described in the foregoing embodiments.
An embodiment of this application further provides a computer program product containing instructions. When the computer program product runs on an electronic device, the electronic device performs the methods described in the foregoing embodiments.
An embodiment of this application further provides a computer-readable storage medium including instructions. When the instructions run on an electronic device, the electronic device performs the methods described in the foregoing embodiments.
It can be understood that the implementations of this application can be combined arbitrarily to achieve different technical effects.
The foregoing embodiments may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, or digital subscriber line) or wireless means (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state disk).
Those of ordinary skill in the art can understand that all or part of the procedures in the methods of the foregoing embodiments may be completed by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when executed, may include the procedures of the foregoing method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disc.
In summary, the foregoing descriptions are merely embodiments of the technical solutions of this application and are not intended to limit the protection scope of this application. Any modification, equivalent replacement, improvement, or the like made based on the disclosure of this application shall fall within the protection scope of this application.
Claims (16)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111339331.4A CN116124119A (en) | 2021-11-12 | 2021-11-12 | A positioning method, positioning device and system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN116124119A true CN116124119A (en) | 2023-05-16 |
Family
ID=86299597
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111339331.4A Pending CN116124119A (en) | 2021-11-12 | 2021-11-12 | A positioning method, positioning device and system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116124119A (en) |
Citations (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2010128133A (en) * | 2008-11-27 | 2010-06-10 | Univ Of Tokyo | Mobile information superimposition system and information superimposition method |
| CN105354820A (en) * | 2015-09-30 | 2016-02-24 | 深圳多新哆技术有限责任公司 | Method and apparatus for regulating virtual reality image |
| US20170115488A1 (en) * | 2015-10-26 | 2017-04-27 | Microsoft Technology Licensing, Llc | Remote rendering for virtual images |
| US20170237940A1 (en) * | 2016-02-12 | 2017-08-17 | Sony Interactive Entertainment Network America Llc | Multiuser telepresence interaction |
| CN108154533A (en) * | 2017-12-08 | 2018-06-12 | 北京奇艺世纪科技有限公司 | A kind of position and attitude determines method, apparatus and electronic equipment |
| CN108230399A (en) * | 2017-12-22 | 2018-06-29 | 清华大学 | A kind of projector calibrating method based on structured light technique |
| CN109753140A (en) * | 2017-11-02 | 2019-05-14 | 腾讯科技(深圳)有限公司 | Operational order acquisition methods, device based on virtual reality |
| CN110442011A (en) * | 2019-07-29 | 2019-11-12 | 中国计量大学 | A kind of method that can continuously detect virtual reality device dynamic delay and the time-delay detection system using this method |
| CN110672097A (en) * | 2019-11-25 | 2020-01-10 | 北京中科深智科技有限公司 | Indoor positioning and tracking method, device and system based on laser radar |
| CN111653175A (en) * | 2020-06-09 | 2020-09-11 | 浙江商汤科技开发有限公司 | Virtual sand table display method and device |
| CN111823240A (en) * | 2019-05-27 | 2020-10-27 | 广东小天才科技有限公司 | A face tracking robot, method, device and storage medium |
| CN112233146A (en) * | 2020-11-04 | 2021-01-15 | Oppo广东移动通信有限公司 | Location recommendation method and apparatus, computer-readable storage medium and electronic device |
| WO2021088498A1 (en) * | 2019-11-08 | 2021-05-14 | 华为技术有限公司 | Virtual object display method and electronic device |
- 2021-11-12: CN CN202111339331.4A patent/CN116124119A/en, status pending
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10788673B2 (en) | User-based context sensitive hologram reaction | |
| US9230368B2 (en) | Hologram anchoring and dynamic positioning | |
| CN110362193B (en) | Target tracking method and system assisted by hand or eye tracking | |
| US9710973B2 (en) | Low-latency fusing of virtual and real content | |
| US10955665B2 (en) | Concurrent optimal viewing of virtual objects | |
| US12256211B2 (en) | Immersive augmented reality experiences using spatial audio | |
| US10620779B2 (en) | Navigating a holographic image | |
| JP5966510B2 (en) | Information processing system | |
| US20130326364A1 (en) | Position relative hologram interactions | |
| US20130328925A1 (en) | Object focus in a mixed reality environment | |
| US20130335405A1 (en) | Virtual object generation within a virtual environment | |
| US20160210780A1 (en) | Applying real world scale to virtual content | |
| US20130342572A1 (en) | Control of displayed content in virtual environments | |
| Schütt et al. | Semantic interaction in augmented reality environments for microsoft hololens | |
| US20240303934A1 (en) | Adaptive image processing for augmented reality device | |
| KR20250123921A (en) | Augmented Reality Ergonomics Assessment System | |
| US11531390B1 (en) | Augmented reality with eyewear triggered IoT | |
| US12302014B2 (en) | High dynamic range for dual pixel sensors | |
| CN118747039A (en) | Method, device, electronic device and storage medium for moving virtual objects | |
| CN116124119A (en) | A positioning method, positioning device and system | |
| KR20250041962A (en) | Method of simultaneous localization and mapping and electronic device perfroming thereof | |
| CN117234281A (en) | Data processing method, device, electronic equipment, head-mounted equipment and medium | |
| CN118827940A (en) | Projection display method, device, projection equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |