
CN118057266A - Method and device for controlling user position in virtual scene - Google Patents

Method and device for controlling user position in virtual scene

Info

Publication number
CN118057266A
CN118057266A (application CN202211462815.2A)
Authority
CN
China
Prior art keywords
ray
user
movement
target gesture
virtual scene
Prior art date
Legal status
Pending
Application number
CN202211462815.2A
Other languages
Chinese (zh)
Inventor
饶小林
方迟
刘硕
刘静薇
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202211462815.2A
Publication of CN118057266A
Status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application provides a method and device for controlling a user's position in a virtual scene. When a target gesture is detected, a ray is displayed in the virtual scene; the starting point of the ray represents the user's current position and the end of the ray represents the target position after the user moves. The movement direction and movement speed of the target gesture are detected. When the movement speed of the target gesture is less than a preset speed, the end position of the ray is moved according to the movement direction of the target gesture; when the movement speed of the target gesture is greater than or equal to the preset speed, the user in the virtual scene is moved to the end position of the ray. The method controls rapid movement of the user's position in the virtual scene according to the movement direction and speed of the user's gesture in the real scene, so the user's position can be moved without an additional controller, which simplifies operation and improves the user experience.

Description

Method and device for controlling user position in a virtual scene

Technical Field

Embodiments of the present application relate to the technical field of electronic devices, and in particular to a method and device for controlling a user's position in a virtual scene.

Background

Extended reality (XR) refers to combining the real and the virtual through computers to create a virtual environment that supports human-computer interaction. XR is also an umbrella term for technologies such as virtual reality (VR), augmented reality (AR) and mixed reality (MR). By fusing the visual interaction technologies of the three, it gives the wearer a sense of immersion in which the virtual world and the real world appear to blend seamlessly.

In an XR scene, users can interact with the XR device through one or more of gaze control, handheld hardware (e.g. controller) control, gesture control, wearable device (e.g. wristband) control, voice control and the like, and thereby control virtual objects in the virtual environment provided by the XR device, for example selecting objects, moving, rotating or resizing them, activating controls, changing colors or skins, defining interactions between virtual objects, or applying virtual forces to virtual objects.

In the prior art, the movement of the virtual object corresponding to a user is usually achieved by moving a handheld controller held by the user, which is inconvenient to operate.

Summary of the Invention

Embodiments of the present application provide a method and device for controlling a user's position in a virtual scene. The method controls rapid movement of the user's position in the virtual scene according to the movement direction and movement speed of the user's gesture in the real scene, so the user's position can be moved without an additional controller, which simplifies operation and improves the user experience.

In a first aspect, an embodiment of the present application provides a method for controlling a user's position in a virtual scene, the method comprising:

when a target gesture is detected, displaying a ray in the virtual scene, wherein the starting point of the ray represents the current position of the user and the end of the ray represents the target position after the user moves;

detecting the movement direction and movement speed of the target gesture;

when the movement speed of the target gesture is less than a preset speed, controlling the position of the ray to move according to the movement direction of the target gesture;

when the movement speed of the target gesture is greater than or equal to the preset speed, controlling the user in the virtual scene to move to the end position of the ray.
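
For orientation only, the following is a minimal Python sketch of the control flow described by these four steps. It is not the claimed implementation: the gesture label "pinch_palm_up", the threshold value, the fixed adjustment gain and the simplified Ray/State structures are all assumptions introduced for illustration.

```python
from dataclasses import dataclass, field
import numpy as np

PRESET_SPEED = 1.5  # m/s; illustrative threshold, not a value from the application


@dataclass
class Ray:
    start: np.ndarray   # represents the user's current position
    end: np.ndarray     # represents the target position after the move


@dataclass
class State:
    user_pos: np.ndarray = field(default_factory=lambda: np.zeros(3))
    ray: Ray | None = None


def update(state: State, gesture: str | None,
           hand_dir: np.ndarray, hand_speed: float) -> None:
    """One control step: gesture plus hand motion in, ray / user position out."""
    if gesture != "pinch_palm_up":            # only the target gesture drives the flow
        state.ray = None
        return
    if state.ray is None:                     # step 1: show the ray when triggered
        state.ray = Ray(start=state.user_pos.copy(),
                        end=state.user_pos + np.array([0.0, 0.0, 2.0]))
    if hand_speed < PRESET_SPEED:             # step 3: slow motion adjusts the target
        state.ray.end += 0.1 * hand_dir       # fixed gain, purely illustrative
    else:                                     # step 4: fast motion confirms the move
        state.user_pos = state.ray.end.copy()
        state.ray = None


# Example: adjust the target twice with slow motions, then a fast flick confirms.
s = State()
update(s, "pinch_palm_up", np.array([0.0, 0.0, -1.0]), 0.3)
update(s, "pinch_palm_up", np.array([1.0, 0.0, 0.0]), 0.3)
update(s, "pinch_palm_up", np.array([0.0, 1.0, 0.0]), 3.0)
print(s.user_pos)   # the user has been moved to the adjusted end point
```

In a real XR runtime, the gesture label and the hand direction and speed would come from the device's hand-tracking stack, and the ray would be rendered rather than stored as two points.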

In some embodiments, controlling the position of the ray to move according to the movement direction of the target gesture includes:

when the movement direction of the target gesture is upward or downward movement and/or inward or outward rotation, controlling the position of the ray to move farther or nearer; and/or

when the movement direction of the target gesture is leftward or rightward movement and/or leftward or rightward rotation, controlling the position of the ray to move left or right.

In some embodiments, when the position of the ray is controlled to move farther, nearer, left or right, the starting position of the ray remains unchanged and the end position of the ray moves.

In some embodiments, when the position of the ray is controlled to move farther, nearer, left or right, both the starting position and the end position of the ray move, and the moving distance of the starting position of the ray is smaller than the moving distance of the end position.

In some embodiments, displaying a ray in the virtual scene when a target gesture is detected includes:

when the target gesture is detected, generating and displaying the ray with a preset position in the virtual scene as the starting point of the ray.

In some embodiments, displaying a ray in the virtual scene when a target gesture is detected includes:

when the target gesture is detected, displaying a virtual object in the virtual scene, and generating and displaying the ray with a point on the virtual object as the starting point of the ray.

In some embodiments, the virtual object is a virtual gesture corresponding to the target gesture, and the starting point of the ray is a palm position or a fingertip position of the virtual gesture.

In some embodiments, the initial length of the ray is a preset length.

In some embodiments, the extension direction of the ray is the direction the user is facing.

In some embodiments, in the process of controlling the position of the ray to move according to the movement direction of the target gesture, the method further includes:

displaying the initial position of the ray and the position after the movement in a visually distinct manner.

In some embodiments, the target gesture is a palm-up, fingers-pinched posture, and the movement speed of the target gesture is the speed at which the pinched fingers flick open.

In some embodiments, when the movement speed of the target gesture is greater than or equal to the preset speed, controlling the user in the virtual scene to move to the end position of the ray includes:

when the movement speed is greater than or equal to the preset speed, controlling the user to teleport to the end position of the ray;

or, when the movement speed is greater than or equal to the preset speed, controlling the user to move to the end position of the ray at a preset target speed.

In some embodiments, the method further includes: hiding the ray in the virtual scene after the user moves to the end position of the ray.

In another aspect, an embodiment of the present application provides a device for controlling a user's position in a virtual scene, the device comprising:

a display module, configured to display a ray in the virtual scene when a target gesture is detected, wherein the starting point of the ray represents the current position of the user and the end of the ray represents the target position after the user moves;

a detection module, configured to detect the movement direction and movement speed of the target gesture;

a control module, configured to control the position of the ray to move according to the movement direction of the target gesture when the movement speed of the target gesture is less than a preset speed;

the control module being further configured to control the user in the virtual scene to move to the end position of the ray when the movement speed of the target gesture is greater than or equal to the preset speed.

In another aspect, an embodiment of the present application provides a control device for a user's position in a virtual scene, the control device comprising a processor and a memory, wherein the memory is used to store a computer program and the processor is used to call and run the computer program stored in the memory to execute any of the methods described above.

In another aspect, an embodiment of the present application provides a computer-readable storage medium for storing a computer program, wherein the computer program causes a computer to execute any of the methods described above.

In another aspect, an embodiment of the present application provides a computer program product, including a computer program which, when executed by a processor, implements any of the methods described above.

According to the method and device for controlling a user's position in a virtual scene provided by the embodiments of the present application, when a target gesture is detected, a ray is displayed in the virtual scene; the starting point of the ray represents the user's current position and the end of the ray represents the target position after the user moves. The movement direction and movement speed of the target gesture are detected. When the movement speed of the target gesture is less than a preset speed, the position of the ray is moved according to the movement direction of the target gesture; when the movement speed is greater than or equal to the preset speed, the user in the virtual scene is moved to the end position of the ray. The method controls rapid movement of the user's position in the virtual scene according to the movement direction and speed of the user's gesture in the real scene, so the user's position can be moved without an additional controller, which simplifies operation and improves the user experience.

Brief Description of the Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.

FIG. 1 is a flowchart of a method for controlling a user's position in a virtual scene provided by an embodiment of the present application;

FIG. 2 is a schematic diagram of an interface of a virtual scene when the user's position movement is triggered;

FIG. 3 is a schematic diagram of the interface transitions of the virtual scene from the moment the user's position movement is triggered until it ends;

FIG. 4 is a schematic structural diagram of a device for controlling a user's position in a virtual scene provided by an embodiment of the present application;

FIG. 5 is a schematic structural diagram of a control device for a user's position in a virtual scene provided by an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.

It should be noted that the terms "first", "second" and the like in the specification, claims and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in an order other than that illustrated or described herein. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or server that includes a series of steps or units is not necessarily limited to those steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to the process, method, product or device.

Embodiments of the present application provide a method for controlling a user's position in a virtual scene, which can be applied to XR devices, including but not limited to VR devices, AR devices and MR devices.

VR: a technology for creating and experiencing virtual worlds. A virtual environment is generated by computation, and multi-source information is combined (the virtual reality mentioned herein includes at least visual perception, and may also include auditory perception, tactile perception, motion perception, and even taste and olfactory perception). It provides a fused, interactive simulation of three-dimensional dynamic scenery and entity behavior in the virtual environment, immersing the user in a simulated virtual reality environment and enabling applications in many virtual environments such as maps, games, video, education, medical care, simulation, collaborative training, sales, assisted manufacturing, maintenance and repair.

A VR device is a terminal that realizes virtual reality effects. It can usually be provided in the form of glasses, a head-mounted display (HMD) or contact lenses to realize visual perception and other forms of perception. Of course, the form of the virtual reality device is not limited to these, and it can be further miniaturized or enlarged as needed.

AR: an AR setting is a simulated setting in which at least one virtual object is superimposed on a physical setting or a representation thereof. For example, an electronic system may have an opaque display and at least one imaging sensor for capturing images or video of the physical setting, which are representations of the physical setting. The system combines the images or video with virtual objects and displays the combination on the opaque display. An individual uses the system to view the physical setting indirectly via the images or video of the physical setting and observes the virtual objects superimposed on the physical setting. When the system uses one or more image sensors to capture images of the physical setting and uses those images to present the AR setting on an opaque display, the displayed images are referred to as video pass-through. Alternatively, an electronic system for displaying an AR setting may have a transparent or translucent display through which an individual can view the physical setting directly. The system may display virtual objects on the transparent or translucent display so that the individual observes the virtual objects superimposed on the physical setting. As another example, the system may include a projection system that projects virtual objects into the physical setting. The virtual objects may be projected, for example, onto a physical surface or as a hologram, so that the individual observes the virtual objects superimposed on the physical setting. More specifically, this is a technology that, while a camera captures images, computes the camera's pose parameters in the real world (also called the three-dimensional world or physical world) in real time and adds virtual elements to the images captured by the camera according to those pose parameters. Virtual elements include, but are not limited to, images, videos and three-dimensional models. The goal of AR technology is to overlay the virtual world on the real world on the screen for interaction.

MR: by presenting virtual scene information within the real scene, an interactive feedback loop is established between the real world, the virtual world and the user to enhance the realism of the user experience. For example, computer-created sensory input (e.g., virtual objects) is integrated with sensory input from the physical setting, or a representation thereof, in a simulated setting. In some MR settings, the computer-created sensory input can adapt to changes in the sensory input from the physical setting. In addition, some electronic systems used to present MR settings can monitor orientation and/or position relative to the physical setting so that virtual objects can interact with real objects (i.e., physical elements from the physical setting or representations thereof). For example, the system can monitor movement so that a virtual plant appears stationary relative to a physical building.

A virtual scene is a virtual scene displayed (or provided) when an application runs on an electronic device. The virtual scene can be a simulation of the real world, a semi-simulated and semi-fictitious virtual scene, or a purely fictitious virtual scene. The virtual scene can be any of a two-dimensional (2D) virtual scene, a 2.5-dimensional virtual scene or a three-dimensional (3D) virtual scene; the embodiments of the present application do not limit the dimensionality of the virtual scene. For example, a virtual scene may include sky, land, ocean and so on, the land may include environmental elements such as deserts and cities, and the user can control a virtual object to move in the virtual scene.

Virtual field of view: the region of the virtual environment that the user can perceive through the lenses of the virtual reality device, expressed by the field of view (FOV) angle of the virtual field of view; the virtual field of view may also be called the virtual viewpoint.

Virtual object: an object that is interacted with in the virtual scene, controlled by a user or by a bot program (for example, an artificial-intelligence-based bot program), and that can stay still, move and perform various behaviors in the virtual scene, such as the various characters in a game.

The virtual field of view changes constantly. In the prior art, the change of the user's virtual field of view is usually controlled through the controller of the XR device. Taking an HMD as an example, the HMD is equipped with a posture detection sensor (such as a nine-axis sensor) for detecting posture changes of the HMD in real time. If the user wears the HMD, then when the user's head posture changes, the real-time head posture is passed to the processor to compute the gaze point of the user's line of sight in the virtual environment. Based on the gaze point, the image within the user's gaze range (i.e., the virtual field of view) is computed from the three-dimensional model of the virtual environment and displayed on the screen, giving an immersive experience as if the user were viewing the real environment.
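
As a small illustration of the gaze computation sketched in this paragraph, the snippet below rotates a local forward vector by the headset's orientation quaternion to obtain the viewing direction in the virtual environment. The (w, x, y, z) quaternion convention and the choice of -Z as the local forward axis are assumptions for the example, not details taken from the application.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

# Assumed convention: -Z is "forward" in the headset's local frame.
head_orientation = np.array([0.9659, 0.0, 0.2588, 0.0])     # roughly a 30-degree yaw
view_dir = quat_rotate(head_orientation, np.array([0.0, 0.0, -1.0]))
print(view_dir)   # gaze direction used to select the visible part of the 3-D model
```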

A change of the virtual field of view can be understood as a change of the user's position; both cause the virtual scene seen by the user to change, so movement of the user's position in the embodiments of the present application can also be understood as movement of the virtual field of view. In some scenes, a virtual object corresponding to the user is displayed in the virtual scene; accordingly, movement of the user's position can be expressed as movement of that virtual object in the virtual scene.

In the embodiments of the present application, the virtual scene may be called a virtual environment, and the real scene may be called a real environment or an artificial-reality environment.

An embodiment of the present application provides a method for controlling a user's position in a virtual scene. FIG. 1 is a flowchart of such a method. This embodiment is described with an XR device as the executing entity. As shown in FIG. 1, the method for controlling the user's position in the virtual scene includes the following steps:

S101: when a target gesture is detected, display a ray in the virtual scene, wherein the starting point of the ray represents the current position of the user and the end of the ray represents the target position after the user moves.

The XR device can monitor the user's hand posture in the real scene, and it can detect the user's hand posture (i.e., the gesture) continuously or periodically. A gesture can be understood as the shape formed by the user's hand in the real scene. In one implementation, the user's gesture can be recognized from images captured by the XR device's built-in camera or an external camera, which capture a description of the user's hand. In another implementation, the gesture can be based on input from a wearable device, such as a glove or wristband that tracks the user's hand movements. Optionally, when recognizing the gesture from the images captured by the camera or the motion data collected by the wearable device, the collected data can be fed into a pre-trained machine learning model for gesture recognition, and the gesture is recognized by that model, thereby improving the accuracy of gesture recognition.
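
As a hedged illustration of such a recognizer, the sketch below checks for the "palm up, fingers pinched" gesture used later in this embodiment from a few tracked hand key points. The landmark layout, the pinch distance and the palm-normal threshold are assumptions; the application itself leaves the recognizer (camera-based, wearable-based or a trained model) unspecified.

```python
import numpy as np

def is_pinch_palm_up(thumb_tip, index_tip, palm_normal,
                     pinch_dist=0.02, up=np.array([0.0, 1.0, 0.0])):
    """Heuristic: thumb and index tips close together and the palm facing up.

    Positions are 3-D points in metres from some hand-tracking source and
    palm_normal points out of the palm; both thresholds are illustrative.
    """
    pinched = np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip)) < pinch_dist
    facing_up = np.dot(palm_normal, up) > 0.7      # within roughly 45 degrees of vertical
    return pinched and facing_up

# Example with made-up landmark positions:
print(is_pinch_palm_up(thumb_tip=np.array([0.10, 1.00, 0.30]),
                       index_tip=np.array([0.11, 1.00, 0.30]),
                       palm_normal=np.array([0.0, 1.0, 0.0])))   # True
```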

The target gesture is used to trigger movement of the user's position; position movement is also called position teleportation. That is, when the target gesture is detected in the real scene, it indicates that the user is about to start moving position.

The target gesture may be palm up, a fist, palm up with the fingers pinched together, an "OK" pose, a thumbs-up pose, and so on. The target gesture may also be a gesture made jointly by both hands, for example, one fist cupped in the other hand, both palms pressed together, both hands raised at the same time, or both hands lowered at the same time. These are only examples; the embodiments of the present application do not limit the specific posture of the target gesture.

After the target gesture is detected, a ray is displayed in the virtual scene. The ray can be used to notify the user that position movement has been triggered. Of course, the user can also be notified in other ways, for example by text displayed in a blank or fixed location of the virtual scene, or by voice playback.

The ray is also used to represent the user's positions before and after the movement: the starting point of the ray represents the user's current position, and the end of the ray represents the target position after the user moves. It should be understood that the starting point of the ray does not represent the user's actual current position; it is only a positional indication used to represent the change of the user's position. A change of the end position of the ray represents a change of the target position, and the user can keep adjusting the target position as needed.

This embodiment does not limit the color or style of the ray; the style of the ray includes line thickness, solid or dashed lines, and so on.

Optionally, an arrow can be added at the end of the ray to distinguish the start and the end of the ray, or another marker can be added at the end of the ray, such as a ring-shaped cursor or a solid dot, to distinguish the start and the end of the ray.

In one implementation, when the target gesture is detected, the ray is generated and displayed with a preset position in the virtual scene as its starting point.

The preset position can be any position in the virtual scene; for example, it can be a position near the middle of the bottom of the virtual scene, a blank position in the virtual scene, or a position in the lower-left or lower-right corner of the virtual scene.

In another implementation, when the target gesture is detected, a virtual object is displayed in the virtual scene, and the ray is generated and displayed with a point on the virtual object as its starting point.

The virtual object may be a virtual gesture corresponding to the target gesture, an arbitrary gesture, a cartoon figure, an eye, a five-pointed star, a circle or another object; this embodiment does not limit it.

The display position of the virtual object can be any position in the virtual scene; for example, it can be a position near the middle of the bottom of the virtual scene, a blank position in the virtual scene, or a position in the lower-left or lower-right corner of the virtual scene.

When the virtual object is the virtual gesture corresponding to the target gesture, the starting point of the ray can be any point on the virtual gesture, for example the center of the palm or a fingertip position; the fingertip position can be the tip of any finger, for example the tip of the middle finger or the tip of the thumb.

Besides determining the starting point of the ray, the XR device also needs to determine the initial length and the extension direction of the ray. Optionally, the initial length of the ray can be a preset length, and the preset length can be configured by the system or set by the user.

The extension direction of the ray can be a preset direction, configured by the system or set by the user. The extension direction of the ray can also be related to the user's current orientation; for example, it can be set to the direction the user is facing, or to the user's front-left or front-right. This embodiment does not limit it.
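
Putting the last few paragraphs together, one plausible way to build the initial ray (chosen start point, preset length, extension along the direction the user faces) is sketched below; the concrete values and the coordinate convention are assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Ray:
    start: np.ndarray   # e.g. a preset point, or the palm/fingertip of the virtual hand
    end: np.ndarray     # candidate target position, marked with a ring-shaped cursor

def spawn_ray(start_point, facing_dir, preset_length=3.0):
    """Create the initial ray: preset length, extending the way the user faces."""
    d = np.asarray(facing_dir, dtype=float)
    d = d / np.linalg.norm(d)
    start = np.asarray(start_point, dtype=float)
    return Ray(start=start, end=start + preset_length * d)

ray = spawn_ray(start_point=[0.0, 1.0, 0.0], facing_dir=[0.0, 0.0, -1.0])
print(ray.end)   # [ 0.  1. -3.]
```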

FIG. 2 is a schematic diagram of an interface of the virtual scene when the user's position movement is triggered. As shown in FIG. 2, when the user's posture is detected as palm up with the fingers pinched, the gesture is displayed in the virtual scene and a ray is emitted with the pinch position as its starting point. The end of the ray is the end bearing a ring-shaped cursor, and the position of the ring-shaped cursor is the target position after the user moves.

S102: detect the movement direction and movement speed of the target gesture.

After position movement is triggered, the user can hold the target gesture while moving and/or rotating the hand, adjusting the position of the ray by changing the hand's direction and movement speed.

The movement direction and movement speed of the target gesture can be the movement direction and movement speed of the hand making the target gesture. They can be detected from images captured by the XR device's built-in camera or an external camera, or based on input from a wearable device, such as a glove or wristband that tracks the user's hand movements, with the hand's movement direction and speed detected from the data collected by the wearable device. The movement direction and speed of the target gesture can also be detected by combining image data captured by the camera with sensor data collected by the wearable device worn on the hand. Optionally, the movement speed of the target gesture can also be the speed at which the fingers of the target gesture flick open.
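
One plausible way to derive the movement direction and speed from tracked hand samples is a finite difference over consecutive, timestamped positions, sketched below; rotation speed or the finger flick-open speed could be estimated the same way from orientations or fingertip separations. The sampling details are assumptions.

```python
import numpy as np

def hand_motion(prev_pos, prev_t, cur_pos, cur_t):
    """Return (unit direction, speed in m/s) between two tracked hand samples."""
    delta = np.asarray(cur_pos, dtype=float) - np.asarray(prev_pos, dtype=float)
    dt = cur_t - prev_t
    speed = np.linalg.norm(delta) / dt if dt > 0 else 0.0
    direction = delta / np.linalg.norm(delta) if speed > 0 else np.zeros(3)
    return direction, speed

# A 5 cm upward movement over 0.1 s gives roughly 0.5 m/s along +Y.
d, v = hand_motion([0.10, 1.00, 0.30], 0.0, [0.10, 1.05, 0.30], 0.1)
print(d, v)
```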

The motion of the target gesture includes translation and/or rotation; accordingly, the movement direction of the target gesture includes a translation direction and/or a rotation direction, and the movement speed of the target gesture includes a translation speed and/or a rotation speed. For example, translation directions include up, down, left and right, and rotation directions include rotating inward, outward, to the left and to the right.

When the movement direction of the target gesture is a translation direction, it may be a single direction; for example, the gesture moves upward. It may also be a combination of directions; for example, the gesture moves toward the upper right, in which case the movement direction can be understood as two directions: upward and to the right.

When the movement direction of the target gesture is a rotation direction, the rotation direction is defined relative to a pivot point. The pivot point may be the wrist, in which case the user keeps the wrist still and rotates the hand inward, outward, to the left or to the right. The pivot point may also be the elbow, in which case the user keeps the elbow still and rotates the hand inward, outward, to the left or to the right.

Optionally, the movement direction of the target gesture may also be toward the user or away from the user. The direction toward the user is the direction toward the user's face, and the direction away from the user is the direction away from the user's face; they are two opposite directions.

It should be understood that in actual operation it is difficult for the target gesture to move or rotate in only one direction. In that case, a principal direction can be selected as the movement direction to control the movement of the ray, or several directions can be combined to control the movement of the ray's position.

S103: when the movement speed of the target gesture is less than the preset speed, control the position of the ray to move according to the movement direction of the target gesture.

Optionally, the XR device determines whether the movement speed of the target gesture is less than the preset speed. If it is, the position of the ray is moved according to the movement direction of the target gesture. If the movement speed of the user's gesture is greater than or equal to the preset speed, the user in the virtual scene is moved to the end position of the ray; in this case the ray's position has not been adjusted, so the position the user moves to is the initial end position of the ray, i.e., the user is moved to a default position.

In this embodiment, the user can hold the target gesture and then slowly raise or lower the hand or swing it left and right to change the position of the ray and thus the user's target position.

For example, when the movement direction of the target gesture is upward or downward movement, and/or inward or outward rotation, the position of the ray is moved farther or nearer; and/or, when the movement direction of the target gesture is leftward or rightward movement, and/or leftward or rightward rotation, the position of the ray is moved left or right.

For instance, when the user holds the target gesture and raises the hand, or rotates it outward, the ray's position moves farther away; when the user holds the target gesture and lowers the hand, or rotates it inward, the ray's position moves nearer; when the user holds the target gesture and swings or rotates the hand to the right, the ray's position moves to the right; when the user holds the target gesture and swings or rotates the hand to the left, the ray's position moves to the left.

As another example, when the movement direction of the target gesture is toward the user, the ray's position moves nearer, and when it is away from the user, the ray's position moves farther. In this case, when determining the movement direction of the user's target gesture, it does not matter whether the hand's motion is translation, rotation or a combination of the two: whether the user translates the gesture, rotates it, or does both at once, the movement direction can be toward the user or away from the user.
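
The direction mapping described in the last three paragraphs could be realized as below: the vertical component of the hand motion pushes the ray's end point farther or nearer along the user's facing direction, and the lateral component shifts it sideways. The gain, the axes and the up vector are assumptions for the sketch.

```python
import numpy as np

def ray_end_delta(hand_dir, facing_dir, step=0.2, up=np.array([0.0, 1.0, 0.0])):
    """Map hand motion to a displacement of the ray's end point on the ground plane.

    Raising or lowering the hand (vertical component) moves the end point farther
    or nearer along the user's facing direction; moving the hand left or right
    shifts it sideways. The gain and the frame conventions are illustrative.
    """
    forward = facing_dir - np.dot(facing_dir, up) * up      # horizontal forward
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)                           # horizontal right (right-handed, +Y up)
    far_near = np.dot(hand_dir, up)                         # up -> farther, down -> nearer
    left_right = np.dot(hand_dir, right)
    return step * (far_near * forward + left_right * right)

# Raising the hand pushes the target straight ahead, away from the user.
print(ray_end_delta(np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, -1.0])))
```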

The correspondence between the movement direction of the target gesture and the movement of the ray can be configured according to the user's needs and is not limited to the examples above.

In one exemplary approach, when the ray's position is moved farther, nearer, left or right, the starting position of the ray stays unchanged and only the end position moves. That is, the user's current position is kept unchanged and only the user's target position changes.

In another exemplary approach, when the ray's position is moved farther, nearer, left or right, both the starting position and the end position of the ray move, but the moving distance of the starting position is smaller than that of the end position. In other words, there is a fixed ratio between the two, for example 1:3: when the end of the ray moves 3 centimeters, the starting point moves 1 centimeter. It should be understood that although the starting position of the ray moves in this approach, the user's actual position does not move.
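
Both variants can be captured by a single ratio parameter, as in the sketch below: a ratio of 0 keeps the start fixed, and a ratio of 1/3 reproduces the 1:3 example above. The parameterization is an illustration, not the claimed mechanism.

```python
import numpy as np

def move_ray(start, end, delta, start_ratio=1.0 / 3.0):
    """Move the ray's end by `delta` and its start by a fraction of `delta`.

    start_ratio=0.0 gives the first variant (start fixed, only the end moves);
    start_ratio=1/3 gives the 1:3 example (the start moves 1 cm for every 3 cm
    the end moves). The actual ratio is a design choice.
    """
    return start + start_ratio * delta, end + delta

new_start, new_end = move_ray(np.zeros(3), np.array([0.0, 0.0, -3.0]),
                              delta=np.array([0.0, 0.0, -0.03]))
print(new_start, new_end)   # the start moves 1 cm while the end moves 3 cm
```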

In one implementation, the distance the end of the ray moves is tied to the distance the target gesture moves: the farther the target gesture moves, the farther the end of the ray moves, and the smaller the gesture's movement, the smaller the end's movement. The relationship may be linear or nonlinear; this embodiment does not limit it.

In another implementation, the distance the end of the ray moves is independent of the distance the target gesture moves. Each time a movement of the target gesture is detected, the end of the ray is moved by a preset distance: no matter how far the target gesture moves, the end of the ray moves a fixed distance, and the user can move the end of the ray to the desired target position by moving the hand several times.
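
The two mappings in the preceding paragraphs amount to two distance functions, sketched below with illustrative values; a nonlinear gain would simply replace the linear one.

```python
def end_displacement(hand_distance, mode="proportional", gain=3.0, fixed_step=0.5):
    """Distance (in metres) to move the ray's end for one detected hand movement.

    'proportional': the farther the hand moves, the farther the end moves
                    (a simple linear gain here; it could also be nonlinear).
    'fixed':        every detected movement advances the end by a constant step,
                    regardless of how far the hand moved.
    Values are illustrative only.
    """
    if mode == "proportional":
        return gain * hand_distance
    return fixed_step

print(end_displacement(0.10))             # about 0.30 m for a 10 cm hand movement
print(end_displacement(0.10, "fixed"))    # 0.5 m regardless of the hand distance
```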

Optionally, while the position of the ray is being moved, the initial position of the ray and the position after the movement can be displayed in a visually distinct manner. Here, the initial position of the ray refers to the position of the ray when it was first displayed in response to the target gesture.

For example, rays of different colors can distinguish the initial position from the moved position: a black ray can represent the initial position and a red ray the moved position. Different line styles can also be used, for example a solid line for the initial position and a dashed line for the moved position.

In a real scenario, the user may adjust the position of the ray by moving the hand several times. It should be understood that after each adjustment, the initial position of the ray remains unchanged and only the adjusted ray moves.

S104: when the movement speed of the target gesture is greater than or equal to the preset speed, control the user in the virtual scene to move to the end position of the ray.

After adjusting the end of the ray to its final position, the user can hold the target gesture and quickly move the hand in any direction. The movement speed of the target gesture detected at this moment is greater than or equal to the preset speed, so the movement of the user's position takes effect, which can also be described as the teleportation taking effect.

Optionally, when the target gesture is the palm-up, fingers-pinched posture, the pinched fingers can be flicked open quickly to achieve the fast hand movement; in this case, the detected movement speed of the target gesture is the speed at which the pinched fingers flick open.
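
For the pinch variant, the confirming speed can be read off the rate at which the thumb-index gap opens, as in the hedged sketch below; the sampling interval and the notion of "gap" are assumptions.

```python
def pinch_release_speed(prev_gap, cur_gap, dt):
    """Speed at which the thumb and index tips separate, in m/s (positive = opening)."""
    return (cur_gap - prev_gap) / dt

# The gap grows from 1 cm to 6 cm within 50 ms: roughly 1.0 m/s, a clear "flick open".
print(pinch_release_speed(0.01, 0.06, 0.05))
```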

Exemplarily, when the movement speed is greater than or equal to the preset speed, the user can be teleported to the end position of the ray. With teleportation, the user normally does not perceive the movement process: from the user's point of view, only the viewpoint before the move and the viewpoint after the move are perceived.

Optionally, when the movement speed is greater than or equal to the preset speed, the user can instead be moved to the end position of the ray at a preset target speed. Moving at the target speed is usually a slow glide, so the user can perceive the movement process, and the user's viewpoint changes continuously while the position moves.
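
The two confirmation behaviors, instant teleportation and a glide at a preset target speed, might be implemented per frame as sketched below; the frame rate, speed and snapping tolerance are assumptions.

```python
import numpy as np

def step_user_towards(user_pos, target, target_speed, dt, teleport=False):
    """Advance the user towards the ray's end position for one frame.

    teleport=True is the instant variant; otherwise the user glides at the
    preset target_speed (m/s) and the caller repeats this call every frame
    until the target is reached. All values are illustrative.
    """
    target = np.asarray(target, dtype=float)
    if teleport:
        return target
    to_target = target - user_pos
    dist = np.linalg.norm(to_target)
    step = target_speed * dt
    if dist <= step:                 # close enough: snap onto the target
        return target
    return user_pos + (step / dist) * to_target

pos = np.zeros(3)
for _ in range(5):                   # five frames at 90 Hz, gliding at 2 m/s
    pos = step_user_towards(pos, [0.0, 0.0, -3.0], target_speed=2.0, dt=1 / 90)
print(pos)                           # still partway towards the target
```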

Optionally, after the user moves to the end position of the ray, the ray is hidden in the virtual scene; hiding the ray indicates that the movement of the user's position has ended. Optionally, the end of the position movement can also be indicated in the virtual scene by text and/or voice; the embodiments of the present application do not limit this. If, besides the ray, other related virtual objects were displayed in the virtual scene, such as the virtual gesture corresponding to the target gesture, those virtual objects are also hidden after the user moves to the end position of the ray; that is, all objects introduced into the interface when position movement was triggered are hidden.

After this position movement ends, the XR device continues to detect user gestures. When a user gesture is detected but it is not the target gesture, the device returns to detecting gestures; when the user gesture is the target gesture, the position movement procedure described above is triggered again.

FIG. 3 is a schematic diagram of the interface transitions of the virtual scene from the moment the user's position movement is triggered until it ends. As shown in FIG. 3, the whole process is divided into four stages. Stage a: when the user's gesture is detected as palm up with the fingers pinched, the gesture is displayed in the virtual scene and a ray is emitted with the pinch position as its starting point. Stage b: when the user holds this posture and moves the hand toward the upper right, the ray moves farther away and to the right; in FIG. 3, ray 1 represents the initial position of the ray and ray 2 represents the position after ray 1 has moved. Stage c: after moving the end of the ray to the target position, the user keeps the gesture unchanged and quickly moves the hand in any direction; in FIG. 3 the user quickly moves the hand to the right, and the position movement takes effect. Stage d: the user moves from the current position to the target position corresponding to the end of the ray. In the figure, after the user moves to the target position, the ray disappears and a character figure is displayed at the end of the ray, i.e., at the position of the ring-shaped cursor, to indicate that the user has moved there. Although not shown in FIG. 3, optionally, after the ray is hidden, the user's gesture and the ring-shaped cursor are hidden in turn as well; that is, all objects introduced into the interface when position movement was triggered are hidden.

In this embodiment, when a target gesture is detected, a ray is displayed in the virtual scene; the starting point of the ray represents the user's current position and the end of the ray represents the target position after the user moves. The movement direction and movement speed of the target gesture are detected. When the movement speed of the target gesture is less than the preset speed, the position of the ray is moved according to the movement direction of the target gesture; when the movement speed is greater than or equal to the preset speed, the user in the virtual scene is moved to the end position of the ray. The method controls rapid movement of the user's position in the virtual scene according to the movement direction and speed of the user's gesture in the real scene, so the user's position can be moved without an additional controller, which simplifies operation and improves the user experience.

为便于更好的实施本申请实施例的虚拟场景中用户位置的控制方法,本申请实施例还提供一种虚拟场景中用户位置的控制装置。图4为本申请实施例提供的一种虚拟场景中用户位置的控制装置的结构示意图,如图4所示,该虚拟场景中用户位置的控制装置100可以包括:In order to better implement the method for controlling the user position in a virtual scene of the embodiment of the present application, the embodiment of the present application also provides a device for controlling the user position in a virtual scene. FIG4 is a schematic diagram of the structure of a device for controlling the user position in a virtual scene provided by the embodiment of the present application. As shown in FIG4 , the device 100 for controlling the user position in a virtual scene may include:

显示模块11,用于当检测到目标手势时,在虚拟场景中显示射线,所述射线的起点表示用户的当前位置,所述射线的末端表示所述用户移动后的目标位置;A display module 11 is used to display a ray in a virtual scene when a target gesture is detected, wherein the starting point of the ray represents the current position of the user, and the end point of the ray represents the target position after the user moves;

检测模块12,用于检测所述目标手势的运动方向和运动速度;A detection module 12, used to detect the movement direction and movement speed of the target gesture;

控制模块13,用于当所述目标手势的运动速度小于预设速度时,根据所述目标手势的运动方向控制所述射线的位置移动;A control module 13, configured to control the position movement of the ray according to the movement direction of the target gesture when the movement speed of the target gesture is less than a preset speed;

所述控制模块13,还用于当所述目标手势的运动速度大于或等于所述预设速度时,控制所述虚拟场景中所述用户移动到所述射线的末端位置。The control module 13 is further configured to control the user in the virtual scene to move to the end position of the ray when the movement speed of the target gesture is greater than or equal to the preset speed.

在一些实施例中,所述控制模块13具体用于:In some embodiments, the control module 13 is specifically used for:

当所述目标手势的运动方向为上、下移动和/或向内、向外转动时,控制所述射线的末端位置远、近移动;和/或When the movement direction of the target gesture is upward or downward movement and/or inward or outward rotation, controlling the end position of the ray to move far or near; and/or

当所述目标手势的运动方向为左、右移动和/或向左、向右转动时,控制所述射线的末端位置左、右移动。When the movement direction of the target gesture is leftward or rightward movement and/or leftward or rightward rotation, the end position of the ray is controlled to move leftward or rightward.

在一些实施例中,控制所述射线的位置远、近、左、右移动时,所述射线的起点位置不变,所述射线的末端位置移动。In some embodiments, when the position of the ray is controlled to move far, near, left, or right, the starting position of the ray remains unchanged, and the end position of the ray moves.

在一些实施例中,控制所述射线的位置远、近、左、右移动时,所述射线的起点位置和末端位置均移动,所述射线的起点位置的移动距离小于所述末端位置的移动距离。In some embodiments, when the position of the ray is controlled to move far, near, left, or right, both the starting position and the end position of the ray move, and the moving distance of the starting position of the ray is smaller than the moving distance of the end position.

在一些实施例中,所述显示模块11具体用于:In some embodiments, the display module 11 is specifically used for:

当检测到所述目标手势时,以所述虚拟场景中的预设位置为所述射线的起点,生成并显示所述射线。When the target gesture is detected, the ray is generated and displayed with the preset position in the virtual scene as the starting point of the ray.

在另一些实施例中,所述显示模块11具体用于:In some other embodiments, the display module 11 is specifically used for:

当检测到所述目标手势时,在所述虚拟场景中显示一虚拟对象,以所述虚拟对象上的点作为所述射线的起点,生成并显示所述射线。When the target gesture is detected, a virtual object is displayed in the virtual scene, and a point on the virtual object is used as a starting point of the ray to generate and display the ray.

在一些实施例中,所述虚拟对象为所述目标手势对应的虚拟手势,所述射线的起点为所述虚拟手势的掌心位置或者指尖位置。In some embodiments, the virtual object is a virtual gesture corresponding to the target gesture, and the starting point of the ray is a palm position or a fingertip position of the virtual gesture.

在一些实施例中,所述射线的初始长度为预设长度。In some embodiments, the initial length of the ray is a preset length.

在一些实施例中,所述射线的延伸方向为所述用户面对的方向。In some embodiments, the extending direction of the ray is the direction the user is facing.
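
A rough sketch of the ray initialisation options described above (a preset scene position or a point on a displayed virtual hand as the origin, a preset initial length, and extension along the direction the user faces); the scene and hand attribute names are assumptions, and positions are assumed to be numpy vectors.

```python
import numpy as np

PRESET_LENGTH = 3.0  # assumed initial ray length, in metres

def spawn_ray(scene, hand_pose=None, use_fingertip=False):
    if hand_pose is None:
        origin = scene.preset_ray_origin                   # fixed preset-position variant
    else:
        virtual_hand = scene.show_virtual_hand(hand_pose)  # virtual-object variant
        origin = virtual_hand.index_tip if use_fingertip else virtual_hand.palm_center
    direction = scene.user_facing_direction()              # unit vector the user is facing
    end = np.asarray(origin) + PRESET_LENGTH * np.asarray(direction)
    return scene.show_ray(start=origin, end=end)
```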

在一些实施例中,所述显示模块11还用于:In some embodiments, the display module 11 is further used for:

区别显示所述射线的初始位置和移动后的位置。The initial position of the ray and its position after the movement are displayed in a distinguishable manner.
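
One possible way to show the ray's initial position and its moved position distinguishably is to render them in different styles; the drawing call and colour values below are illustrative assumptions.

```python
def draw_ray_states(scene, initial_end, current_ray):
    # Faded marker for where the ray originally pointed, solid marker for where it
    # points now, so the two states can be told apart at a glance.
    scene.draw_marker(initial_end, color=(0.6, 0.6, 0.6, 0.4))       # dimmed grey
    scene.draw_marker(current_ray.end, color=(0.1, 0.8, 0.1, 1.0))   # solid green
```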

在一些实施例中，所述目标手势为手掌向上且手指捏合姿势，所述目标手势的运动速度为所述捏合的手指弹开的速度。In some embodiments, the target gesture is a palm-up, fingers-pinched posture, and the movement speed of the target gesture is the speed at which the pinched fingers spring apart.
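
A sketch of how the palm-up pinch gesture and the speed at which the pinched fingers spring apart might be computed from generic hand-tracking frames; the joint attribute names and the numeric thresholds are assumptions, with positions taken as numpy vectors in metres.

```python
import numpy as np

PINCH_GAP = 0.02   # assumed: fingertips closer than 2 cm count as pinched
PALM_UP = 0.7      # assumed: palm normal must point mostly upwards

def is_target_gesture(frame) -> bool:
    gap = np.linalg.norm(frame.thumb_tip - frame.index_tip)
    return frame.palm_normal[1] > PALM_UP and gap < PINCH_GAP

def release_speed(prev_frame, frame, dt: float) -> float:
    """Speed at which the thumb and index finger spring apart, in m/s."""
    prev_gap = np.linalg.norm(prev_frame.thumb_tip - prev_frame.index_tip)
    gap = np.linalg.norm(frame.thumb_tip - frame.index_tip)
    return max(0.0, (gap - prev_gap) / dt)   # only an opening motion counts
```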

在一些实施例中,所述控制模块13具体用于:In some embodiments, the control module 13 is specifically used for:

当所述运动速度大于或等于所述预设速度时,控制所述用户瞬移到所述射线的末端位置;When the movement speed is greater than or equal to the preset speed, controlling the user to teleport to the end position of the ray;

或者,当所述运动速度大于或等于所述预设速度时,控制所述用户按照预设的目标速度移动到所述射线的末端位置。Alternatively, when the movement speed is greater than or equal to the preset speed, the user is controlled to move to the end position of the ray at a preset target speed.
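
Both confirmation behaviours (an instant teleport, or gliding to the ray end at a preset target speed) could be realised roughly as below; the per-frame time step and the speed value are assumptions, and in a real engine the glide would run once per rendered frame rather than in a blocking loop.

```python
import numpy as np

def commit_move(user, ray_end, teleport: bool = True,
                target_speed: float = 2.0, dt: float = 1 / 72):
    target = np.asarray(ray_end, dtype=float)
    if teleport:
        user.position = target                   # instant relocation
        return
    # Otherwise glide towards the ray end at the preset target speed, step by step.
    position = np.asarray(user.position, dtype=float)
    while np.linalg.norm(target - position) > target_speed * dt:
        step = target - position
        position = position + target_speed * dt * step / np.linalg.norm(step)
        user.position = position
    user.position = target
```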

在一些实施例中,装置100还包括:In some embodiments, the apparatus 100 further comprises:

隐藏模块,用于在所述用户移动到所述射线的末端位置之后,在所述虚拟场景中隐藏所述射线。A hiding module is used to hide the ray in the virtual scene after the user moves to the end position of the ray.

应理解的是,装置实施例与方法实施例可以相互对应,类似的描述可以参照方法实施例。为避免重复,此处不再赘述。It should be understood that the device embodiment and the method embodiment may correspond to each other, and similar descriptions may refer to the method embodiment. To avoid repetition, they will not be described here.

上文中结合附图从功能模块的角度描述了本申请实施例的装置100。应理解，该功能模块可以通过硬件形式实现，也可以通过软件形式的指令实现，还可以通过硬件和软件模块组合实现。具体地，本申请实施例中的方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路和/或软件形式的指令完成，结合本申请实施例公开的方法的步骤可以直接体现为硬件译码处理器执行完成，或者用译码处理器中的硬件及软件模块组合执行完成。可选地，软件模块可以位于随机存储器，闪存、只读存储器、可编程只读存储器、电可擦写可编程存储器、寄存器等本领域的成熟的存储介质中。该存储介质位于存储器，处理器读取存储器中的信息，结合其硬件完成上述方法实施例中的步骤。The device 100 of the embodiments of the present application has been described above from the perspective of functional modules with reference to the accompanying drawings. It should be understood that these functional modules may be implemented in hardware, by instructions in software, or by a combination of hardware and software modules. Specifically, the steps of the method embodiments of the present application may be completed by integrated logic circuits of hardware in a processor and/or by instructions in software; the steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor. Optionally, the software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method embodiments in combination with its hardware.

本申请实施例还提供一种虚拟场景中用户位置的控制设备。图5为本申请实施例提供的虚拟场景中用户位置的控制设备的一种结构示意图，如图5所示，该虚拟场景中用户位置的控制设备200可以包括：The embodiment of the present application also provides a control device for the user position in a virtual scene. FIG. 5 is a schematic diagram of a structure of a control device for the user position in a virtual scene provided by an embodiment of the present application. As shown in FIG. 5, the control device 200 for the user position in the virtual scene may include:

存储器21和处理器22，该存储器21用于存储计算机程序，并将该程序代码传输给该处理器22。换言之，该处理器22可以从存储器21中调用并运行计算机程序，以实现本申请实施例中的方法。a memory 21 and a processor 22, where the memory 21 is configured to store a computer program and to transmit the program code to the processor 22. In other words, the processor 22 can call and run the computer program from the memory 21 to implement the methods in the embodiments of the present application.

例如,该处理器22可用于根据该计算机程序中的指令执行上述方法实施例。For example, the processor 22 may be configured to execute the above method embodiments according to instructions in the computer program.

在本申请的一些实施例中,该处理器22可以包括但不限于:In some embodiments of the present application, the processor 22 may include but is not limited to:

通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等等。General-purpose processor, digital signal processor (DSP), application-specific integrated circuit (ASIC), field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc.

在本申请的一些实施例中,该存储器21包括但不限于:In some embodiments of the present application, the memory 21 includes but is not limited to:

易失性存储器和/或非易失性存储器。其中，非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM)，其用作外部高速缓存。通过示例性但不是限制性说明，许多形式的RAM可用，例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(SyncLink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DR RAM)。Volatile memory and/or non-volatile memory. Among them, the non-volatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM) or a flash memory. The volatile memory can be a random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchronous link dynamic random access memory (SLDRAM) and direct Rambus random access memory (DR RAM).

在本申请的一些实施例中,该计算机程序可以被分割成一个或多个模块,该一个或者多个模块被存储在该存储器21中,并由该处理器22执行,以完成本申请提供的方法。该一个或多个模块可以是能够完成特定功能的一系列计算机程序指令段,该指令段用于描述该计算机程序在该虚拟场景中用户位置的控制设备中的执行过程。In some embodiments of the present application, the computer program may be divided into one or more modules, which are stored in the memory 21 and executed by the processor 22 to complete the method provided by the present application. The one or more modules may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program in the control device of the user's position in the virtual scene.

如图5所示,该虚拟场景中用户位置的控制设备还可包括:收发器23,该收发器23可连接至该处理器22或存储器21。As shown in FIG. 5 , the control device for the user position in the virtual scene may further include: a transceiver 23 , which may be connected to the processor 22 or the memory 21 .

其中,处理器22可以控制该收发器23与其他设备进行通信,具体地,可以向其他设备发送信息或数据,或接收其他设备发送的信息或数据。收发器23可以包括发射机和接收机。收发器23还可以进一步包括天线,天线的数量可以为一个或多个。The processor 22 may control the transceiver 23 to communicate with other devices, specifically, to send information or data to other devices, or to receive information or data sent by other devices. The transceiver 23 may include a transmitter and a receiver. The transceiver 23 may further include an antenna, and the number of antennas may be one or more.

可以理解，虽然图5中未示出，该虚拟场景中用户位置的控制设备200还可以包括摄像头模组、无线保真WIFI模块、定位模块、蓝牙模块、显示器、控制器等，在此不再赘述。It can be understood that, although not shown in FIG. 5, the control device 200 for the user position in the virtual scene may further include a camera module, a wireless fidelity (Wi-Fi) module, a positioning module, a Bluetooth module, a display, a controller, and the like, which will not be described here.

应当理解,该虚拟场景中用户位置的控制设备中的各个组件通过总线系统相连,其中,总线系统除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。It should be understood that the various components in the control device at the user's position in the virtual scene are connected via a bus system, wherein the bus system includes not only a data bus but also a power bus, a control bus and a status signal bus.

本申请还提供了一种计算机存储介质,其上存储有计算机程序,该计算机程序被计算机执行时使得该计算机能够执行上述方法实施例的方法。或者说,本申请实施例还提供一种包含指令的计算机程序产品,该指令被计算机执行时使得计算机执行上述方法实施例的方法。The present application also provides a computer storage medium on which a computer program is stored, and when the computer program is executed by a computer, the computer can perform the method of the above method embodiment. In other words, the present application embodiment also provides a computer program product containing instructions, and when the instructions are executed by a computer, the computer can perform the method of the above method embodiment.

本申请还提供了一种计算机程序产品,该计算机程序产品包括计算机程序,该计算机程序存储在计算机可读存储介质中。虚拟场景中用户位置的控制设备的处理器从计算机可读存储介质读取该计算机程序,处理器执行该计算机程序,使得虚拟场景中用户位置的控制设备执行本申请实施例中的虚拟场景中用户位置的控制方法中的相应流程,为了简洁,在此不再赘述。The present application also provides a computer program product, which includes a computer program, and the computer program is stored in a computer-readable storage medium. The processor of the control device of the user position in the virtual scene reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the control device of the user position in the virtual scene executes the corresponding process in the control method of the user position in the virtual scene in the embodiment of the present application, which will not be repeated here for the sake of brevity.

在本申请所提供的几个实施例中，应该理解到，所揭露的系统、装置和方法，可以通过其它的方式实现。例如，以上所描述的装置实施例仅仅是示意性的，例如，该模块的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个模块或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，装置或模块的间接耦合或通信连接，可以是电性，机械或其它的形式。In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into modules is only a division by logical function, and other divisions are possible in actual implementation, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or modules, and may be electrical, mechanical or in other forms.

作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。例如,在本申请各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。The modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the scheme of this embodiment. For example, each functional module in each embodiment of the present application may be integrated into a processing module, or each module may exist physically separately, or two or more modules may be integrated into one module.

以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到变化或替换，都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以权利要求的保护范围为准。The above descriptions are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes or substitutions shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1.一种虚拟场景中用户位置的控制方法，其特征在于，包括：1. A method for controlling a user's position in a virtual scene, comprising:
当检测到目标手势时，在虚拟场景中显示射线，所述射线的起点表示用户的当前位置，所述射线的末端表示所述用户移动后的目标位置；When a target gesture is detected, a ray is displayed in the virtual scene, wherein the starting point of the ray represents the current position of the user, and the end point of the ray represents the target position after the user moves;
检测所述目标手势的运动方向和运动速度；Detecting the movement direction and movement speed of the target gesture;
当所述目标手势的运动速度小于预设速度时，根据所述目标手势的运动方向控制所述射线的位置移动；When the movement speed of the target gesture is less than a preset speed, controlling the position movement of the ray according to the movement direction of the target gesture;
当所述目标手势的运动速度大于或等于所述预设速度时，控制所述虚拟场景中所述用户移动到所述射线的末端位置。When the movement speed of the target gesture is greater than or equal to the preset speed, the user in the virtual scene is controlled to move to the end position of the ray.

2.根据权利要求1所述的方法，其特征在于，所述根据所述目标手势的运动方向控制所述射线的位置移动，包括：2. The method according to claim 1, characterized in that controlling the position movement of the ray according to the movement direction of the target gesture comprises:
当所述目标手势的运动方向为上、下移动和/或向内、向外转动时，控制所述射线的位置远、近移动；和/或When the movement direction of the target gesture is upward or downward movement and/or inward or outward rotation, controlling the position of the ray to move far or near; and/or
当所述目标手势的运动方向为左、右移动和/或向左、向右转动时，控制所述射线的位置左、右移动。When the movement direction of the target gesture is leftward or rightward movement and/or leftward or rightward rotation, the position of the ray is controlled to move leftward or rightward.

3.根据权利要求2所述的方法，其特征在于，控制所述射线的位置远、近、左、右移动时，所述射线的起点位置不变，所述射线的末端位置移动。3. The method according to claim 2 is characterized in that when the position of the ray is controlled to move far, near, left or right, the starting position of the ray remains unchanged, and the end position of the ray moves.

4.根据权利要求2所述的方法，其特征在于，控制所述射线的位置远、近、左、右移动时，所述射线的起点位置和末端位置均移动，所述射线的起点位置的移动距离小于所述末端位置的移动距离。4. The method according to claim 2 is characterized in that when the position of the ray is controlled to move far, near, left or right, both the starting position and the end position of the ray move, and the moving distance of the starting position of the ray is smaller than the moving distance of the end position.

5.根据权利要求1所述的方法，其特征在于，当检测到目标手势时，在虚拟场景中显示射线，包括：5. The method according to claim 1, characterized in that when a target gesture is detected, displaying a ray in a virtual scene comprises:
当检测到所述目标手势时，以所述虚拟场景中的预设位置为所述射线的起点，生成并显示所述射线。When the target gesture is detected, the ray is generated and displayed with the preset position in the virtual scene as the starting point of the ray.

6.根据权利要求1所述的方法，其特征在于，当检测到目标手势时，在虚拟场景中显示射线，包括：6. The method according to claim 1, characterized in that when a target gesture is detected, displaying a ray in a virtual scene comprises:
当检测到所述目标手势时，在所述虚拟场景中显示一虚拟对象，以所述虚拟对象上的点作为所述射线的起点，生成并显示所述射线。When the target gesture is detected, a virtual object is displayed in the virtual scene, and a point on the virtual object is used as a starting point of the ray to generate and display the ray.

7.根据权利要求6所述的方法，其特征在于，所述虚拟对象为所述目标手势对应的虚拟手势，所述射线的起点为所述虚拟手势的掌心位置或者指尖位置。7. The method according to claim 6 is characterized in that the virtual object is a virtual gesture corresponding to the target gesture, and the starting point of the ray is the palm position or fingertip position of the virtual gesture.

8.根据权利要求5或6所述的方法，其特征在于，所述射线的初始长度为预设长度。8. The method according to claim 5 or 6, characterized in that the initial length of the ray is a preset length.

9.根据权利要求7所述的方法，其特征在于，所述射线的延伸方向为所述用户面对的方向。9. The method according to claim 7, wherein the extending direction of the ray is the direction the user is facing.

10.根据权利要求1-7任一项所述的方法，其特征在于，在根据所述目标手势的运动方向控制所述射线的位置移动过程中，还包括：10. The method according to any one of claims 1 to 7, characterized in that, in the process of controlling the position movement of the ray according to the movement direction of the target gesture, it further comprises:
区别显示所述射线的初始位置和移动后的位置。The initial position and the position after the movement of the ray are displayed distinguishably.

11.根据权利要求1-7任一项所述的方法，其特征在于，所述目标手势为手掌向上且手指捏合姿势，所述目标手势的运动速度为所述捏合的手指弹开的速度。11. The method according to any one of claims 1 to 7, characterized in that the target gesture is a gesture of palm facing up and fingers pinching, and the movement speed of the target gesture is a speed at which the pinched fingers spring apart.

12.根据权利要求1-7任一项所述的方法，其特征在于，当所述目标手势的运动速度大于或等于所述预设速度时，控制所述虚拟场景中所述用户移动到所述射线的末端位置，包括：12. The method according to any one of claims 1 to 7, characterized in that when the movement speed of the target gesture is greater than or equal to the preset speed, controlling the user in the virtual scene to move to the end position of the ray comprises:
当所述运动速度大于或等于所述预设速度时，控制所述用户瞬移到所述射线的末端位置；When the movement speed is greater than or equal to the preset speed, controlling the user to teleport to the end position of the ray;
或者，当所述运动速度大于或等于所述预设速度时，控制所述用户按照预设的目标速度移动到所述射线的末端位置。Alternatively, when the movement speed is greater than or equal to the preset speed, the user is controlled to move to the end position of the ray at a preset target speed.

13.根据权利要求1-7任一项所述的方法，其特征在于，还包括：13. The method according to any one of claims 1 to 7, further comprising:
在所述用户移动到所述射线的末端位置之后，在所述虚拟场景中隐藏所述射线。After the user moves to the end position of the ray, the ray is hidden in the virtual scene.

14.一种虚拟场景中用户位置的控制装置，其特征在于，包括：14. A device for controlling a user's position in a virtual scene, comprising:
显示模块，用于当检测到目标手势时，在虚拟场景中显示射线，所述射线的起点表示用户的当前位置，所述射线的末端表示所述用户移动后的目标位置；A display module, used for displaying a ray in a virtual scene when a target gesture is detected, wherein the starting point of the ray represents the current position of the user, and the end point of the ray represents the target position after the user moves;
检测模块，用于检测所述目标手势的运动方向和运动速度；A detection module, used to detect the movement direction and movement speed of the target gesture;
控制模块，用于当所述目标手势的运动速度小于预设速度时，根据所述目标手势的运动方向控制所述射线的位置移动；A control module, configured to control the position movement of the ray according to the movement direction of the target gesture when the movement speed of the target gesture is less than a preset speed;
所述控制模块，还用于当所述目标手势的运动速度大于或等于所述预设速度时，控制所述虚拟场景中所述用户移动到所述射线的末端位置。The control module is further configured to control the user in the virtual scene to move to the end position of the ray when the movement speed of the target gesture is greater than or equal to the preset speed.

15.一种虚拟场景中用户位置的控制设备，其特征在于，包括：15. A device for controlling a user's position in a virtual scene, comprising:
处理器和存储器，所述存储器用于存储计算机程序，所述处理器用于调用并运行所述存储器中存储的计算机程序，以执行权利要求1至13中任一项所述的方法。A processor and a memory, the memory being used to store a computer program, and the processor being used to call and run the computer program stored in the memory to execute the method according to any one of claims 1 to 13.

16.一种计算机可读存储介质，其特征在于，用于存储计算机程序，所述计算机程序使得计算机执行如权利要求1至13中任一项所述的方法。16. A computer-readable storage medium, characterized in that it is used to store a computer program, wherein the computer program enables a computer to execute the method according to any one of claims 1 to 13.

17.一种计算机程序产品，包括计算机程序，其特征在于，所述计算机程序被处理器执行时，实现如权利要求1至13中任一项所述的方法。17. A computer program product, comprising a computer program, wherein when the computer program is executed by a processor, the method according to any one of claims 1 to 13 is implemented.

CN202211462815.2A 2022-11-21 2022-11-21 Method and device for controlling user position in virtual scene Pending CN118057266A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211462815.2A CN118057266A (en) 2022-11-21 2022-11-21 Method and device for controlling user position in virtual scene

Publications (1)

Publication Number Publication Date
CN118057266A true CN118057266A (en) 2024-05-21

Family

ID=91069200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211462815.2A Pending CN118057266A (en) 2022-11-21 2022-11-21 Method and device for controlling user position in virtual scene

Country Status (1)

Country Link
CN (1) CN118057266A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination