WO2024037559A1 - Information interaction method and apparatus, and human-computer interaction method and apparatus, and electronic device and storage medium - Google Patents
- Publication number
- WO2024037559A1 (PCT/CN2023/113250)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- throwing
- virtual
- virtual object
- virtual space
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
Definitions
- the present disclosure relates to the fields of computer technology and extended reality (XR), and specifically to an information interaction method, apparatus, electronic device and storage medium, as well as a human-computer interaction method, apparatus, device and storage medium.
- users can watch the anchor's performance through, for example, head-mounted display devices and related accessories, and can interact with the anchor through emoticons, barrages, virtual gifts, etc.
- XR technology allows users to immersively watch various virtual live broadcasts. For example, users can experience real live interactive scenes by wearing a head-mounted display (HMD).
- an information interaction method including:
- the two or more target messages are sent based on the determined moving end points of the two or more target messages.
- an information interaction device including:
- a first end point determination unit, configured to determine the moving end point of the first sent target message among two or more target messages sent continuously;
- a second end point determination unit, configured to determine the moving end points of the remaining target messages among the two or more target messages based on the moving end point of the first sent target message;
- a message display unit, configured to send the two or more target messages based on the determined moving end points of the two or more target messages.
- an electronic device including: at least one memory and at least one processor; wherein the memory is used to store program code, and the processor is used to call the program code stored in the memory to cause the electronic device to execute the information interaction method provided according to one or more embodiments of the present disclosure.
- a non-transitory computer storage medium stores program code which, when executed by a computer device, causes the computer device to execute the information interaction method provided according to one or more embodiments of the present disclosure.
- embodiments of the present disclosure provide a human-computer interaction method applied to XR equipment.
- the method includes: determining, in response to a throwing operation on any virtual object in the virtual space, the throwing position of the virtual object in the virtual space; and presenting the throwing special effect of the virtual object in the virtual space according to the throwing position and the demarcated throwable area in the virtual space.
- embodiments of the present disclosure provide a human-computer interaction device configured in XR equipment.
- the device includes:
- a throwing position determination module, configured to determine the throwing position of a virtual object in the virtual space in response to a throwing operation on any virtual object in the virtual space;
- a throwing module, configured to present the throwing special effect of the virtual object in the virtual space according to the throwing position and the demarcated throwable area in the virtual space.
- an electronic device which includes:
- a processor and a memory, where the memory is used to store a computer program.
- the processor is used to call and run the computer program stored in the memory to execute the human-computer interaction method provided in the fifth aspect of the present disclosure.
- an embodiment of the present disclosure provides a computer-readable storage medium for storing a computer program.
- the computer program causes the computer to execute the human-computer interaction method as provided in the fifth aspect of the present disclosure.
- an embodiment of the present disclosure provides a computer program product, including a computer program/instructions that causes a computer to execute the human-computer interaction method as provided in the fifth aspect of the present disclosure.
- a computer program including instructions that, when executed by a processor, cause the processor to perform the information interaction method or the human-computer interaction method of any of the above embodiments.
- Figure 1 is a schematic diagram of a virtual reality device according to some embodiments of the present disclosure.
- FIG. 2 is a schematic diagram of a virtual field of view of a virtual reality device according to other embodiments of the present disclosure.
- Figure 3 is a flow chart of an information interaction method provided by some embodiments of the present disclosure.
- Figure 4 is a schematic diagram of a virtual reality space provided according to some embodiments of the present disclosure.
- Figure 5 is a schematic diagram of a virtual reality space provided according to other embodiments of the present disclosure.
- Figure 6 is a schematic structural diagram of an electronic device according to some embodiments of the present disclosure.
- Figure 7 is a flow chart of a human-computer interaction method provided by an embodiment of the present disclosure.
- Figure 8 is a schematic diagram of the architecture of a virtual space provided by an embodiment of the present disclosure.
- Figures 9(A) and 9(B) are exemplary schematic diagrams of a throwable area in the virtual space provided by an embodiment of the present disclosure.
- Figures 10(A) and 10(B) are further exemplary schematic diagrams of a throwable area in a virtual space provided by an embodiment of the present disclosure.
- Figure 11 is a flow chart of a method for throwing any virtual object in a virtual space provided by an embodiment of the present disclosure.
- Figure 12(A) shows a method of throwing a virtual gift in the virtual space through a hand model according to an embodiment of the present disclosure.
- Figure 12(B) is a schematic diagram of the special effects when a hand model is used to continuously throw multiple virtual gifts in the virtual space provided by an embodiment of the present disclosure.
- Figure 12(C) is a schematic diagram of the special effects when the gift-giving prop is held by a hand model and the gift-giving prop is used to continuously emit multiple virtual gifts in the virtual space according to an embodiment of the present disclosure.
- Figure 13 is an exemplary schematic diagram of a virtual object colliding with any other virtual object provided by an embodiment of the present disclosure.
- FIG. 14 is a schematic diagram of canceling the holding of the virtual object through the hand model and giving up throwing according to an embodiment of the present disclosure.
- Figure 15 is a schematic diagram of a human-computer interaction device provided by an embodiment of the present disclosure.
- Figure 16 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure.
- the term "include" and its variations are open-ended, i.e., "including but not limited to".
- the term “based on” means “based at least in part on.”
- the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; and the term “some embodiments” means “at least some embodiments”.
- the term “responsive to” and related terms means that one signal or event is affected by another signal or event to some extent, but not necessarily completely or directly. If event x occurs "in response to" event y, x may respond to y, directly or indirectly. For example, the occurrence of y may eventually lead to the occurrence of x, but there may be other intermediate events and/or conditions. In other cases, y may not necessarily cause x to occur, and x may occur even if y has not yet occurred. Furthermore, the term “responsive to” may also mean “responsive at least in part to.”
- the term "determine” broadly encompasses a wide variety of actions, which may include retrieving, calculating, calculating, processing, deriving, investigating, looking up (e.g., in a table, database, or other data structure), exploring, and similar actions, Also included may be receiving (e.g., receiving information), accessing (e.g., accessing data in memory), and similar actions, as well as parsing, selecting, selecting, creating, and similar actions, and the like. Relevant definitions of other terms will be given in the description below. Relevant definitions of other terms will be given in the description below. Relevant definitions of other terms will be given in the description below.
- phrase "A and/or B” means (A), (B) or (A and B).
- the method provided by the embodiment of the present disclosure can be used to send target messages in the virtual reality space, such as emoticons, barrages, gifts, etc.
- Virtual reality space can be a simulation environment of the real world, a semi-simulation and semi-fictional virtual scene, or a purely fictitious virtual scene.
- the virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene.
- the embodiments of the present disclosure do not limit the dimensions of the virtual scene.
- the virtual scene can include the sky, land, ocean, etc.
- the land can include environmental elements such as deserts and cities, and the user can control virtual objects to move in the virtual scene.
- users can enter the virtual reality space through smart terminal devices such as head-mounted VR glasses, and control their own virtual characters (avatars) in the virtual reality space to socialize, entertain, learn, work remotely, etc. with virtual characters controlled by other users.
- in the virtual reality space, the user can implement related interactive operations through a controller, which can be a handle; for example, the user can perform related operation controls by operating the buttons on the handle.
- gestures or voice or multi-modal control methods may be used to control the target object in the virtual reality device.
- Extended reality technology can combine reality and virtuality through computers to provide users with a virtual reality space that allows human-computer interaction.
- users can use head-mounted displays (HMD) and other virtual reality devices for social interaction, entertainment, learning, work, telecommuting, creation of UGC (User Generated Content), etc.
- Computer-side virtual reality (PCVR) equipment uses the PC side to perform the calculations related to virtual reality functions and to output data; the external computer-side virtual reality device uses the data output from the PC side to achieve virtual reality effects.
- Mobile virtual reality equipment supports setting up a mobile terminal (such as a smartphone) in various ways (such as a head-mounted display with a special card slot); through a wired or wireless connection with the mobile terminal, the mobile terminal performs the calculations related to virtual reality functions and outputs data to the mobile virtual reality device, for example viewing virtual reality videos through a mobile terminal APP.
- the all-in-one virtual reality device has a processor for performing calculations related to virtual reality functions, so it has independent virtual reality input and output functions; it does not need to be connected to a PC or mobile terminal, and offers a high degree of freedom of use.
- the form of the virtual reality device is not limited to this, and can be further miniaturized or enlarged as needed.
- the virtual reality device is equipped with a posture detection sensor (such as a nine-axis sensor) to detect posture changes of the virtual reality device in real time. When a user wearing the virtual reality device changes their head posture, the real-time posture of the head is passed to the processor to calculate the gaze point of the user's line of sight in the virtual environment.
- based on the gaze point, the image within the user's gaze range (i.e., the virtual field of view) in the three-dimensional model of the virtual environment is calculated and displayed on the display screen, giving an immersive experience as if the user were watching in a real environment.
- Figure 2 shows a schematic diagram of the virtual field of view of the virtual reality device provided by some embodiments of the present disclosure.
- the horizontal field of view angle and the vertical field of view angle are used to describe the distribution range of the virtual field of view in the virtual environment: the distribution range in the vertical direction is represented by the vertical field of view angle BOC, and the distribution range in the horizontal direction is represented by the horizontal field of view angle AOB.
- the human eye can always perceive the image located in the virtual field of view in the virtual environment through the lens. The larger the field of view angle, the larger the virtual field of view, and the larger the area of the virtual environment that the user can perceive. The field of view angle represents the distribution range of the viewing angle when the environment is perceived through the lens.
- for example, the field of view angle of a virtual reality device represents the distribution range of the viewing angle of the human eye when the virtual environment is perceived through the lens of the virtual reality device; as another example, for a mobile terminal equipped with a camera, the field of view angle of the camera is the distribution range of the viewing angle when the camera perceives the real environment and shoots.
- Virtual reality devices such as HMDs integrate several cameras (such as depth cameras, RGB cameras, etc.), and the purpose of the cameras is not limited to providing a pass-through view. Camera images and an integrated inertial measurement unit (IMU) provide data that can be processed through computer vision methods to automatically analyze and understand the environment. HMDs are designed to support not only passive but also active computer vision analysis. Passive computer vision methods analyze image information captured from the environment; these methods can be monoscopic (images from a single camera) or stereoscopic (images from two cameras), and include, but are not limited to, feature tracking, object recognition and depth estimation. Active computer vision methods add information to the environment by projecting patterns that are visible to the camera but not necessarily to the human visual system; such technologies include time-of-flight (ToF) cameras, laser scanning or structured light to simplify the stereo matching problem. Active computer vision is used to achieve deep scene reconstruction.
- the virtual reality space includes a virtual live broadcast space.
- performer users can live broadcast with virtual images or real images, and audience users can control virtual characters to watch the performers' live broadcast from viewing angles such as first-person perspective or third-person perspective.
- a video stream may be obtained and video content may be presented in a virtual reality space based on the video stream.
- the video stream may adopt encoding formats such as H.265, H.264, and MPEG-4.
- the client can receive the live video stream sent by the server and display the live video image in the virtual reality space based on the live video stream.
- Figure 3 shows a flow chart of an information interaction method 100 provided by some embodiments of the present disclosure.
- the method 100 includes steps S120 to S160.
- Step S120 Determine the moving end point of the first target message sent among two or more target messages sent continuously.
- target messages include but are not limited to text messages (such as comments and barrages) and image messages (such as emojis, pictures, virtual items, etc.).
- the target message may be a custom message edited by the user, a system-provided message selected by the user through a messaging operation, a message associated with a messaging operation, or a message randomly assigned by the operating system in response to the messaging operation.
- Message sending operations include but are not limited to somatosensory control operations, gesture control operations, eye movement operations, touch operations, voice control instructions, or operations on external control devices (such as button operations).
- the user can invoke the message editing interface through a preset operation, select a candidate target message from the message editing interface or edit a customized target message, and send the target message; the target message to be sent by the current user is then displayed in the virtual reality space.
- the user can select an existing candidate target message from the message editing interface displayed in the virtual reality space, or edit a customized target message, and send the target message; the target message sent by the current user is then displayed in the target message display space, which is an area in the virtual reality space used to display the target message.
- the message editing interface may be displayed in the virtual reality space in advance, or may be called up by a preset operation.
- the message editing interface can be used to edit the target message, or to directly display one or more preset candidate target messages for the user to directly select.
- the message editing interface can be a message panel (for example, an emoticon panel).
- the message editing interface may be a preset area in the virtual reality space for displaying one or more candidate target messages.
- the message sending operation may include a user's preset operation on the virtual reality control device, such as triggering a preset button of the virtual reality control device (such as a handle).
- the preset button can be associated with a preset target message.
- when the user triggers the preset button, the associated target message can be sent; or, when the user triggers the preset button, the system randomly assigns a target message.
- the user can continuously trigger N message sending operations to continuously send N target messages, where the interval between two adjacent message sending operations does not exceed the preset time interval.
- for example, the user can continuously trigger the preset button of the virtual reality control device (such as a handle) N times, or continuously click a candidate target message displayed on the message panel N times, with the interval between each trigger/click not exceeding the preset time interval, so that N target messages are sent continuously.
- Step S140 Determine the moving end points of the remaining target messages in the two or more target messages based on the moving end point of the first sent target message.
- the remaining target messages among the two or more target messages are target messages other than the first sent target message.
- for example, the user triggers message sending operations A, B, C and D in sequence, where the time interval between operations A and B is greater than the preset time interval, while the time intervals between operations B and C and between operations C and D are not greater than the preset time interval. In this case, message sending operations B, C and D can be determined as operations for continuously sending two or more target messages, the target message b sent by operation B is the first sent target message, the moving end point of target message b is determined, and the moving end points of the target messages c and d sent by operations C and D are determined based on the determined moving end point of target message b.
- when the user triggers a first message sending operation, if it is determined that the time interval between the first message sending operation and the last message sending operation exceeds the preset time interval, the moving end point of the first target message sent by the first message sending operation is determined in response to the first message sending operation; if the user triggers a second message sending operation within the preset time interval after triggering the first message sending operation, the moving end point of the second target message sent by the second message sending operation is determined based on the moving end point of the first target message; similarly, if the user triggers a third message sending operation within the preset time interval after triggering the second message sending operation, the moving end point of the third target message sent by the third message sending operation is determined based on the moving end point of the first target message.
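The interval rule described above can be read as a small grouping routine. The sketch below is an illustrative interpretation, not an implementation from the disclosure; the threshold value and the timestamps are hypothetical.

```python
from typing import List

PRESET_INTERVAL = 0.5  # hypothetical preset time interval, in seconds


def group_bursts(timestamps: List[float], max_gap: float = PRESET_INTERVAL) -> List[List[float]]:
    """Group send timestamps into bursts: a send joins the current burst
    when its gap from the previous send does not exceed max_gap."""
    bursts: List[List[float]] = []
    for t in timestamps:
        if bursts and t - bursts[-1][-1] <= max_gap:
            bursts[-1].append(t)
        else:
            bursts.append([t])
    return bursts


# Operations A, B, C, D from the example: the A-B gap exceeds the threshold,
# the B-C and C-D gaps do not, so B, C and D form one burst, and B's message
# is the "first sent" target message of that burst.
print(group_bursts([0.0, 1.0, 1.3, 1.6]))  # [[0.0], [1.0, 1.3, 1.6]]
```

The first element of each multi-message burst would then be the message whose moving end point anchors the end points of the rest.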
- the user can also directly send a preset number of target messages continuously through a preset message burst instruction.
- Step S160 Send the corresponding target message based on the determined moving end point of the target message.
- the target message can move toward the corresponding moving end point in the virtual reality space. For example, among the two or more target messages sent continuously, the first sent target message moves toward the moving end point determined in step S120, and the remaining target messages move toward the moving end points determined in step S140.
- the moving starting point of the target message can be located at any location in the virtual reality space, or at a preset specific location, or at a location determined based on the corresponding message sending operation. This disclosure is not limited here.
- the moving path of the target message can be a straight line, a curve, or other shapes, and the disclosure is not limited here.
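As one possible realization of a curved moving path, a quadratic Bézier curve from the moving start point to the moving end point could be sampled. The control point, step count and coordinates below are illustrative assumptions, not part of the disclosure.

```python
from typing import List, Tuple

Point = Tuple[float, float, float]


def bezier_path(start: Point, control: Point, end: Point, steps: int = 10) -> List[Point]:
    """Sample a quadratic Bézier curve from start to end; the control point
    bows the trajectory so the message travels along an arc, not a line."""
    pts: List[Point] = []
    for i in range(steps + 1):
        t = i / steps
        pts.append(tuple(
            (1 - t) ** 2 * s + 2 * (1 - t) * t * c + t ** 2 * e
            for s, c, e in zip(start, control, end)
        ))
    return pts


# Hypothetical start, control and end points in the virtual reality space.
path = bezier_path((0, 0, 0), (1, 2, 0), (2, 0, 0))
print(path[0], path[5], path[-1])
```

Rendering the message at each sampled point in sequence would produce the curved motion toward the end point.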
- in this way, the diversity and consistency of continuously sent messages can be balanced, so that continuous messages can be identified and distinguished, and the efficiency of determining the moving end points of continuous messages can also be improved.
- the moving end point may be randomly assigned to the first target message sent among the N target messages sent consecutively.
- a target message display space for displaying the target message can be set in the virtual reality space, and the moving end point of the first sent target message can be randomly determined in the target message display space.
- the distance between the moving end points of the remaining target messages and the moving end point of the first sent target message may not exceed a preset threshold; or, the moving end points of the remaining target messages may be located within a preset area (such as a circular area, a square area, a spherical area, a cube area, etc.) centered on the moving end point of the first sent target message.
- the moving end points of the remaining target messages are randomly determined within a preset area centered on the moving end point of the first sent target message.
- the continuous messages can be concentrated and randomly displayed around the first continuous message, which can further balance the diversity and consistency of the continuous messages, and can present the visual effect of the continuous messages being randomly scattered.
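For the spherical preset area, the scattered end points could be produced by rejection sampling inside a sphere centered on the first message's end point. This is a minimal sketch; the center, radius and message count are hypothetical values, not from the disclosure.

```python
import math
import random


def random_point_in_sphere(center, radius):
    """Uniformly sample a point inside a sphere of the given radius around
    center, by rejection sampling offsets from the bounding cube."""
    while True:
        offset = [random.uniform(-radius, radius) for _ in range(3)]
        if math.sqrt(sum(c * c for c in offset)) <= radius:
            return tuple(c0 + c for c0, c in zip(center, offset))


first_end = (0.0, 1.5, -2.0)  # hypothetical end point of the first sent message
threshold = 0.4               # hypothetical preset threshold (sphere radius)
remaining_ends = [random_point_in_sphere(first_end, threshold) for _ in range(2)]
for p in remaining_ends:
    # every remaining end point stays within the preset distance
    assert math.dist(p, first_end) <= threshold
```

Each remaining message in the burst would then move toward its own sampled point, giving the randomly scattered visual effect around the first message.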
- the remaining target messages may be randomly assigned corresponding moving end points.
- method 100 further includes:
- Step S170 After the first target message among the two or more target messages moves to the determined moving end point, display, at a preset position of the first target message, a sequence identifier indicating the sending order of the first target message.
- the first target message may be any one of the two or more target messages.
- in this way, the sending order and quantity of the continuously sent messages can be displayed in sequence, so that bursts of messages are easily identifiable.
- the sequence identifier may be located above, below, to the left or to the right of the corresponding first target message, but the disclosure is not limited thereto.
- the continuous display duration of the sequence identifier does not exceed the continuous display duration of the first target message at the moving end position. In this embodiment, by making the continuous display duration of the sequence identifier not exceed that of the first target message at the moving end position, the sequence identifier is prevented from being displayed for too long and occupying the display space of the continuous messages.
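The duration rule above reduces to a simple cap. The function name and example durations below are illustrative, not from the disclosure.

```python
def sequence_id_duration(message_display_time: float, label_time: float) -> float:
    """Cap the sequence identifier's display duration at the duration of the
    target message at its moving end position, so the label never outlives
    the message it annotates."""
    return min(label_time, message_display_time)


assert sequence_id_duration(3.0, 5.0) == 3.0  # label capped at the message's 3 s
assert sequence_id_duration(3.0, 1.5) == 1.5  # shorter labels are unaffected
```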
- method 100 further includes:
- Step S180 During the sending process of the target message and/or after moving to the determined moving end point, display a user identification representing the sender information of the target message at a preset position of the target message.
- the user identification includes but is not limited to a user avatar, a user nickname, a user ID, etc.
- the user identification may be located above, below, to the left or to the right of the corresponding target message, but the disclosure is not limited thereto.
- different types of target messages may have different preset position relationships with the user identification. For example, if the target message is an image of a solid or opaque object, the corresponding user identification can be displayed outside the image; if the target message is an image of a hollow or transparent object, the corresponding user identification can be displayed inside the image.
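The opacity-dependent placement rule can be read as a simple branch. The function and return values are hypothetical names chosen for illustration, not from the disclosure.

```python
def user_id_placement(is_hollow_or_transparent: bool) -> str:
    # Image of a solid or opaque object: display the sender's ID outside it;
    # image of a hollow or transparent object: display the ID inside it.
    return "inside" if is_hollow_or_transparent else "outside"


assert user_id_placement(False) == "outside"  # e.g. an opaque gift-box image
assert user_id_placement(True) == "inside"    # e.g. a transparent bubble image
```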
- the display diversity and display fit between the user identification and the target message can be improved, thereby improving the user experience.
- the virtual reality space 10 includes a video image display space 20 and a target message display space 40.
- the video image display space 20 may be used to display video images, such as live broadcast images.
- the virtual reality space also includes a message editing interface 30, which displays a plurality of candidate target expressions for the user to select.
- the target expression 421 can move to the target message display space starting from a position on the message editing interface 30.
- the target emoticons 411, 412 and 413 can start from a position on the message editing interface 30 and continuously move along curved paths into the target message display space 40.
- the target expressions 412 and 413 finally move into the spherical area centered on the target expression 411.
- the right sides of the target expressions 411, 412 and 413 respectively display the sequence identifiers "×1", "×2" and "×3", which respectively indicate that the target expression 411 is the first expression sent in the burst, the target expression 412 is the second expression sent, and the target expression 413 is the third expression sent.
- the sender's user ID "Tom" is displayed below each target emoticon.
- method 100 further includes:
- Step S191 During the process of moving the target message to the corresponding moving end point, display the first special effect associated with the target message; and/or
- Step S192 After the target message moves to the corresponding moving end point, display the second special effect associated with the target message.
- the first special effect may be a rotation special effect of the target message
- the second special effect may be a deformation special effect of the target message, but the disclosure is not limited thereto.
- the first special effect associated with the target message is displayed while the target message moves to the corresponding moving end point, and/or the second special effect associated with the target message is displayed after the target message reaches the moving end point. This can enrich the display forms of the message sending process and improve the user experience.
- an information interaction device including:
- the first end point determination unit is used to determine the moving end point of the first sent target message among two or more target messages sent continuously;
- a second end point determination unit configured to determine the moving end points of the remaining target messages in the two or more target messages based on the moving end point of the first sent target message
- the message display unit is configured to send the corresponding target message based on the determined mobile end point of the target message.
- determining the moving end point of the first sent target message includes: randomly determining the moving end point of the first sent target message in a target message display space set in the virtual reality space.
- in some embodiments, the second end point determination unit is configured to randomly determine the moving end points of the remaining target messages within a preset area centered on the moving end point of the first sent target message.
- the information interaction device further includes:
- a first identification unit, configured to, after the first target message among the two or more target messages moves to the determined moving end point, display at a preset position of the first target message a sequence identifier indicating the sending order of the first target message among the two or more target messages;
- the continuous display duration of the sequence identification does not exceed the continuous display duration of the first target message at the moving end position.
- the information interaction device further includes:
- a second identification unit, configured to display, at a preset position of the target message, a user identification representing the sender information of the target message during the sending process of the target message and/or after the target message moves to the determined moving end point.
- different types of target messages have different preset position relationships with the user identification.
- the information interaction device further includes: a special effect unit, configured to display the first special effect associated with the target message while the target message is moving to the corresponding moving end point, and/or display the second special effect associated with the target message after the target message moves to the corresponding moving end point.
- since the device embodiment basically corresponds to the method embodiment, refer to the description of the method embodiment for relevant details.
- the device embodiments described above are only illustrative; the modules described as separate modules may or may not be physically separate. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment, and persons of ordinary skill in the art can understand and implement it without creative effort.
- an electronic device including:
- the memory is used to store program codes
- the processor is used to call the program codes stored in the memory to cause the electronic device to execute the information interaction method provided according to one or more embodiments of the present disclosure.
- a non-transitory computer storage medium stores program code, and the program code can be executed by a computer device to cause the computer device to execute an information interaction method provided according to one or more embodiments of the present disclosure.
- terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
- the electronic device shown in FIG. 6 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
- the electronic device 800 may include a processing device (e.g., a central processing unit, a graphics processor) 801, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage device 808 into a random access memory (RAM) 803.
- various programs and data required for the operation of the electronic device 800 are also stored in the RAM 803.
- the processing device 801, ROM 802 and RAM 803 are connected to each other via a bus 804.
- An input/output (I/O) interface 805 is also connected to bus 804.
- the following devices may be connected to the I/O interface 805: an input device 806 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 807 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 808 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 809.
- the communication device 809 may allow the electronic device 800 to communicate wirelessly or wiredly with other devices to exchange data.
- although FIG. 6 illustrates the electronic device 800 with various means, it should be understood that it is not required to implement or provide all of the illustrated means; more or fewer means may alternatively be implemented or provided.
- embodiments of the present disclosure include a computer program product including a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
- the computer program may be downloaded and installed from the network via communication device 809, or from storage device 808, or from ROM 802.
- when the computer program is executed by the processing device 801, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
- the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
- the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof.
- examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
- a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to: wire, optical cable, RF (radio frequency), etc., or any suitable combination of the above.
- the client and server can communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communications network).
- examples of communications networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
- the above-mentioned computer-readable medium carries one or more programs; when the one or more programs are executed by the electronic device, the electronic device is caused to perform the above-mentioned methods of the present disclosure.
- computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- each block in the flowchart or block diagram may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, and they may sometimes execute in reverse order, depending on the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or can be implemented using a combination of special-purpose hardware and computer instructions.
- the units involved in the embodiments of the present disclosure can be implemented in software or hardware. The name of a unit does not, under certain circumstances, constitute a limitation on the unit itself.
- for example, without limitation, exemplary types of hardware logic components that can be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
- examples of machine-readable storage media include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- an information interaction method is provided, including: determining the moving end point of the first sent target message among two or more target messages sent continuously; determining the moving end points of the remaining target messages among the two or more target messages based on the moving end point of the first sent target message; and sending the corresponding target messages based on the determined moving end points of the target messages.
- determining the moving end point of the first sent target message includes: randomly determining the moving end point of the first sent target message in a target message display space set in the virtual reality space.
- determining the moving end points of the remaining target messages among the two or more target messages based on the moving end point of the first sent target message includes: randomly determining the moving end points of the remaining target messages within a preset area centered on the moving end point of the first sent target message.
- the information interaction method provided according to one or more embodiments of the present disclosure further includes: after the first target message among the two or more target messages moves to the determined moving end point, displaying, at a preset position of the first target message, a sequence identifier indicating the sending order of the first target message among the two or more target messages.
- the continuous display duration of the sequence identifier does not exceed the continuous display duration of the first target message at the moving end position.
- the information interaction method provided according to one or more embodiments of the present disclosure further includes: during the sending process of the target message and/or after the target message moves to the determined moving end point, displaying at a preset position of the target message a user identification representing the sender information of the target message.
- different types of target messages have different preset position relationships with the user identification.
- the information interaction method provided according to one or more embodiments of the present disclosure further includes: displaying the first special effect associated with the target message during the movement of the target message to the corresponding moving end point; and/or displaying the second special effect associated with the target message after the target message moves to the corresponding moving end point.
- an information interaction device is provided, including: a first end point determination unit, configured to determine the moving end point of the first sent target message among two or more target messages sent continuously; a second end point determination unit, configured to determine the moving end points of the remaining target messages among the two or more target messages based on the moving end point of the first sent target message; and a message display unit, configured to send the corresponding target messages based on the determined moving end points of the target messages.
- an electronic device is provided, including: at least one memory and at least one processor; wherein the memory is used to store program code, and the processor is used to call the program code stored in the memory to cause the electronic device to execute the information interaction method provided according to one or more embodiments of the present disclosure.
- a non-transitory computer storage medium is provided, which stores program code; when the program code is executed by a computer device, the computer device is caused to execute the information interaction method provided according to one or more embodiments of the present disclosure.
- the present disclosure provides a human-computer interaction method, apparatus, device, and storage medium to ensure the intuitiveness and interest of virtual object interaction in a virtual space, and to mobilize users' enthusiasm for live broadcast interaction in the virtual space.
- the technical solution of the present disclosure includes: pre-defining a throwable area in the virtual space. When any virtual object is thrown in the virtual space, the throwing position of the virtual object is first determined; then, based on the throwing position and the demarcated throwable area, the throwing special effect of the virtual object is presented in the virtual space, thereby ensuring the intuitiveness and accuracy of throwing virtual objects in the virtual space and enhancing the interactive fun and user atmosphere of virtual object interaction in the virtual space.
- Figure 7 is a flow chart of a human-computer interaction method provided by an embodiment of the present disclosure. This method can be applied to XR equipment, but is not limited thereto. This method can be executed by the human-computer interaction device provided by the present disclosure, wherein the human-computer interaction device can be implemented by any software and/or hardware.
- the human-computer interaction device can be configured in electronic equipment capable of simulating virtual scenes such as AR/VR/MR. This disclosure does not place any restrictions on the specific type of electronic equipment.
- the method may include the following steps:
- the virtual space can be a corresponding virtual environment simulated by the XR device for a certain live broadcast scene selected by any user, so as to display the corresponding live interactive information in the virtual space.
- the anchor is supported to select a certain type of live broadcast scene to build a corresponding virtual live broadcast environment as the virtual space in this disclosure, so that each audience can enter the virtual space to realize corresponding live broadcast interaction.
- multiple virtual screens such as live broadcast screen, control screen and public screen can be set up in the virtual space for different live broadcast functions to display different live broadcast contents respectively.
- the live video stream of the anchor can be displayed in the live screen so that users can watch the corresponding live screen.
- the control screen can display host information, online audience information, related live broadcast recommendation lists, and current live broadcast resolution options to facilitate users to perform various related live broadcast operations.
- Various user comment information, likes, gifts, etc. in the current live broadcast can be displayed on the public screen to facilitate users to manage the current live broadcast.
- live screen, control screen and public screen are all facing users and displayed at different locations in the virtual space. Furthermore, the position and style of any virtual screen can be adjusted to prevent it from blocking other virtual screens.
- a corresponding gift entrance will be displayed in the virtual space.
- when a user wants to give a virtual gift to the anchor, the gift portal is triggered first, for example by selecting the gift portal with the handle cursor or by controlling the hand model to click on the gift portal. After the triggering operation on the gift entrance is detected, the corresponding gift panel is displayed in the virtual space.
- in some embodiments, the gift panel displays a variety of different types of virtual gifts so that the user can independently select a virtual gift to give to the host.
- the present disclosure can detect in real time whether the user performs a corresponding throwing operation on the selected virtual object to determine whether the virtual object needs to be thrown into the virtual space.
- after detecting the throwing operation of any virtual object in the virtual space, the present disclosure first analyzes the presentation form of the virtual object when it is thrown toward the host, so as to determine the final position that the virtual object can reach when thrown into the virtual space, which is used as the throwing position in the present disclosure.
- the throwing position of the virtual object in the virtual space is first determined; then, based on the throwing position and the demarcated throwable area in the virtual space, the throwing special effect of the virtual object is presented in the virtual space. This ensures the intuitiveness and accuracy of throwing virtual objects in the virtual space, enhances the interactive fun and user atmosphere of virtual object interaction, and mobilizes users' enthusiasm for live broadcast interaction in the virtual space.
- the throwing special effects in the virtual space usually need to be presented between the user and the host's live video stream (that is, the live screen) so that the user can watch them and judge whether the virtual object is thrown successfully. If, after a virtual object is thrown in the virtual space, the user cannot see its throwing special effects, the interaction between the user and the anchor regarding the virtual object cannot be guaranteed.
- the present disclosure can predetermine a throwable area of the virtual object in the virtual space according to the user's position in the virtual space and the position of the live broadcast screen, so as to ensure that the user can see the corresponding throwing special effect for any virtual gift that falls within the throwable area after being thrown.
- after determining the throwing position of any virtual object in the virtual space, whether the throw can be successfully executed is judged by determining whether the throwing position is within the defined throwable area in the virtual space. When it is determined that the virtual object is thrown into the throwable area, it means that the throw is successfully executed, and the throwing special effect of the virtual object can be presented in the virtual space.
- the present disclosure can determine the throwable area in the virtual space based on the user's visible area in the virtual space and the preset throwing range.
- the present disclosure can determine the visible area when the user faces the live broadcast screen based on the relative positional relationship between the user's position in the virtual space and the position of the live broadcast screen. Moreover, since the user's visible area is relatively large, in order to ensure that the special effects are concentrated when the virtual object is thrown to the anchor, the present disclosure can also set a throwing range, such as an angle range of 0-170 degrees when the user faces the live screen. Then, by judging the overlap area between the user's visual area and the preset throwing range, the throwable area in the virtual space can be determined. Furthermore, other areas in the virtual space except the throwable area can be used as non-throwing areas in the virtual space.
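The overlap test described above can be reduced, as a rough sketch, to an angle check between the user's facing direction and the direction from the user to a candidate point. Treating the visible area as a cone around the facing direction is a simplifying assumption, and the disclosure does not specify whether the 170 degrees is a total range or a per-side range; the sketch below treats it as a total cone angle:

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def in_throwable_area(user_pos, facing_dir, point, max_angle_deg=170.0):
    """Return True if `point` lies within the angular throwing range
    around the user's facing direction (a stand-in for the overlap of
    the user's visible area and the preset throwing range)."""
    to_point = tuple(p - u for p, u in zip(point, user_pos))
    if all(abs(c) < 1e-9 for c in to_point):
        return True  # the user's own position is trivially inside
    f = _normalize(facing_dir)
    d = _normalize(to_point)
    cos_angle = max(-1.0, min(1.0, sum(a * b for a, b in zip(f, d))))
    # half-angle of the throwing cone: a 170-degree total range
    # gives 85 degrees on each side of the facing direction
    return math.degrees(math.acos(cos_angle)) <= max_angle_deg / 2.0
```

Everything outside this cone, including the areas above the user's head, at the feet, and behind the user, would then count as the non-throwing area.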
- the present disclosure can set the area that the user faces toward the live broadcast screen and that is within 170 degrees as the throwable area, and the other areas as non-throwable areas.
- the area where the line of sight is blocked by the gift panel also belongs to the non-throwing area. That is to say, the blind spots and sight-obstructed areas in the virtual space, such as above the user's head, at the user's feet, and behind the user, are all designated as non-throwing areas.
- the present disclosure can set a throwing boundary for the user within the throwable area.
- the throwing boundary is equivalent to a virtual wall.
- the throwing boundary in the throwable area can be located a small distance from the user, and the height of the throwing boundary can be consistent with the height of the live broadcast screen, so as to ensure the boundary integrity of the throwing boundary within the throwable area.
- the throwing position of the virtual object in the virtual space can exist in the following three situations:
- Scenario 1: the virtual object lands at a first landing point within the throwable area. That is, the virtual object lands on the ground within the throwable area, and the first landing point is used as the throwing position of the virtual object in the present disclosure.
- Scenario 2: the virtual object lands at a second landing point within the demarcated non-throwing area in the virtual space. That is, the virtual object falls on the ground within the non-throwing area, and the second landing point is used as the throwing position of the virtual object in the present disclosure.
- Scenario 3: the virtual object intersects the throwing boundary defined in the throwable area. That is, the virtual object does not fall to the ground in the virtual space but touches the throwing boundary within the throwable area during the throwing process; the intersection point where the virtual object touches the throwing boundary is then regarded as the throwing position of the virtual object in the present disclosure.
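The three scenarios above can be unified in a small routine that walks a sampled throwing trajectory and returns the first event it encounters: crossing the throwing boundary inside the throwable area, or reaching the ground inside or outside the throwable area. Modeling the boundary as a plane of constant z and the ground as y = 0 is an illustrative assumption:

```python
def resolve_throw_position(trajectory, in_throwable, boundary_z):
    """Walk a sampled trajectory (list of (x, y, z) points, y up) and
    return (position, scenario):
      1 - first landing point inside the throwable area
      2 - landing point inside the non-throwing area
      3 - intersection with the throwing boundary (modeled here as a
          plane at z = boundary_z) while inside the throwable area."""
    for point in trajectory:
        x, y, z = point
        if z >= boundary_z and in_throwable(point):
            return point, 3          # touched the virtual wall first
        if y <= 0.0:                 # reached the ground
            return point, 1 if in_throwable(point) else 2
    # trajectory ended without landing; fall back to its last sample
    return trajectory[-1], 1 if in_throwable(trajectory[-1]) else 2
```

The returned scenario then determines whether the throw counts as successful, since scenarios 1 and 3 place the throwing position inside the throwable area.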
- after determining the throwing position of the virtual object in the virtual space, it can be determined whether the throwing position is within the throwable area, so as to determine whether the current throw can be successfully executed. If the throwing position is within the throwable area, the throw can be successfully executed, and the throwing special effect of the virtual object is presented in the virtual space.
- the throwing special effect can be a special effect of throwing the virtual object into the virtual space through the hand model, or a special effect of launching the virtual object into the virtual space through a throwing prop.
- if the throwing position is within the non-throwing area, it means that the throw cannot be successfully executed, so the virtual object is controlled to return to its original position after being presented in the virtual space for an expected period of time. That is to say, when the throw cannot be successfully executed, in response to the throwing operation, the present disclosure still controls the virtual object to be presented in the virtual space for a short period of time; after the presentation duration of the virtual object reaches the expected duration, the throwing special effect indicating a successful throw is not played, and instead the virtual gift is controlled to return from the virtual space to its original position to indicate to the user that the throw was not successful.
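The success/failure branch described above can be sketched as: if the resolved throwing position falls in the throwable area, play the throwing special effect; otherwise present the object briefly and return it to its original position. The effect-playing and presentation hooks below are placeholder callables, not APIs from the disclosure:

```python
def handle_throw(position, in_throwable, play_effect, present, return_home,
                 expected_duration=1.0):
    """Decide the outcome of a throw at `position`.

    play_effect(position)      -- plays the throwing special effect
    present(expected_duration) -- briefly shows the object in space
    return_home()              -- moves the object back to its origin
    Returns True if the throw succeeded."""
    if in_throwable(position):
        play_effect(position)    # throw succeeded
        return True
    # throw failed: show the object for the expected duration, then
    # send it back instead of playing the success effect
    present(expected_duration)
    return_home()
    return False
```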
- the throwing special effects of the virtual object may include but are not limited to: the spatial throwing trajectory of the virtual object in the virtual space, and the throwing special effects set based on the spatial throwing trajectory and/or the virtual object.
- the throwing special effect can be an animation effect displayed at the final throwing point after the throwing is completed.
- in the technical solution provided by the embodiments of the present disclosure, in response to the throwing operation of any virtual object in the virtual space, the throwing position of the virtual object in the virtual space is first determined; then, based on the throwing position and the demarcated throwable area in the virtual space, the throwing special effect of the virtual object is presented in the virtual space. This ensures the intuitiveness and accuracy of throwing virtual objects in the virtual space, enhances the interactive fun and user atmosphere of virtual object interaction, and mobilizes users' enthusiasm for live broadcast interaction in the virtual space.
- the present disclosure can use the following steps to illustrate the process of throwing any virtual object in the virtual space:
- in order to enrich the user's interactive operations when throwing virtual objects in the virtual space and avoid the single interaction of throwing any virtual object through the handle cursor and trigger key, the hand model simulated in the virtual space can be controlled through handle operations or gesture operations to perform corresponding movements, so as to perform a corresponding holding operation on any virtual object.
- after holding any virtual object, the hand model is required to perform various throwing-related motions in the virtual space in order to simulate the actual throwing process and enhance the diverse interactions when throwing virtual objects in the virtual space.
- in response to the hand model's holding operation on any virtual object, the user can be supported to input corresponding motion information into the XR device to indicate the specific motion performed by the hand model, such as pressing various direction buttons on the handle, controlling the handle to perform corresponding movements, or performing corresponding movement gestures with the hand. This motion information can represent the movements that the hand model needs to perform after holding the virtual object, and based on it, a movement instruction initiated by the user toward the hand model can be generated.
- it is also necessary to determine in real time the movement posture change amount of the hand model after holding the virtual object, so as to determine whether the movement posture change amount satisfies the throwing trigger condition of the held virtual object.
- The present disclosure can determine the movement posture change amount of the hand model while it holds the virtual object, determine the throwing trajectory of the virtual object in the virtual space based on that change amount, and then determine the throwing position of the virtual object in the virtual space based on the throwing trajectory.
- From the movement posture changes performed by the hand model after holding the virtual object, information such as the direction and speed at which the hand model drives the held virtual object through the virtual space can be determined. Then, based on this direction and speed information, the motion trajectory that the virtual object can still follow under the action of inertia after the grip is cancelled is taken as the throwing trajectory in the present disclosure. Furthermore, the throwing position of the virtual object in the virtual space can be determined from the throwing trajectory.
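To make this step concrete, the following is a minimal sketch, not the disclosure's actual implementation: it assumes the hand model's positional change over a sampling interval yields a release velocity, and integrates a simple ballistic flight under gravity to obtain the throwing position. All function names, the coordinate layout, and the gravity model are illustrative assumptions.

```python
# Gravity constant and the (x, y, z) layout are assumptions for this sketch.
GRAVITY = 9.8  # m/s^2, acting along -y

def release_velocity(pose_delta, dt):
    """Estimate the velocity imparted to the held object from the hand
    model's positional change (dx, dy, dz) over the sampling interval dt."""
    dx, dy, dz = pose_delta
    return (dx / dt, dy / dt, dz / dt)

def throwing_trajectory(release_pos, velocity, steps=50, dt=0.02):
    """Integrate the inertial flight of the object after the grip is
    cancelled: a simple ballistic simulation under gravity."""
    x, y, z = release_pos
    vx, vy, vz = velocity
    points = [(x, y, z)]
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        z += vz * dt
        vy -= GRAVITY * dt  # gravity only alters the vertical component
        points.append((x, y, z))
        if y <= 0.0:  # stop once the object reaches the floor plane
            break
    return points

def throwing_position(trajectory):
    """Take the throwing position as the final point of the trajectory."""
    return trajectory[-1]
```

In practice an XR runtime would supply tracked controller velocities directly; the sketch only shows how a throwing position could follow from the motion posture change.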
- For the actual throwing operation of any virtual object, the present disclosure can set a throwing trigger condition for the virtual object in advance.
- virtual objects in the virtual space can be divided into throwable objects and non-throwable objects to enhance the diversity of virtual object throwing in the virtual space.
- Throwable objects can be virtual objects that the hand model can successfully send to the host by directly throwing them in the virtual space, such as individual emoticon gifts, heart-shaped gifts, etc.
- Non-throwable objects can be virtual objects that are stored in corresponding throwing props and require the hand model to perform a corresponding interactive operation with the held throwing prop before they can be successfully thrown to the host, such as bubble gifts given through a bubble wand, or hot air balloons activated by a heating device.
- The throwing trigger condition of a throwable object can be set as follows: the hand model is in the grip-cancellation posture after performing a throwing motion or a continuous throwing motion. That is, based on the movement posture changes performed by the hand model after holding the throwable object, it is determined whether the hand model has performed a throwing motion or a continuous throwing motion, and whether, after performing that motion, it cancelled its grip on the throwable object so that it ends in the grip-cancellation posture. If this throwing trigger condition is met, it means that the held virtual object has been thrown in the virtual space through the hand model, that is, the corresponding interactive operation of throwing the object toward the host needs to be performed in the virtual space.
- the throwing trigger condition of the non-throwable object can be set as follows: the target part of the hand model that interacts with the throwing prop performs the throwing operation set by the throwing prop.
- When the hand model is holding a non-throwable object, it is represented as a hand model holding the corresponding throwing prop. For example, if a bubble gift is given through a bubble wand, the hand model holds the bubble wand taken from the gift panel in order to emit the corresponding bubble gift.
- Then, the target part of the hand model that interacts with the throwing prop performs the throwing operation set by the throwing prop. If this throwing trigger condition for non-throwable objects is met, it means that the non-throwable object stored in the throwing prop can be sent out in the virtual space through the corresponding interaction between the target part of the hand model and the throwing prop, that is, the corresponding interactive operation of sending the non-throwable object toward the host needs to be performed in the virtual space.
- After determining the movement posture change amount performed by the hand model after holding the virtual object, the present disclosure first determines the throwing trigger condition of the held virtual object. Then, by judging whether the movement posture change amount satisfies that throwing trigger condition, it is judged whether the virtual object needs to be thrown toward the host in the virtual space. When the movement posture change amount meets the throwing trigger condition of the held virtual object, it means that the user has instructed the virtual object to be thrown toward the host, and the throwing operation of the virtual object is thereby determined.
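The two trigger conditions above can be sketched as a single check over the motion posture changes. This is an illustrative assumption about how the pose stream might be summarized into discrete events; the event names are invented for the example and are not from the disclosure.

```python
def throw_triggered(obj_throwable, pose_events):
    """Return True when the recorded pose events satisfy the object's
    throwing trigger condition.

    pose_events: ordered list of event names derived from the hand model's
    movement posture changes, e.g. ["grip", "throw_motion", "grip_cancel"].
    """
    if obj_throwable:
        # Throwable object: a throwing (or continuous throwing) motion,
        # ending in the grip-cancellation posture.
        has_throw = ("throw_motion" in pose_events
                     or "continuous_throw" in pose_events)
        ends_released = bool(pose_events) and pose_events[-1] == "grip_cancel"
        return has_throw and ends_released
    # Non-throwable object: the target part of the hand model performs the
    # interaction defined by the throwing prop (e.g. waving a bubble wand).
    return "prop_interaction" in pose_events
```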
- the throwing special effect of the virtual object can be presented in the virtual space according to the throwing position of the virtual object in the virtual space and the throwable area in the virtual space.
- The throwing operation can be performed once or continuously, so virtual gifts have different throwing types. Therefore, when presenting the throwing special effect of a virtual object in the virtual space, the present disclosure also determines, based on the change in movement posture of the hand model after holding the virtual object, whether the motion performed by the hand model is a one-time throw or a continuous throw, thereby determining the throwing type of the virtual object. The throwing special effect of the virtual object under the corresponding throwing type can then be presented in the virtual space.
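A minimal way to derive the throwing type from the same event stream might count the throwing motions detected in the posture changes. The event representation is the same illustrative assumption as above.

```python
def classify_throw(pose_events):
    """Classify the throw as one-time or continuous from the number of
    throwing motions detected in the pose-event stream (illustrative)."""
    throws = sum(1 for e in pose_events if e == "throw_motion")
    if throws == 0:
        return None  # no throw detected
    return "continuous" if throws > 1 else "single"
```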
- Figure 12(A) can show the special effect when a virtual gift is thrown once in the virtual space through the hand model.
- Figure 12(B) can show the special effect when multiple virtual gifts are continuously thrown in the virtual space through the hand model.
- Figure 12(C) can show the special effect when a gift-giving prop is held by the hand model and used to continuously emit multiple virtual gifts in the virtual space.
- When interacting with any virtual object in the virtual space through the hand model, the present disclosure can control the XR device (such as a real handle) to perform different degrees of vibration according to the different interactive operations performed by the hand model on the virtual object.
- The interactive virtual objects can be the gift entrance, the gift panel, each virtual gift in the gift panel, related user interaction controls, etc.
- For some interactive operations, the XR device can be controlled to perform a slight vibration, while for other interactive operations the XR device can be controlled to perform a stronger vibration.
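One simple realization of this graded haptic feedback is a lookup from interaction type to vibration amplitude. The interaction names and amplitude values below are illustrative assumptions, not an actual XR SDK API.

```python
# Hypothetical mapping from interaction type to controller vibration
# amplitude (0.0 = none, 1.0 = strongest). Values are illustrative.
VIBRATION_LEVELS = {
    "hover": 0.2,  # lightly touching a gift entrance or panel item
    "grab": 0.5,   # holding a virtual gift
    "throw": 0.9,  # completing a throwing operation
}

def vibration_amplitude(interaction):
    """Slight vibration for light interactions, stronger vibration for
    heavier ones; unknown interactions produce no vibration."""
    return VIBRATION_LEVELS.get(interaction, 0.0)
```

The amplitude returned here would then be passed to the runtime's haptics call (e.g. a vibration request on the active controller).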
- When throwing any virtual object in the virtual space, the throw may fail due to insufficient throwing distance, a wrong throwing area, network problems, etc. Therefore, while presenting the corresponding throwing special effect in the virtual space, the present disclosure also detects in real time whether the virtual object is thrown successfully. If the throw fails, the virtual object has already been thrown into the virtual space toward the host; the present disclosure can therefore control the virtual object to fold back from its position in the virtual space to its original position.
- the present disclosure can present the collision special effect of the virtual object in the virtual space.
- For an elastic ball, for example, the collision special effect can be set as a rebound special effect, so that the elastic ball is bounced off and then disappears from the virtual space.
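A rebound special effect like this is usually driven by reflecting the object's velocity about the collision surface normal. The following is a standard reflection formula sketched in isolation; the restitution factor is an assumed tuning parameter, not a value from the disclosure.

```python
def rebound_velocity(velocity, normal, restitution=0.6):
    """Reflect the incoming velocity about the (unit) collision surface
    normal, scaled by a restitution factor, to drive a rebound effect."""
    vx, vy, vz = velocity
    nx, ny, nz = normal
    dot = vx * nx + vy * ny + vz * nz  # component along the normal
    return (
        restitution * (vx - 2 * dot * nx),
        restitution * (vy - 2 * dot * ny),
        restitution * (vz - 2 * dot * nz),
    )
```

For a ball falling straight down onto a floor with upward normal, this yields an upward velocity scaled by the restitution factor, after which the effect system can fade the object out.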
- Since any virtual object is held by the hand model, which drives it to perform corresponding movements in the virtual space, the user may actively give up throwing the virtual object to the anchor. In that case, before the virtual object would be thrown to the anchor, the hand model cancels its grip on the virtual object to indicate that the user is actively giving up the throw. Therefore, the present disclosure can set a return condition for determining whether the hand model cancels its grip on the virtual object without performing the corresponding throwing operation.
- The present disclosure can determine whether the movement posture change amount satisfies the return condition of the virtual object, and thereby determine whether the user actively gives up throwing the virtual object.
- If the movement posture change amount indicates that the hand model performed the grip-cancellation operation without performing the throwing operation, the movement posture change amount meets the return condition of the virtual object. Therefore, the virtual object, no longer held by the hand model, can be controlled to return from the position of the hand model to its original position in the virtual space.
- Considering how the hand model cancels its grip on the virtual object, as shown in Figure 14, there may be the following two situations: 1) after the hand model performs the corresponding movement, it cancels its grip on the virtual object directly at some motion point, that is, the grip-cancellation operation is performed in place at that motion point; 2) the hand model drives the held virtual object through the corresponding movements and, after returning above the object's original position in the virtual space, cancels its grip, that is, the grip-cancellation operation is performed only after the hand model has driven the virtual object back from some motion point to its original position in the virtual space.
- For situation 1), the present disclosure can set a first homing condition: the hand model performs the grip-cancellation operation at some motion point while holding the virtual object. That is, the hand model moves to some motion point after holding the virtual object and performs the grip-cancellation operation in place at that point. Therefore, when the movement posture change amount of the hand model satisfies the first homing condition, the present disclosure can control the virtual object to perform a preset vertical movement downward from the hand model and then return to its original position in the virtual space.
- To simulate the effect of gravity on the virtual object after it is released by the hand model, the virtual object can be controlled to perform the preset vertical downward movement for a short period of time.
- The downward movement distance of the preset vertical movement can be determined based on the height of the hand model's position when it cancels its grip on the virtual object and the height of the original position. Normally, the height of the hand model when it cancels the grip is greater than the height of the original position.
- the present disclosure can set a second homing condition.
- The second homing condition can be: the hand model performs the grip-cancellation operation after holding the virtual object and moving back above the virtual object's original position in the virtual space. That is, after grasping the virtual object, the hand model moves back above the object's original position and performs the grip-cancellation operation there. Therefore, when the movement posture change amount of the hand model satisfies the second homing condition, the present disclosure can control the virtual object to return from its current position to its original position in the virtual space.
- Since the hand model has driven the virtual object back above its original position in the virtual space, the virtual object can be controlled, after the grip is cancelled, to return directly from its current position above the original position to its original position in the virtual space.
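The two homing conditions can be summarized as a small decision function. The event encoding, the position tuples, and the horizontal tolerance used to decide "above the original position" are all illustrative assumptions for this sketch.

```python
def homing_action(pose_events, hand_pos, original_pos, tol=0.05):
    """Decide how a released (un-thrown) object returns to its original
    position.

    First homing condition: grip cancelled in place at some motion point
    -> short vertical drop from the hand, then return to the original spot.
    Second homing condition: grip cancelled after the hand has moved back
    above the original position -> return directly from the current spot.
    Positions are (x, y, z) tuples; tol is an assumed horizontal tolerance.
    """
    gave_up = (bool(pose_events)
               and pose_events[-1] == "grip_cancel"
               and "throw_motion" not in pose_events)
    if not gave_up:
        return None  # a throw happened (or still held): no homing needed
    x, _, z = hand_pos
    ox, _, oz = original_pos
    above_original = abs(x - ox) < tol and abs(z - oz) < tol
    if above_original:
        return "return_from_current"  # second homing condition
    return "drop_then_return"         # first homing condition
```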
- FIG. 9 is a schematic diagram of a human-computer interaction device provided by an embodiment of the present disclosure.
- the human-computer interaction device 900 can be configured in an XR device.
- the human-computer interaction device 900 includes:
- the throwing position determination module 910 is used to determine the throwing position of the virtual object in the virtual space in response to the throwing operation of any virtual object in the virtual space;
- the throwing module 920 is configured to present the throwing special effect of the virtual object in the virtual space according to the throwing position and the defined throwable area in the virtual space.
- the throwing module 920 can be used to:
- if the throwing position is within the throwable area, the throwing special effect of the virtual object is presented in the virtual space;
- the virtual object is controlled to return to its original position in the virtual space after being presented in the virtual space for a desired duration.
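The throwing module's behavior can be sketched as follows, assuming (purely for illustration) that the throwable area is an axis-aligned box and that effect presentation and homing are callbacks into the rendering layer:

```python
def present_throw(throw_pos, throwable_area, present_effect, return_home,
                  display_duration=2.0):
    """Present the throwing special effect only when the throwing position
    falls inside the throwable area, then return the object to its original
    position after the desired display duration.

    throwable_area: ((min_x, min_y, min_z), (max_x, max_y, max_z)) box,
    an assumed representation; present_effect/return_home are callbacks.
    """
    lo, hi = throwable_area
    inside = all(lo[i] <= throw_pos[i] <= hi[i] for i in range(3))
    if inside:
        present_effect(throw_pos)              # show the throw special effect
        return_home(after=display_duration)    # schedule the return
        return True
    return False                               # outside the throwable area
```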
- the throwing position of the virtual object in the virtual space includes one of the following position points:
- the intersection point of the virtual object and the throwing boundary defined within the throwable area.
- Optionally, the human-computer interaction device 900 may also include:
- An area dividing module is used to determine the throwable area in the virtual space according to the user's visible area in the virtual space and the preset throwing range.
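As an illustrative sketch of the area dividing module, the throwable area might be derived as the intersection of the user's visible area and a preset throwing range. Encoding both as axis-aligned boxes is an assumption made only for this example.

```python
def throwable_area(visible_area, throw_range):
    """Derive the throwable area as the intersection of the user's visible
    area and a preset throwing range, both given as axis-aligned boxes
    ((min_x, min_y, min_z), (max_x, max_y, max_z)) - an assumed encoding."""
    (v_lo, v_hi), (r_lo, r_hi) = visible_area, throw_range
    lo = tuple(max(v_lo[i], r_lo[i]) for i in range(3))
    hi = tuple(min(v_hi[i], r_hi[i]) for i in range(3))
    if any(lo[i] > hi[i] for i in range(3)):
        return None  # no overlap: nothing can be thrown
    return (lo, hi)
```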
- The throwing special effects of the virtual object include: a spatial throwing trajectory of the virtual object in the virtual space, and a throwing special effect set based on the spatial throwing trajectory and/or the virtual object.
- The throwing operation of any virtual object in the virtual space is determined by a throwing operation determination module. This throwing operation determination module can be used to:
- determine the throwing operation of the virtual object when the movement posture change amount of the hand model satisfies the throwing trigger condition of the held virtual object.
- the throwing position determination module 910 can be used to:
- the throwing position of the virtual object in the virtual space is determined according to the throwing trajectory.
- The throwing trigger condition of the virtual object at least includes: the hand model being in a grip-cancellation posture after performing a throwing motion or a continuous throwing motion;
- alternatively, the throwing trigger condition of the virtual object at least includes: the target part of the hand model that interacts with the throwing prop executing the throwing operation set by the throwing prop.
- the human-computer interaction device 900 may also include:
- a homing module, configured to control the virtual object to fold back from the hand model to the virtual object's original position in the virtual space when the movement posture change amount meets the homing condition of the virtual object.
- the human-computer interaction device 900 may also include:
- a throwing failure module, configured to control the virtual object to return from the virtual space to its original position in the virtual space if the virtual object fails to be thrown in the virtual space.
- the human-computer interaction device 900 may also include:
- a collision module configured to present a collision special effect of the virtual object in the virtual space if the virtual object collides with any other virtual object in the virtual space.
- the human-computer interaction device 900 may also include:
- a vibration module is used to control the XR device to perform different degrees of vibration according to different interactive operations performed by the hand model toward any virtual object in the virtual space.
- With the human-computer interaction device provided by the embodiments of the present disclosure, in response to a throwing operation on any virtual object in the virtual space, the throwing position of the virtual object in the virtual space is first determined. Then, based on the throwing position and the throwable area demarcated in the virtual space, the throwing special effect of the virtual object is presented in the virtual space. This ensures the intuitiveness and accuracy of throwing virtual objects in the virtual space, enhances the interactive fun of virtual objects and the interactive atmosphere among users, and mobilizes the enthusiasm of users for live broadcasting in the virtual space.
- The device 900 shown in Figure 15 can execute any method embodiment provided by the present disclosure. The foregoing and other operations and/or functions of each module in the device 900 shown in Figure 15 respectively implement the corresponding processes of the above method embodiments, which will not be repeated here for brevity.
- the software module may be located in a mature storage medium in the field such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, register, etc.
- the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps in the above method embodiment in combination with its hardware.
- Figure 10 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure.
- the electronic device 1000 may include:
- a memory 1010 and a processor 1020, where the memory 1010 is used to store a computer program and transmit the program code to the processor 1020.
- the processor 1020 can call and run the computer program from the memory 1010 to implement the method in the embodiment of the present disclosure.
- the processor 1020 may be configured to execute the above method embodiments according to instructions in the computer program.
- The processor 1020 may include, but is not limited to: a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), etc.
- the memory 1010 includes, but is not limited to:
- The non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or flash memory.
- the volatile memory may be random access memory (RAM), which is used as an external cache.
- By way of example but not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), Synchronous Link Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DR RAM).
- the computer program can be divided into one or more modules, and the one or more modules are stored in the memory 1010 and executed by the processor 1020 to complete the tasks provided by the present disclosure.
- the one or more modules may be a series of computer program instruction segments capable of completing specific functions. The instruction segments are used to describe the execution process of the computer program on the electronic device 1000 .
- the electronic device may also include:
- Transceiver 1030 which may be connected to the processor 1020 or the memory 1010.
- the processor 1020 can control the transceiver 1030 to communicate with other devices. For example, it can send information or data to other devices, or receive information or data sent by other devices.
- Transceiver 1030 may include a transmitter and a receiver.
- the transceiver 1030 may further include an antenna, and the number of antennas may be one or more.
- The components of the electronic device 1000 can be connected through a bus system, where, in addition to a data bus, the bus system also includes a power bus, a control bus, and a status signal bus.
- the present disclosure also provides a computer storage medium on which a computer program is stored.
- When the computer program is executed by a computer, the computer is enabled to perform the method of the above method embodiments.
- An embodiment of the present disclosure also provides a computer program product containing instructions, which when executed by a computer causes the computer to perform the method of the above method embodiment.
- the computer program product includes one or more computer instructions.
- the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
- The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (such as infrared, radio, or microwave).
- The computer-readable storage medium can be any available medium that can be accessed by the computer, or a data storage device, such as a server or data center, that integrates one or more available media.
- the available media may be magnetic media (eg, floppy disk, hard disk, tape), optical media (eg, digital video disc (DVD)), or semiconductor media (eg, solid state disk (SSD)), etc.
Description
Cross-references to related applications
This application is based on the application with CN application number 202210992896.0, filed on August 18, 2022, and the application with CN application number 202211131665.7, filed on September 15, 2022, and claims their priority. The disclosures of these CN applications are hereby incorporated into the present disclosure in their entirety.
The present disclosure relates to the fields of computer technology and extended reality (XR), and specifically relates to an information interaction method, device, electronic device and storage medium, as well as a human-computer interaction method, device, equipment and storage medium.
With the development of virtual reality (VR) technology, more and more virtual live broadcast platforms or applications have been developed for users. On a virtual live broadcast platform, users can watch an anchor's performance through, for example, a head-mounted display device and related accessories, and can interact with the anchor through emoticons, bullet comments, virtual gifts, etc.
At present, the application scenarios of XR technology are becoming more and more extensive, including virtual reality (VR), augmented reality (AR), mixed reality (MR), etc. In virtual live broadcast scenarios, XR technology allows users to immersively watch various virtual live broadcast images; for example, users can experience realistic live interactive scenes by wearing a head-mounted display (HMD).
Under normal circumstances, viewers can like, comment on, and give virtual gifts to their favorite anchors to enhance user interaction in virtual live broadcast scenarios. However, in a virtual live broadcast scenario, the audience usually completes the interaction with the anchor on a virtual object in the virtual space by emitting a cursor ray to select the virtual object and touching the Trigger key of the handle.
Contents of the invention
This Summary is provided to introduce concepts in a simplified form that are further described in the Detailed Description below. This Summary is not intended to identify key features or essential features of the claimed technical solution, nor is it intended to be used to limit the scope of the claimed technical solution.
In a first aspect, according to one or more embodiments of the present disclosure, an information interaction method is provided, including:
determining the moving end point of the first-sent target message among two or more continuously sent target messages;
determining the moving end points of the remaining target messages among the two or more target messages based on the moving end point of the first-sent target message;
sending the two or more target messages based on the determined moving end points of the two or more target messages.
In a second aspect, according to one or more embodiments of the present disclosure, an information interaction device is provided, including:
a first end point determination unit, used to determine the moving end point of the first-sent target message among two or more continuously sent target messages;
a second end point determination unit, used to determine the moving end points of the remaining target messages among the two or more target messages based on the moving end point of the first-sent target message;
a message display unit, used to send the two or more target messages based on the determined moving end points of the two or more target messages.
In a third aspect, according to one or more embodiments of the present disclosure, an electronic device is provided, including: at least one memory and at least one processor; wherein the memory is used to store program code, and the processor is used to call the program code stored in the memory so that the electronic device executes the information interaction method provided according to one or more embodiments of the present disclosure.
In a fourth aspect, according to one or more embodiments of the present disclosure, a non-transitory computer storage medium is provided, which stores program code that, when executed by a computer device, causes the computer device to execute the information interaction method provided according to one or more embodiments of the present disclosure.
In a fifth aspect, embodiments of the present disclosure provide a human-computer interaction method applied to an XR device, the method including:
in response to a throwing operation on any virtual object in a virtual space, determining the throwing position of the virtual object in the virtual space;
presenting the throwing special effect of the virtual object in the virtual space according to the throwing position and the throwable area demarcated in the virtual space.
In a sixth aspect, embodiments of the present disclosure provide a human-computer interaction device configured in an XR device, the device including:
a throwing position determination module, used to determine the throwing position of the virtual object in the virtual space in response to a throwing operation on any virtual object in the virtual space;
a throwing module, used to present the throwing special effect of the virtual object in the virtual space according to the throwing position and the throwable area demarcated in the virtual space.
In a seventh aspect, embodiments of the present disclosure provide an electronic device, including: a processor and a memory, where the memory is used to store a computer program, and the processor is used to call and run the computer program stored in the memory to execute the human-computer interaction method provided in the fifth aspect of the present disclosure.
In an eighth aspect, embodiments of the present disclosure provide a computer-readable storage medium for storing a computer program, where the computer program causes a computer to execute the human-computer interaction method provided in the fifth aspect of the present disclosure.
In a ninth aspect, embodiments of the present disclosure provide a computer program product, including a computer program/instructions, where the computer program/instructions cause a computer to execute the human-computer interaction method provided in the fifth aspect of the present disclosure.
In a tenth aspect, according to one or more embodiments of the present disclosure, a computer program is provided, including instructions that, when executed by a processor, cause the processor to execute the information interaction method or the human-computer interaction method of any of the above embodiments.
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic, and that components and elements are not necessarily drawn to scale.
图1为根据本公开一些实施例虚拟现实设备的示意图。Figure 1 is a schematic diagram of a virtual reality device according to some embodiments of the present disclosure.
图2为根据本公开另一些实施例提供的虚拟现实设备的虚拟视场的示意图。FIG. 2 is a schematic diagram of a virtual field of view of a virtual reality device according to other embodiments of the present disclosure.
图3为本公开一些实施例提供的信息交互方法的流程图。Figure 3 is a flow chart of an information interaction method provided by some embodiments of the present disclosure.
图4为根据本公开一些实施例提供的虚拟现实空间的示意图。Figure 4 is a schematic diagram of a virtual reality space provided according to some embodiments of the present disclosure.
图5为根据本公开另一些实施例提供的虚拟现实空间的示意图。Figure 5 is a schematic diagram of a virtual reality space provided according to other embodiments of the present disclosure.
图6为根据本公开一些实施例提供的电子设备的结构示意图。Figure 6 is a schematic structural diagram of an electronic device according to some embodiments of the present disclosure.
图7为本公开实施例提供的一种人机交互方法的流程图。Figure 7 is a flow chart of a human-computer interaction method provided by an embodiment of the present disclosure.
图8为本公开实施例提供的虚拟空间的架构示意图。Figure 8 is a schematic diagram of the architecture of a virtual space provided by an embodiment of the present disclosure.
图9(A)和图9(B)为本公开实施例提供的虚拟空间内可投掷区域的一种示例性示意图。Figures 9(A) and 9(B) are exemplary schematic diagrams of a throwable area in a virtual space provided by an embodiment of the present disclosure.
图10(A)和图10(B)为本公开实施例提供的虚拟空间内可投掷区域的另一种示例性示意图。Figures 10(A) and 10(B) are further exemplary schematic diagrams of a throwable area in a virtual space provided by an embodiment of the present disclosure.
图11为本公开实施例提供的虚拟空间内投掷任一虚拟对象的方法流程图。Figure 11 is a flow chart of a method for throwing any virtual object in a virtual space provided by an embodiment of the present disclosure.
图12(A)为本公开实施例提供的通过手部模型在虚拟空间内投掷一次虚拟礼物时的特效的示意图。Figure 12(A) is a schematic diagram of the special effect when a virtual gift is thrown once in the virtual space through a hand model according to an embodiment of the present disclosure.
图12(B)为本公开实施例提供的通过手部模型在虚拟空间内连续投掷多次虚拟礼物时的特效的示意图。Figure 12(B) is a schematic diagram of the special effects when a hand model is used to continuously throw multiple virtual gifts in the virtual space provided by an embodiment of the present disclosure.
图12(C)为本公开实施例提供的通过手部模型握持礼物赠送道具,而由礼物赠送道具在虚拟空间内连续发出多次虚拟礼物时的特效的示意图。Figure 12(C) is a schematic diagram of the special effects when the gift-giving prop is held by a hand model and the gift-giving prop is used to continuously emit multiple virtual gifts in the virtual space according to an embodiment of the present disclosure.
图13为本公开实施例提供的虚拟对象与任一其他虚拟对象发生碰撞的示例性示意图。Figure 13 is an exemplary schematic diagram of a virtual object colliding with any other virtual object provided by an embodiment of the present disclosure.
图14为本公开实施例提供的通过手部模型取消握持虚拟对象而放弃投掷的示意图。FIG. 14 is a schematic diagram of canceling the holding of the virtual object through the hand model and giving up throwing according to an embodiment of the present disclosure.
图15为本公开实施例提供的一种人机交互装置的示意图。Figure 15 is a schematic diagram of a human-computer interaction device provided by an embodiment of the present disclosure.
图16是本公开实施例提供的电子设备的示意性框图。Figure 16 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure.
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of the present disclosure.
应当理解,本公开的实施方式中记载的步骤可以按照不同的顺序执行,和/或并行执行。此外,实施方式可以包括附加的步骤和/或省略执行示出的步骤。本公开的范围在此方面不受限制。It should be understood that the steps described in embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, embodiments may include additional steps and/or omit performance of illustrated steps. The scope of the present disclosure is not limited in this regard.
本文使用的术语“包括”及其变形是开放性包括,即“包括但不限于”。术语“基于”是“至少部分地基于”。术语“一个实施例”表示“至少一个实施例”;术语“另一实施例”表示“至少一个另外的实施例”;术语“一些实施例”表示“至少一些实施例”。术语“响应于”以及有关的术语是指一个信号或事件被另一个信号或事件影响到某个程度,但不一定是完全地或直接地受到影响。如果事件x“响应于”事件y而发生,则x可以直接或间接地响应于y。例如,y的出现最终可能导致x的出现,但可能存在其它中间事件和/或条件。在其它情形中,y可能不一定导致x的出现,并且即使y尚未发生,x也可能发生。此外,术语“响应于”还可以意味着“至少部分地响应于”。 As used herein, the term "include" and its variations are open-ended, i.e., "including but not limited to." The term "based on" means "based at least in part on." The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; and the term "some embodiments" means "at least some embodiments." The term "in response to" and related terms mean that one signal or event is affected by another signal or event to some extent, but not necessarily completely or directly. If event x occurs "in response to" event y, x may respond to y directly or indirectly. For example, the occurrence of y may eventually lead to the occurrence of x, but there may be other intermediate events and/or conditions. In other cases, y may not necessarily cause x to occur, and x may occur even if y has not yet occurred. Furthermore, the term "in response to" may also mean "at least in part in response to."
术语“确定”广泛涵盖各种各样的动作,可包括获取、演算、计算、处理、推导、调研、查找(例如,在表、数据库或其他数据结构中查找)、探明、和类似动作,还可包括接收(例如,接收信息)、访问(例如,访问存储器中的数据)和类似动作,以及解析、选择、选取、建立和类似动作等等。其他术语的相关定义将在下文描述中给出。The term "determine" broadly encompasses a wide variety of actions, which may include obtaining, calculating, computing, processing, deriving, investigating, looking up (e.g., in a table, database, or other data structure), ascertaining, and the like; it may also include receiving (e.g., receiving information), accessing (e.g., accessing data in memory), and the like, as well as parsing, selecting, choosing, establishing, and the like. Relevant definitions of other terms will be given in the description below.
需要注意,本公开中提及的“第一”、“第二”等概念仅用于对不同的装置、模块或单元进行区分,并非用于限定这些装置、模块或单元所执行的功能的顺序或者相互依存关系。It should be noted that concepts such as "first" and "second" mentioned in this disclosure are only used to distinguish different devices, modules, or units, and are not used to limit the order of, or interdependence between, the functions performed by these devices, modules, or units.
需要注意,本公开中提及的“一个”、“多个”的修饰是示意性而非限制性的,本领域技术人员应当理解,除非在上下文另有明确指出,否则应该理解为“一个或多个”。It should be noted that the modifiers "a/an" and "plurality" mentioned in this disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
为了本公开的目的,短语“A和/或B”意为(A)、(B)或(A和B)。For the purposes of this disclosure, the phrase "A and/or B" means (A), (B) or (A and B).
本公开实施方式中的多个装置之间所交互的消息或者信息的名称仅用于说明性的目的,而并不是用于对这些消息或信息的范围进行限制。The names of messages or information exchanged between multiple devices in the embodiments of the present disclosure are for illustrative purposes only and are not used to limit the scope of these messages or information.
本公开实施例提供的方法可以用于虚拟现实空间中发送目标消息,例如表情、弹幕、礼物等。虚拟现实空间可以是对真实世界的仿真环境,也可以是半仿真半虚构的虚拟场景,还可以是纯虚构的虚拟场景。虚拟场景可以是二维虚拟场景、2.5维虚拟场景或者三维虚拟场景中的任意一种,本公开实施例对虚拟场景的维度不加以限定。例如,虚拟场景可以包括天空、陆地、海洋等,该陆地可以包括沙漠、城市等环境元素,用户可以控制虚拟对象在该虚拟场景中进行移动。The method provided by the embodiment of the present disclosure can be used to send target messages in the virtual reality space, such as emoticons, barrages, gifts, etc. Virtual reality space can be a simulation environment of the real world, a semi-simulation and semi-fictional virtual scene, or a purely fictitious virtual scene. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene. The embodiments of the present disclosure do not limit the dimensions of the virtual scene. For example, the virtual scene can include the sky, land, ocean, etc. The land can include environmental elements such as deserts and cities, and the user can control virtual objects to move in the virtual scene.
参考图1,用户可以通过例如头戴式VR眼镜等智能终端设备进入虚拟现实空间,并在虚拟现实空间中控制自己的虚拟角色(Avatar)与其他用户控制的虚拟角色进行社交互动、娱乐、学习、远程办公等。Referring to Figure 1, a user can enter a virtual reality space through a smart terminal device such as head-mounted VR glasses, and control his or her own avatar in the virtual reality space to engage in social interaction, entertainment, learning, remote work, and the like with avatars controlled by other users.
在一些实施例中,在虚拟现实空间中,用户可以通过控制器来实现相关的交互操作,该控制器可以为手柄,例如用户通过对手柄的按键的操作来进行相关的操作控制。当然在另外的实施例中,也可以不使用控制器而使用手势或者语音或者多模态控制方式来对虚拟现实设备中的目标对象进行控制。In some embodiments, in the virtual reality space, the user can implement related interactive operations through a controller, which can be a handle. For example, the user can perform related operation controls by operating buttons on the handle. Of course, in other embodiments, instead of using a controller, gestures or voice or multi-modal control methods may be used to control the target object in the virtual reality device.
本公开一个或多个实施例提供的信息交互方法采用扩展现实(Extended Reality,简称XR)技术。扩展现实技术可以通过计算机将真实与虚拟相结合,为用户提供可人机交互的虚拟现实空间。在虚拟现实空间中,用户可以通过例如头盔式显示器(Head Mount Display,HMD)等虚拟现实设备,进行社交互动、娱乐、学习、工作、远程办公、创作UGC(User Generated Content,用户生成内容)等。The information interaction method provided by one or more embodiments of the present disclosure adopts Extended Reality (XR) technology. Extended reality technology combines the real and the virtual through computers to provide users with a virtual reality space in which human-computer interaction is possible. In the virtual reality space, users can use virtual reality devices such as a head-mounted display (Head Mount Display, HMD) for social interaction, entertainment, learning, work, telecommuting, creating UGC (User Generated Content), and the like.
本公开实施例记载的虚拟现实设备可以包括但不限于如下几个类型:The virtual reality devices recorded in the embodiments of this disclosure may include but are not limited to the following types:
电脑端虚拟现实(PCVR)设备,利用PC端进行虚拟现实功能的相关计算以及数据输出,外接的电脑端虚拟现实设备利用PC端输出的数据实现虚拟现实的效果。Computer-side virtual reality (PCVR) equipment uses the PC side to perform calculations and data output related to virtual reality functions. The external computer-side virtual reality equipment uses the data output from the PC side to achieve virtual reality effects.
移动虚拟现实设备,支持以各种方式(如设置有专门的卡槽的头戴式显示器)设置移动终端(如智能手机),通过与移动终端有线或无线方式的连接,由移动终端进行虚拟现实功能的相关计算,并输出数据至移动虚拟现实设备,例如通过移动终端的APP观看虚拟现实视频。Mobile virtual reality devices support the installation of a mobile terminal (such as a smartphone) in various ways (such as a head-mounted display provided with a dedicated slot); through a wired or wireless connection with the mobile terminal, the mobile terminal performs the calculations related to the virtual reality functions and outputs the data to the mobile virtual reality device, for example, for viewing virtual reality videos through an APP on the mobile terminal.
一体机虚拟现实设备,具备用于进行虚拟功能的相关计算的处理器,因而具备独立的虚拟现实输入和输出的功能,不需要与PC端或移动终端连接,使用自由度高。The all-in-one virtual reality device has a processor for performing calculations related to virtual functions, so it has independent virtual reality input and output functions. It does not need to be connected to a PC or mobile terminal, and has a high degree of freedom in use.
当然虚拟现实设备实现的形态不限于此,可以根据需要可以进一步小型化或大型化。Of course, the form of the virtual reality device is not limited to this, and can be further miniaturized or enlarged as needed.
虚拟现实设备中设置有姿态检测的传感器(如九轴传感器),用于实时检测虚拟现实设备的姿态变化,如果用户佩戴了虚拟现实设备,那么当用户头部姿态发生变化时,会将头部的实时姿态传给处理器,以此计算用户的视线在虚拟环境中的注视点,根据注视点计算虚拟环境的三维模型中处于用户注视范围(即虚拟视场)的图像,并在显示屏上显示,使人仿佛在置身于现实环境中观看一样的沉浸式体验。The virtual reality device is provided with a posture-detection sensor (such as a nine-axis sensor) for detecting posture changes of the virtual reality device in real time. If a user wears the virtual reality device, then when the user's head posture changes, the real-time posture of the head is passed to the processor to calculate the gaze point of the user's line of sight in the virtual environment; based on the gaze point, the image of the three-dimensional model of the virtual environment that lies within the user's gaze range (i.e., the virtual field of view) is calculated and displayed on the display screen, providing an immersive experience as if the user were watching in a real environment.
图2示出了本公开一些实施例提供的虚拟现实设备的虚拟视场的示意图,使用水平视场角和垂直视场角来描述虚拟视场在虚拟环境中的分布范围,垂直方向的分布范围使用垂直视场角BOC表示,水平方向的分布范围使用水平视场角AOB表示,人眼通过透镜总是能够感知到虚拟环境中位于虚拟视场的影像,可以理解,视场角越大,虚拟视场的尺寸也就越大,用户能够感知的虚拟环境的区域也就越大。其中,视场角,表示通过透镜感知到环境时所具有的视角的分布范围。例如,虚拟现实设备的视场角,表示通过虚拟现实设备的透镜感知到虚拟环境时,人眼所具有的视角的分布范围;再例如,对于设置有摄像头的移动终端来说,摄像头的视场角为摄像头感知真实环境进行拍摄时,所具有的视角的分布范围。Figure 2 shows a schematic diagram of the virtual field of view of a virtual reality device provided by some embodiments of the present disclosure. A horizontal field-of-view angle and a vertical field-of-view angle are used to describe the distribution range of the virtual field of view in the virtual environment: the vertical distribution range is represented by the vertical field-of-view angle BOC, and the horizontal distribution range by the horizontal field-of-view angle AOB. Through the lenses, the human eye can always perceive the image of the virtual environment located in the virtual field of view. It can be understood that the larger the field-of-view angle, the larger the virtual field of view, and the larger the region of the virtual environment the user can perceive. Here, the field-of-view angle represents the distribution range of the viewing angle when the environment is perceived through a lens. For example, the field-of-view angle of a virtual reality device represents the distribution range of the viewing angle of the human eye when the virtual environment is perceived through the lenses of the virtual reality device; as another example, for a mobile terminal equipped with a camera, the field-of-view angle of the camera is the distribution range of the viewing angle when the camera perceives and captures the real environment.
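As a numeric illustration of the relationship just described (a larger field-of-view angle yields a larger perceivable region), the linear extent of the field of view at a viewing distance d can be approximated as 2·d·tan(θ/2). The sketch below is illustrative only; the distance parameter and function name are assumptions added for this example and are not part of the disclosure.

```python
import math

# Illustrative sketch: linear extent of the field of view at viewing
# distance d for a field-of-view angle fov_deg. The distance parameter is
# an assumption added for illustration, not part of the disclosure.
def field_extent(fov_deg: float, d: float) -> float:
    return 2.0 * d * math.tan(math.radians(fov_deg) / 2.0)

w90 = field_extent(90.0, 1.0)    # e.g. a horizontal angle AOB of 90 degrees
w110 = field_extent(110.0, 1.0)  # a wider horizontal angle
assert w110 > w90                # larger angle -> larger perceivable region
assert abs(w90 - 2.0) < 1e-9     # tan(45 deg) = 1, so the extent is 2*d
```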
虚拟现实设备,例如HMD集成有若干的相机(例如深度相机、RGB相机等),相机的目的不仅仅限于提供直通视图。相机图像和集成的惯性测量单元(IMU)提供可通过计算机视觉方法处理以自动分析和理解环境的数据。还有,HMD被设计成不仅支持无源计算机视觉分析,而且还支持有源计算机视觉分析。无源计算机视觉方法分析从环境中捕获的图像信息。这些方法可为单视场的(来自单个相机的图像)或体视的(来自两个相机的图像)。它们包括但不限于特征跟踪、对象识别和深度估计。有源计算机视觉方法通过投影对于相机可见但不一定对人视觉系统可见的图案来将信息添加到环境。此类技术包括飞行时间(ToF)相机、激光扫描或结构光,以简化立体匹配问题。有源计算机视觉用于实现场景深度重构。A virtual reality device such as an HMD integrates several cameras (e.g., depth cameras, RGB cameras, etc.), and the purpose of the cameras is not limited to providing a pass-through view. The camera images and an integrated inertial measurement unit (IMU) provide data that can be processed by computer vision methods to automatically analyze and understand the environment. Moreover, the HMD is designed to support not only passive but also active computer vision analysis. Passive computer vision methods analyze image information captured from the environment. These methods can be monoscopic (images from a single camera) or stereoscopic (images from two cameras). They include, but are not limited to, feature tracking, object recognition, and depth estimation. Active computer vision methods add information to the environment by projecting patterns that are visible to the camera but not necessarily to the human visual system. Such technologies include time-of-flight (ToF) cameras, laser scanning, or structured light to simplify the stereo matching problem. Active computer vision is used to achieve scene depth reconstruction.
在一些实施例中,所述虚拟现实空间包括虚拟直播空间。在虚拟直播空间中,表演者用户可以以虚拟形象或真实影像进行直播,观众用户可以控制虚拟角色以诸如第一人称视角或第三人称视角等观看视角,观看表演者的直播。In some embodiments, the virtual reality space includes a virtual live broadcast space. In the virtual live broadcast space, performer users can live broadcast with virtual images or real images, and audience users can control virtual characters to watch the performers' live broadcast from viewing angles such as first-person perspective or third-person perspective.
在一些实施例中,可以获取视频流,并基于视频流在虚拟现实空间中呈现视频内容。示例性地,视频流可以采用H.265、H.264、MPEG-4等编码格式。In some embodiments, a video stream may be obtained and video content may be presented in a virtual reality space based on the video stream. For example, the video stream may adopt encoding formats such as H.265, H.264, and MPEG-4.
在一些实施例中,客户端可以接收服务器发送的视频直播流,并基于该视频直播流在虚拟现实空间内显示视频直播图像。In some embodiments, the client can receive the live video stream sent by the server and display the live video image in the virtual reality space based on the live video stream.
参考图3,图3示出了本公开一些实施例提供的信息交互方法100的流程图,方法100包括步骤S120-步骤S160。Referring to Figure 3, Figure 3 shows a flow chart of an information interaction method 100 provided by some embodiments of the present disclosure. The method 100 includes steps S120 to S160.
步骤S120:确定连续发送的两个以上目标消息中首个发送的目标消息的移动终点。Step S120: Determine the moving end point of the first target message sent among two or more target messages sent continuously.
在一些实施例中,目标消息包括但不限于文本消息(例如评论、弹幕)、图像消息(例如表情emoji、图片、虚拟物品等)。In some embodiments, target messages include but are not limited to text messages (such as comments, barrages), image messages (such as emojis, pictures, virtual items, etc.).
在一些实施例中,目标消息可以为用户所编辑的自定义消息、用户通过消息发送操作选中的系统提供的消息、消息发送操作所关联的消息、或响应于消息发送操作系统随机分配的消息。In some embodiments, the target message may be a custom message edited by the user, a system-provided message selected by the user through a messaging operation, a message associated with a messaging operation, or a message randomly assigned by the operating system in response to the messaging operation.
消息发送操作包括但不限于体感控制操作、手势控制操作、眼球晃动操作、触控操作、语音控制指令、或对外接控制设备的操作(例如按键操作)。Message sending operations include but are not limited to somatosensory control operations, gesture control operations, eye movement operations, touch operations, voice control instructions, or operations on external control devices (such as button operations).
在一些实施例中,用户可以通过预设的操作唤起消息编辑界面,从消息编辑界面中选择候选的目标消息、或编辑自定义的目标消息,发送目标消息,并在虚拟现实空间内显示当前用户发送的目标消息。In some embodiments, the user can invoke a message editing interface through a preset operation, select a candidate target message from the message editing interface or edit a customized target message, send the target message, and have the target message sent by the current user displayed in the virtual reality space.
在一些实施例中,用户可以从虚拟现实空间中所显示的消息编辑界面选择已有的候选目标消息、或编辑自定义的目标消息,并发送目标消息,并在目标消息显示空间内显示当前用户发送的目标消息。其中,目标消息显示空间为虚拟现实空间中,用于显示目标消息的区域。示例性地,消息编辑界面可以预先显示于虚拟现实空间,或可以基于预设的操作唤出。消息编辑界面可以用于编辑目标消息,或用于直接显示预设的一个或多个候选的目标消息供用户直接选中,例如,消息编辑界面可以为消息面板(例如表情面板)。In some embodiments, the user can select an existing candidate target message from a message editing interface displayed in the virtual reality space, or edit a customized target message, send the target message, and have the target message sent by the current user displayed in the target message display space. Here, the target message display space is a region of the virtual reality space used to display target messages. Illustratively, the message editing interface may be displayed in the virtual reality space in advance, or may be called up by a preset operation. The message editing interface can be used to edit a target message, or to directly display one or more preset candidate target messages for the user to select directly; for example, the message editing interface can be a message panel (such as an emoticon panel).
在一些实施例中,消息编辑界面可以为虚拟现实空间中预设的用于显示一个或多个候选目标消息的区域。In some embodiments, the message editing interface may be a preset area in the virtual reality space for displaying one or more candidate target messages.
在另一些实施例中,消息发送操作可以包括用户针对虚拟现实控制设备的预设操作,例如触发虚拟现实控制设备(例如手柄)的预设按键。示例性地,该预设按键可以与预设的目标消息关联,当用户触发该预设按键时,则可以发送该目标消息;或者,当用户触发该预设按键时,系统为该次触发随机分配一个目标消息。In other embodiments, the message sending operation may include a preset operation by the user on a virtual reality control device, such as triggering a preset button of the virtual reality control device (e.g., a handle). Illustratively, the preset button may be associated with a preset target message, so that when the user triggers the preset button, that target message is sent; alternatively, when the user triggers the preset button, the system randomly assigns a target message for that trigger.
在一些实施例中,用户可以连续触发N个消息发送操作来连续发送N个目标消息,且相邻两个消息发送操作之间的间隔时间不超过预设时间间隔。示例性地,用户可以连续触发N次虚拟现实控制设备(例如手柄)的预设按键,或连续点击N次消息面板显示的候选目标消息,且使每次触发/点击之间的间隔时间不超过预设时间间隔,则可以连续发送N个目标消息。In some embodiments, the user can trigger N message sending operations in succession to send N target messages continuously, with the interval between two adjacent message sending operations not exceeding a preset time interval. Illustratively, the user can trigger a preset button of the virtual reality control device (e.g., a handle) N times in succession, or click a candidate target message displayed on the message panel N times in succession, with the interval between successive triggers/clicks not exceeding the preset time interval, so that N target messages are sent continuously.
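The timing rule just described (each trigger within the preset interval of the previous one extends the burst of continuously sent messages) can be sketched as follows. This is a minimal illustration: the function name, list-of-timestamps representation, and the 1.0-second threshold are assumptions for the sketch, not values fixed by the disclosure.

```python
from typing import List

PRESET_INTERVAL = 1.0  # seconds; an assumed value for illustration


def group_bursts(timestamps: List[float],
                 interval: float = PRESET_INTERVAL) -> List[List[float]]:
    """Group send timestamps into bursts: a send joins the current burst
    when its gap to the previous send does not exceed the interval."""
    bursts: List[List[float]] = []
    for t in timestamps:
        if bursts and t - bursts[-1][-1] <= interval:
            bursts[-1].append(t)   # within the preset interval: same burst
        else:
            bursts.append([t])     # gap too large: a new burst begins
    return bursts


# Four sends where the 2nd, 3rd, and 4th are each within 1 s of the
# previous send: the last three form one burst of three target messages.
assert group_bursts([0.0, 2.0, 2.5, 3.2]) == [[0.0], [2.0, 2.5, 3.2]]
```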
步骤S140:基于所述首个发送的目标消息的移动终点确定所述两个以上目标消息中其余目标消息的移动终点。Step S140: Determine the moving end points of the remaining target messages in the two or more target messages based on the moving end point of the first sent target message.
所述两个以上目标消息中其余目标消息为所述首个发送的目标消息以外的目标消息。The remaining target messages among the two or more target messages are target messages other than the first sent target message.
在一些实施例中,假设用户依次先后触发了消息发送操作A、B、C和D,其中消息发送操作A、B之间的时间间隔大于预设时间间隔,消息发送操作B、C之间以及C、D之间的时间间隔均不大于预设时间间隔,则可以将消息发送操作B、C、D确定为用于连续发送两个以上目标消息的操作,将消息发送操作B所发送的目标消息b作为所述首个发送的目标消息,确定该目标消息b的移动终点,并基于所确定的目标消息b的移动终点来确定消息发送操作C、D所发送的目标消息c、d的移动终点。In some embodiments, suppose the user triggers message sending operations A, B, C, and D in sequence, where the time interval between operations A and B is greater than the preset time interval, while the time intervals between operations B and C and between operations C and D are both no greater than the preset time interval. Then message sending operations B, C, and D can be determined to be operations for continuously sending two or more target messages: the target message b sent by operation B is taken as the first sent target message, the moving end point of target message b is determined, and the moving end points of the target messages c and d sent by operations C and D are determined based on the determined moving end point of target message b.
在一些实施例中,当用户触发了第一消息发送操作,若确定该第一消息发送操作距上一次消息发送操作的时间间隔超过了预设时间间隔,则响应于该第一消息发送操作,确定该第一消息发送操作所发送的第一目标消息的移动终点;若用户在触发了第一消息发送操作后的预设时间间隔内触发了第二消息发送操作,则基于第一目标消息的移动终点确定第二消息发送操作所发送的第二目标消息的移动终点;类似地,若用户在触发了第二消息发送操作后的预设时间间隔内触发了第三消息发送操作,则基于第一目标消息的移动终点确定第三消息发送操作所发送的第三目标消息的移动终点。In some embodiments, when the user triggers a first message sending operation, if it is determined that the time interval between this first message sending operation and the previous message sending operation exceeds the preset time interval, then, in response to the first message sending operation, the moving end point of the first target message sent by the first message sending operation is determined. If the user triggers a second message sending operation within the preset time interval after triggering the first message sending operation, the moving end point of the second target message sent by the second message sending operation is determined based on the moving end point of the first target message. Similarly, if the user triggers a third message sending operation within the preset time interval after triggering the second message sending operation, the moving end point of the third target message sent by the third message sending operation is determined based on the moving end point of the first target message.
在另一些实施例中,用户还可以通过预设消息连发指令,直接连续发送预设数量的目标消息。In other embodiments, the user can also directly send a preset number of target messages continuously through a preset message burst instruction.
步骤S160:基于所确定的目标消息的移动终点发送对应的目标消息。Step S160: Send the corresponding target message based on the determined mobile destination of the target message.
在一些实施例中,目标消息在虚拟现实空间内可以朝向对应的移动终点进行移动。例如,所述连续发送的两个以上目标消息中,首个发送的目标消息朝向步骤S120确定的移动终点进行移动,其余目标消息朝向步骤S140确定的移动终点进行移动。In some embodiments, the target message can move toward the corresponding mobile end point in the virtual reality space. For example, among the two or more target messages sent continuously, the first target message sent moves toward the moving end point determined in step S120, and the remaining target messages move toward the moving end point determined in step S140.
需要说明的是,目标消息的移动起点可以位于虚拟现实空间内任意的位置、或位于预设的特定位置、或基于对应的消息发送操作所确定的位置,本公开在此不做限制。此外,目标消息的移动路径可以为直线、曲线或其他形状,本公开在此亦不做限制。It should be noted that the moving starting point of the target message can be located at any location in the virtual reality space, or at a preset specific location, or at a location determined based on the corresponding message sending operation. This disclosure is not limited here. In addition, the moving path of the target message can be a straight line, a curve, or other shapes, and the disclosure is not limited here.
根据本公开的一个或多个实施例,通过基于该首个发送的目标消息的移动终点确定所述两个以上目标消息中其余目标消息的移动终点,可以平衡连发消息的多样性和一致性,以便于连发消息被识别与区分,还可以提升连发消息移动终点的确定效率。According to one or more embodiments of the present disclosure, by determining the moving end points of the remaining target messages among the two or more target messages based on the moving end point of the first sent target message, the diversity and consistency of continuously sent messages can be balanced, so that the continuously sent messages are easy to identify and distinguish, and the efficiency of determining the moving end points of the continuously sent messages can also be improved.
在一些实施例中,可以为连续发送的N个目标消息中的首个发送的目标消息随机地分配移动终点。示例性地,可以在虚拟现实空间中设置用于展示目标消息的目标消息显示空间,并可以在目标消息显示空间中随机确定所述首个发送的目标消息的移动终点。In some embodiments, the mobile destination may be randomly assigned to the first target message sent among the N target messages sent consecutively. For example, a target message display space for displaying the target message can be set in the virtual reality space, and the moving end point of the first sent target message can be randomly determined in the target message display space.
在一些实施例中,所述其余目标消息的移动终点与所述首个发送的目标消息的移动终点之间具有预设的位置关系。示例性地,其余目标消息的移动终点与所述首个发送的目标消息的移动终点之间的距离可以不超过预设阈值,或者其余目标消息的移动终点可以位于以所述首个发送的目标消息的移动终点为中心的预设区域内(例如圆形区域、正方形区域、球形区域、正方体区域等)。In some embodiments, there is a preset positional relationship between the moving end points of the remaining target messages and the moving end point of the first sent target message. Illustratively, the distance between the moving end point of each remaining target message and the moving end point of the first sent target message may not exceed a preset threshold, or the moving end points of the remaining target messages may be located within a preset area centered on the moving end point of the first sent target message (for example, a circular area, a square area, a spherical area, a cubic area, etc.).
在一些实施例中,在以所述首个发送的目标消息的移动终点为中心的预设区域内,随机确定所述其余目标消息的移动终点。这样,可以将连发消息集中在首个连发消息的周围随机显示,可以进一步平衡连发消息的多样性和一致性,且可以呈现出连发消息随机散落的视觉效果。In some embodiments, the moving end points of the remaining target messages are randomly determined within a preset area centered on the moving end point of the first sent target message. In this way, the continuous messages can be concentrated and randomly displayed around the first continuous message, which can further balance the diversity and consistency of the continuous messages, and can present the visual effect of the continuous messages being randomly scattered.
示例性地,可以在以所述首个发送的目标消息的移动终点为球心、半径为预设长度的球体内,随机地为所述其余目标消息分配对应的移动终点。Illustratively, corresponding moving end points may be randomly assigned to the remaining target messages within a sphere whose center is the moving end point of the first sent target message and whose radius is a preset length.
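One possible way to realize the random assignment just described, i.e., a uniform random end point inside a sphere of preset radius centered on the first message's moving end point, is sketched below. The function name and the Gaussian-direction sampling scheme are illustrative assumptions, not part of the disclosure.

```python
import math
import random


def random_endpoint_near(center, radius, rng=random):
    """Sample a point uniformly inside a sphere of the given radius around
    `center` (illustrative sketch; names are assumptions)."""
    # A normalized Gaussian vector gives a uniform direction; cube-root
    # radius scaling keeps the distribution uniform over the volume.
    while True:
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        if n > 1e-12:
            break
    r = radius * rng.random() ** (1.0 / 3.0)
    return tuple(c0 + r * c / n for c0, c in zip(center, v))


first_end = (1.0, 2.0, 3.0)            # end point of the first sent message
follow_up = random_endpoint_near(first_end, 0.5)
assert math.dist(first_end, follow_up) <= 0.5 + 1e-9  # inside the sphere
```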
在一些实施例中,方法100还包括: In some embodiments, method 100 further includes:
步骤S170:在所述两个以上目标消息中的第一目标消息移动至所确定的移动终点后,在所述第一目标消息的预设位置处显示用于表示所述第一目标消息在所述两个以上目标消息中的发送次序的次序标识。Step S170: After a first target message among the two or more target messages moves to its determined moving end point, display, at a preset position of the first target message, an order identifier indicating the sending order of the first target message among the two or more target messages.
第一目标消息可以是所述两个以上目标消息中任意一个目标消息。The first target message may be any one of the two or more target messages.
在本实施例中,通过在所述第一目标消息的预设位置处显示用于表示所述第一目标消息在所述两个以上目标消息中的发送次序的次序标识,可以依次显示连发信息的次序和数量,以易于连发消息被识别。In this embodiment, by displaying, at a preset position of the first target message, an order identifier indicating the sending order of the first target message among the two or more target messages, the order and quantity of the continuously sent messages can be displayed in sequence, so that the continuously sent messages are easy to identify.
示例性地,次序标识可以位于对应的第一目标消息的上、下、左、右侧等,但本公开不限于此。For example, the sequence identifier may be located above, below, left, right, etc. of the corresponding first target message, but the disclosure is not limited thereto.
在一些实施例中,所述次序标识的持续显示时长不超过所述第一目标消息在所述移动终点位置处的持续显示时长。在本实施例中,通过使次序标识的持续显示时长不超过所述第一目标消息在所述移动终点位置处的持续显示时长,防止次序标识的持续显示时长过长而挤占连发消息的显示空间。In some embodiments, the continuous display duration of the order identifier does not exceed the continuous display duration of the first target message at the moving end position. In this embodiment, by making the continuous display duration of the order identifier not exceed the continuous display duration of the first target message at the moving end position, the order identifier is prevented from being displayed for too long and crowding out the display space of the continuously sent messages.
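The duration constraint just stated amounts to a simple clamp of the order identifier's display time against that of its message. The function and parameter names below are assumptions for illustration.

```python
def order_badge_duration(message_duration: float, desired: float) -> float:
    """Clamp the order identifier's display time so it never outlives the
    message it annotates (illustrative sketch; names are assumptions)."""
    return min(desired, message_duration)


assert order_badge_duration(3.0, 5.0) == 3.0  # badge hidden with its message
assert order_badge_duration(3.0, 2.0) == 2.0  # shorter badge times are kept
```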
在一些实施例中,方法100还包括:In some embodiments, method 100 further includes:
步骤S180:在所述目标消息的发送过程中和/或移动至所确定的移动终点后,在所述目标消息的预设位置处显示用于表示所述目标消息的发送者信息的用户标识。Step S180: During the sending process of the target message and/or after moving to the determined moving end point, display a user identification representing the sender information of the target message at a preset position of the target message.
在本实施例中,通过在所述目标消息的发送过程中和/或移动至所确定的移动终点后,在所述目标消息的预设位置处显示相应的用户标识,可以更直观便捷地体现目标消息的发送者。In this embodiment, by displaying the corresponding user identifier at a preset position of the target message during the sending of the target message and/or after the target message moves to its determined moving end point, the sender of the target message can be indicated more intuitively and conveniently.
示例性地,用户标识包括但不限于用户头像、用户昵称、用户ID等。Illustratively, the user identifier includes but is not limited to a user avatar, a user nickname, a user ID, and the like.
示例性地,用户标识可以位于对应的第一目标消息的上、下、左、右侧等,但本公开不限于此。For example, the user identification may be located above, below, left, right, etc. of the corresponding first target message, but the disclosure is not limited thereto.
在一些实施例中,不同类型的目标消息与所述用户标识之间具有不同的预设位置关系。示例性地,如果目标消息为实心的或不透明物体的图像,则可以在该实心不透明物体的图像的外部显示对应的用户标识;如果目标消息为空心的或透明物体的图像,则可以在该空心的或透明物体的图像的内部显示对应的用户标识。In some embodiments, different types of target messages have different preset positional relationships with the user identifier. Illustratively, if the target message is an image of a solid or opaque object, the corresponding user identifier can be displayed outside the image of the solid, opaque object; if the target message is an image of a hollow or transparent object, the corresponding user identifier can be displayed inside the image of the hollow or transparent object.
在本实施例中,通过根据目标消息的可视化类型设置其与相应的用户标识的位置关系,可以提升用户标识与目标消息的显示多样性和显示契合度,从而提升用户体验。In this embodiment, by setting the positional relationship between the target message and the corresponding user identification according to the visualization type, the display diversity and display fit between the user identification and the target message can be improved, thereby improving the user experience.
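The type-dependent placement rule in the example above (badge outside solid/opaque message images, inside hollow/transparent ones) could be encoded as a small mapping. The type names and return values below are assumptions drawn from the example, not terms defined by the disclosure.

```python
def badge_placement(message_type: str) -> str:
    """Return where the user badge is placed relative to the message image
    (sketch of the example rule above; type names are assumptions)."""
    return "outside" if message_type in {"solid", "opaque"} else "inside"


assert badge_placement("opaque") == "outside"       # solid/opaque image
assert badge_placement("transparent") == "inside"   # hollow/transparent image
```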
参见图4-5，虚拟现实空间10包括视频图像显示空间20、目标消息显示空间40。视频图像显示空间20可以用于显示视频图像，例如直播图像。虚拟现实空间还包括消息编辑界面30，其显示有多个候选的目标表情供用户选择。当用户选中目标表情421时，则目标表情421可以以消息编辑界面30上的一位置为起点，移动至目标消息显示空间中。若用户连续触发了3次消息编辑界面30中显示的爱心形表情，则目标表情411、412和413可以以消息编辑界面30上的一位置为起点，接连沿着曲线路径移动至目标消息显示空间40中。目标表情412和413最终移动至以目标表情411为球心的球体区域内。目标表情411、412和413的右侧分别对应显示有次序标识“×1”“×2”和“×3”，分别表示目标表情411为连发表情中首个发送的表情，目标表情412为连发表情中第二个发送的表情，目标表情413为第三个发送的表情。每个目标表情的下方显示有发送者的用户标识“Tom”。Referring to Figures 4-5, the virtual reality space 10 includes a video image display space 20 and a target message display space 40. The video image display space 20 may be used to display video images, such as live broadcast images. The virtual reality space also includes a message editing interface 30, which displays a plurality of candidate target expressions for the user to select. When the user selects the target expression 421, the target expression 421 can move into the target message display space starting from a position on the message editing interface 30. If the user triggers the heart-shaped expression displayed in the message editing interface 30 three times in succession, the target expressions 411, 412 and 413 can start from a position on the message editing interface 30 and move one after another along curved paths into the target message display space 40. The target expressions 412 and 413 finally move into a spherical area centered on the target expression 411. Sequence identifiers "×1", "×2" and "×3" are correspondingly displayed to the right of the target expressions 411, 412 and 413, respectively indicating that the target expression 411 is the first expression sent in the burst, the target expression 412 is the second expression sent in the burst, and the target expression 413 is the third expression sent. The sender's user identification "Tom" is displayed below each target expression.
在一些实施例中,方法100还包括:In some embodiments, method 100 further includes:
步骤S191:在所述目标消息移动至对应的移动终点的过程中,显示所述目标消息关联的第一特效;和/或Step S191: During the process of moving the target message to the corresponding moving end point, display the first special effect associated with the target message; and/or
步骤S192:在所述目标消息在移动至对应移动终点后,显示所述目标消息关联的第二特效。Step S192: After the target message moves to the corresponding moving end point, display the second special effect associated with the target message.
示例性地,第一特效可以为目标消息的旋转特效,第二特效为目标消息的形变特效,但本公开不限于此。For example, the first special effect may be a rotation special effect of the target message, and the second special effect may be a deformation special effect of the target message, but the disclosure is not limited thereto.
在本实施例中，通过在所述目标消息移动至对应的移动终点的过程中，显示所述目标消息关联的第一特效，并在所述目标消息在移动至对应移动终点后，显示所述目标消息关联的第二特效，从而可以丰富目标消息发送过程的展现形式，提升用户体验。In this embodiment, the first special effect associated with the target message is displayed while the target message is moving to the corresponding moving end point, and/or the second special effect associated with the target message is displayed after the target message moves to the corresponding moving end point, which can enrich the display form of the target message sending process and improve the user experience.
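Exemplarily, the two-phase effect selection described above may be sketched as follows, assuming a normalized movement progress value; the rotation and deformation effect names follow the examples given and are illustrative only:

```python
def message_effect(progress):
    """Select the special effect for a target message based on its movement
    progress toward the moving end point (0.0 = just sent, 1.0 = arrived).

    Effect names are hypothetical, matching the examples in the text.
    """
    if progress < 1.0:
        return "rotation"      # first special effect, shown while moving
    return "deformation"       # second special effect, shown after arrival
```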
相应地,根据本公开一些实施例提供了一种信息交互装置,包括:Correspondingly, some embodiments of the present disclosure provide an information interaction device, including:
第一终点确定单元,用于确定连续发送的两个以上目标消息中首个发送的目标消息的移动终点;The first end point determination unit is used to determine the moving end point of the first target message sent among more than two target messages sent continuously;
第二终点确定单元,用于基于所述首个发送的目标消息的移动终点确定所述两个以上目标消息中其余目标消息的移动终点;A second end point determination unit configured to determine the moving end points of the remaining target messages in the two or more target messages based on the moving end point of the first sent target message;
消息显示单元，用于基于所确定的目标消息的移动终点发送对应的目标消息。The message display unit is configured to send the corresponding target message based on the determined moving end point of the target message.
在一些实施例中,确定所述首个发送的目标消息的移动终点,包括:在虚拟现实空间中设置的目标消息显示空间中,随机确定所述首个发送的目标消息的移动终点。In some embodiments, determining the moving end point of the first sent target message includes: randomly determining the moving end point of the first sent target message in a target message display space set in the virtual reality space.
在一些实施例中,所述其余目标消息的移动终点与所述首个发送的目标消息的移动终点之间具有预设的位置关系。In some embodiments, there is a preset positional relationship between the moving end points of the remaining target messages and the moving end point of the first sent target message.
在一些实施例中，所述消息显示单元用于在以所述首个发送的目标消息的移动终点为中心的预设区域内，随机确定所述其余目标消息的移动终点。In some embodiments, the message display unit is configured to randomly determine the moving end points of the remaining target messages within a preset area centered on the moving end point of the first sent target message.
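Exemplarily, the endpoint determination described above (a random endpoint for the first message within the display space, then random endpoints for the remaining messages within a preset region centered on it) may be sketched as follows. The spherical shape of the preset region and the (x, y, z) coordinate representation are assumptions for illustration, not limitations of the disclosure:

```python
import math
import random

def random_endpoint_in_sphere(center, radius):
    """Randomly determine a moving end point within a preset spherical
    region centered on the first message's moving end point.

    `center` is an (x, y, z) tuple; `radius` is the assumed region size.
    """
    # Rejection sampling: draw points in the bounding cube until one
    # falls inside the sphere, then offset it from the center.
    while True:
        offset = tuple(random.uniform(-radius, radius) for _ in range(3))
        if math.sqrt(sum(c * c for c in offset)) <= radius:
            return tuple(c + o for c, o in zip(center, offset))

def burst_endpoints(display_space_bounds, count, radius):
    """End point of the first sent message is random within the target
    message display space; the remaining messages land within the preset
    spherical region around it."""
    lo, hi = display_space_bounds
    first = tuple(random.uniform(lo[i], hi[i]) for i in range(3))
    rest = [random_endpoint_in_sphere(first, radius) for _ in range(count - 1)]
    return [first] + rest
```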
在一些实施例中,信息交互装置还包括:In some embodiments, the information interaction device further includes:
第一标识单元,用于在所述两个以上目标消息中的第一目标消息移动至所确定的移动终点后,在所述第一目标消息的预设位置处显示用于表示所述第一目标消息在所述两个以上目标消息中的发送次序的次序标识。A first identification unit, configured to display the first target message at a preset position of the first target message to indicate the first target message after the first target message among the two or more target messages moves to the determined moving end point. The sequence identifier of the sending order of the target message among the two or more target messages.
在一些实施例中,所述次序标识的持续显示时长不超过所述第一目标消息在所述移动终点位置处的持续显示时长。In some embodiments, the continuous display duration of the sequence identification does not exceed the continuous display duration of the first target message at the moving end position.
在一些实施例中,信息交互装置还包括:In some embodiments, the information interaction device further includes:
第二标识单元，用于在所述目标消息的发送过程中和/或移动至所确定的移动终点后，在所述目标消息的预设位置处显示用于表示所述目标消息的发送者信息的用户标识。A second identification unit, configured to display, at a preset position of the target message, a user identification representing the sender information of the target message during the sending process of the target message and/or after the target message moves to the determined moving end point.
在一些实施例中，不同类型的目标消息与所述用户标识之间具有不同的预设位置关系。In some embodiments, different types of target messages have different preset positional relationships with the user identification.
在一些实施例中，信息交互装置还包括：特效单元，用于在所述目标消息移动至对应的移动终点的过程中，显示所述目标消息关联的第一特效；和/或在所述目标消息在移动至对应移动终点后，显示所述目标消息关联的第二特效。In some embodiments, the information interaction device further includes: a special effect unit, configured to display the first special effect associated with the target message while the target message is moving to the corresponding moving end point; and/or display the second special effect associated with the target message after the target message moves to the corresponding moving end point.
对于装置的实施例而言,由于其基本对应于方法实施例,所以相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中作为分离模块说明的模块可以是或者也可以不是分开的。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。As for the device embodiment, since it basically corresponds to the method embodiment, please refer to the partial description of the method embodiment for relevant details. The device embodiments described above are only illustrative, and the modules described as separate modules may or may not be separate. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment. Persons of ordinary skill in the art can understand and implement the method without any creative effort.
相应地,根据本公开的一个或多个实施例,提供了一种电子设备,包括:Accordingly, according to one or more embodiments of the present disclosure, an electronic device is provided, including:
至少一个存储器和至少一个处理器;at least one memory and at least one processor;
其中,存储器用于存储程序代码,处理器用于调用存储器所存储的程序代码以使所述电子设备执行根据本公开一个或多个实施例提供的信息交互方法。The memory is used to store program codes, and the processor is used to call the program codes stored in the memory to cause the electronic device to execute the information interaction method provided according to one or more embodiments of the present disclosure.
相应地，根据本公开的一个或多个实施例，提供了一种非暂态计算机存储介质，非暂态计算机存储介质存储有程序代码，程序代码可被计算机设备执行来使得所述计算机设备执行根据本公开一个或多个实施例提供的信息交互方法。Accordingly, according to one or more embodiments of the present disclosure, a non-transitory computer storage medium is provided; the non-transitory computer storage medium stores program code, and the program code can be executed by a computer device to cause the computer device to execute the information interaction method provided according to one or more embodiments of the present disclosure.
下面参考图6，其示出了适于用来实现本公开实施例的电子设备（例如终端设备或服务器）800的结构示意图。本公开实施例中的终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA（个人数字助理）、PAD（平板电脑）、PMP（便携式多媒体播放器）、车载终端（例如车载导航终端）等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。图6示出的电子设备仅仅是一个示例，不应对本公开实施例的功能和使用范围带来任何限制。Referring now to FIG. 6, a schematic structural diagram of an electronic device (e.g., a terminal device or a server) 800 suitable for implementing embodiments of the present disclosure is shown. Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
如图6所示，电子设备800可以包括处理装置（例如中央处理器、图形处理器等）801，其可以根据存储在只读存储器（ROM）802中的程序或者从存储装置808加载到随机访问存储器（RAM）803中的程序而执行各种适当的动作和处理。在RAM803中，还存储有电子设备800操作所需的各种程序和数据。处理装置801、ROM 802以及RAM 803通过总线804彼此相连。输入/输出（I/O）接口805也连接至总线804。As shown in FIG. 6, the electronic device 800 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 801, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage device 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the electronic device 800. The processing device 801, the ROM 802 and the RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
通常，以下装置可以连接至I/O接口805：包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置806；包括例如液晶显示器（LCD）、扬声器、振动器等的输出装置807；包括例如磁带、硬盘等的存储装置808；以及通信装置809。通信装置809可以允许电子设备800与其他设备进行无线或有线通信以交换数据。虽然图6示出了具有各种装置的电子设备800，但是应理解的是，并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。Generally, the following devices may be connected to the I/O interface 805: an input device 806 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 807 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 808 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 809. The communication device 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 6 illustrates the electronic device 800 with various devices, it should be understood that it is not required to implement or provide all of the illustrated devices. More or fewer devices may alternatively be implemented or provided.
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置809从网络上被下载和安装,或者从存储装置808被安装,或者从ROM 802被安装。在该计算机程序被处理装置801执行时,执行本公开实施例的方法中限定的上述功能。In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product including a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such embodiments, the computer program may be downloaded and installed from the network via communication device 809, or from storage device 808, or from ROM 802. When the computer program is executed by the processing device 801, the above-mentioned functions defined in the method of the embodiment of the present disclosure are performed.
需要说明的是，本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件，或者任意以上的组合。计算机可读存储介质的例子可以包括但不限于：具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器（RAM）、只读存储器（ROM）、可擦式可编程只读存储器（EPROM或闪存）、光纤、便携式紧凑磁盘只读存储器（CD-ROM）、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中，计算机可读存储介质可以是任何包含或存储程序的有形介质，该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中，计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号，其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式，包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质，该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输，包括但不限于：电线、光缆、RF（射频）等等，或者上述的任意合适的组合。It should be noted that the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof. Examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to: an electrical wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
在一些实施方式中，客户端、服务器可以利用诸如HTTP（HyperText Transfer Protocol，超文本传输协议）之类的任何当前已知或未来研发的网络协议进行通信，并且可以与任意形式或介质的数字数据通信（例如，通信网络）互连。通信网络的示例包括局域网（“LAN”），广域网（“WAN”），网际网（例如，互联网）以及端对端网络（例如，ad hoc端对端网络），以及任何当前已知或未来研发的网络。In some embodiments, the client and the server can communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed network.
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。The above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备执行上述的本公开的方法。The above-mentioned computer-readable medium carries one or more programs. When the above-mentioned one or more programs are executed by the electronic device, the electronic device is caused to perform the above-mentioned method of the present disclosure.
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码，上述程序设计语言包括面向对象的程序设计语言—诸如Java、Smalltalk、C++，还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中，远程计算机可以通过任意种类的网络——包括局域网（LAN）或广域网（WAN）—连接到用户计算机，或者，可以连接到外部计算机（例如利用因特网服务提供商来通过因特网连接）。Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
附图中的流程图和框图，图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上，流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分，该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意，在有些作为替换的实现中，方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如，两个接连地表示的方框实际上可以基本并行地执行，它们有时也可以按相反的顺序执行，这依所涉及的功能而定。也要注意的是，框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合，可以用执行规定的功能或操作的专用的基于硬件的系统来实现，或者可以用专用硬件与计算机指令的组合来实现。The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operations of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现，也可以通过硬件的方式来实现。其中，单元的名称在某种情况下并不构成对该单元本身的限定。The units involved in the embodiments of the present disclosure can be implemented in software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如，非限制性地，可以使用的示范类型的硬件逻辑部件包括：现场可编程门阵列（FPGA）、专用集成电路（ASIC）、专用标准产品（ASSP）、片上系统（SOC）、复杂可编程逻辑设备（CPLD）等等。The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
在本公开的上下文中，机器可读介质可以是有形的介质，其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备，或者上述内容的任何合适组合。机器可读存储介质的示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器（RAM）、只读存储器（ROM）、可擦除可编程只读存储器（EPROM或快闪存储器）、光纤、便捷式紧凑盘只读存储器（CD-ROM）、光学储存设备、磁储存设备、或上述内容的任何合适组合。In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses or devices, or any suitable combination of the foregoing. Examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
根据本公开的一个或多个实施例，提供了一种信息交互方法，包括：确定连续发送的两个以上目标消息中首个发送的目标消息的移动终点；基于所述首个发送的目标消息的移动终点确定所述两个以上目标消息中其余目标消息的移动终点；基于所确定的目标消息的移动终点发送对应的目标消息。According to one or more embodiments of the present disclosure, an information interaction method is provided, including: determining the moving end point of the first sent target message among two or more target messages sent continuously; determining the moving end points of the remaining target messages among the two or more target messages based on the moving end point of the first sent target message; and sending the corresponding target message based on the determined moving end point of the target message.
根据本公开的一个或多个实施例，确定所述首个发送的目标消息的移动终点，包括：在虚拟现实空间中设置的目标消息显示空间中，随机确定所述首个发送的目标消息的移动终点。According to one or more embodiments of the present disclosure, determining the moving end point of the first sent target message includes: randomly determining the moving end point of the first sent target message in a target message display space set in the virtual reality space.
根据本公开的一个或多个实施例，所述其余目标消息的移动终点与所述首个发送的目标消息的移动终点之间具有预设的位置关系。According to one or more embodiments of the present disclosure, there is a preset positional relationship between the moving end points of the remaining target messages and the moving end point of the first sent target message.
根据本公开的一个或多个实施例，所述基于所述首个发送的目标消息的移动终点确定所述两个以上目标消息中其余目标消息的移动终点，包括：在以所述首个发送的目标消息的移动终点为中心的预设区域内，随机确定所述其余目标消息的移动终点。According to one or more embodiments of the present disclosure, determining the moving end points of the remaining target messages among the two or more target messages based on the moving end point of the first sent target message includes: randomly determining the moving end points of the remaining target messages within a preset area centered on the moving end point of the first sent target message.
根据本公开的一个或多个实施例提供的信息交互方法，还包括：在所述两个以上目标消息中的第一目标消息移动至所确定的移动终点后，在所述第一目标消息的预设位置处显示用于表示所述第一目标消息在所述两个以上目标消息中的发送次序的次序标识。The information interaction method provided according to one or more embodiments of the present disclosure further includes: after a first target message among the two or more target messages moves to the determined moving end point, displaying, at a preset position of the first target message, a sequence identifier indicating the sending order of the first target message among the two or more target messages.
根据本公开的一个或多个实施例，所述次序标识的持续显示时长不超过所述第一目标消息在所述移动终点位置处的持续显示时长。According to one or more embodiments of the present disclosure, the continuous display duration of the sequence identifier does not exceed the continuous display duration of the first target message at the moving end position.
根据本公开的一个或多个实施例提供的信息交互方法，还包括：在所述目标消息的发送过程中和/或移动至所确定的移动终点后，在所述目标消息的预设位置处显示用于表示所述目标消息的发送者信息的用户标识。The information interaction method provided according to one or more embodiments of the present disclosure further includes: during the sending process of the target message and/or after the target message moves to the determined moving end point, displaying, at a preset position of the target message, a user identification representing the sender information of the target message.
根据本公开的一个或多个实施例，不同类型的目标消息与所述用户标识之间具有不同的预设位置关系。According to one or more embodiments of the present disclosure, different types of target messages have different preset positional relationships with the user identification.
根据本公开的一个或多个实施例提供的信息交互方法，还包括：在所述目标消息移动至对应的移动终点的过程中，显示所述目标消息关联的第一特效；和/或在所述目标消息在移动至对应移动终点后，显示所述目标消息关联的第二特效。The information interaction method provided according to one or more embodiments of the present disclosure further includes: displaying the first special effect associated with the target message while the target message is moving to the corresponding moving end point; and/or displaying the second special effect associated with the target message after the target message moves to the corresponding moving end point.
根据本公开的一个或多个实施例，提供了一种信息交互装置，包括：第一终点确定单元，用于确定连续发送的两个以上目标消息中首个发送的目标消息的移动终点；第二终点确定单元，用于基于所述首个发送的目标消息的移动终点确定所述两个以上目标消息中其余目标消息的移动终点；消息显示单元，用于基于所确定的目标消息的移动终点发送对应的目标消息。According to one or more embodiments of the present disclosure, an information interaction device is provided, including: a first end point determination unit, configured to determine the moving end point of the first sent target message among two or more target messages sent continuously; a second end point determination unit, configured to determine the moving end points of the remaining target messages among the two or more target messages based on the moving end point of the first sent target message; and a message display unit, configured to send the corresponding target message based on the determined moving end point of the target message.
根据本公开的一个或多个实施例，提供了一种电子设备，包括：至少一个存储器和至少一个处理器；其中，所述存储器用于存储程序代码，所述处理器用于调用所述存储器所存储的程序代码以使所述电子设备执行根据本公开的一个或多个实施例提供的信息交互方法。According to one or more embodiments of the present disclosure, an electronic device is provided, including: at least one memory and at least one processor; wherein the memory is used to store program code, and the processor is used to call the program code stored in the memory to cause the electronic device to execute the information interaction method provided according to one or more embodiments of the present disclosure.
根据本公开的一个或多个实施例，提供了一种非暂态计算机存储介质，所述非暂态计算机存储介质存储有程序代码，所述程序代码被计算机设备执行时，使得所述计算机设备执行根据本公开的一个或多个实施例提供的信息交互方法。According to one or more embodiments of the present disclosure, a non-transitory computer storage medium is provided; the non-transitory computer storage medium stores program code, and when the program code is executed by a computer device, the computer device is caused to execute the information interaction method provided according to one or more embodiments of the present disclosure.
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解，本公开中所涉及的公开范围，并不限于上述技术特征的特定组合而成的技术方案，同时也应涵盖在不脱离上述公开构思的情况下，由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的（但不限于）具有类似功能的技术特征进行互相替换而形成的技术方案。The above description is only a description of the preferred embodiments of the present disclosure and the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features with similar functions disclosed in the present disclosure (but not limited thereto).
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。Furthermore, although operations are depicted in a specific order, this should not be understood as requiring that these operations be performed in the specific order shown or performed in a sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.
如前所述,在虚拟直播场景下,无法确保虚拟对象在虚拟空间内的互动趣味性。As mentioned before, in a virtual live broadcast scenario, the interactive fun of virtual objects in the virtual space cannot be ensured.
本公开提供一种人机交互方法、装置、设备和存储介质,确保虚拟空间内虚拟对象互动的直观性和趣味性,调动虚拟空间内的直播积极性。The present disclosure provides a human-computer interaction method, device, equipment and storage medium to ensure the intuitiveness and interest of virtual object interaction in a virtual space, and to mobilize the enthusiasm for live broadcast in the virtual space.
为了避免在虚拟空间内向主播投掷任一虚拟对象时，无法确保虚拟对象在虚拟空间内的互动直观性的问题，本公开的技术方案包括：在虚拟空间内预先划定一个可投掷区域。进而，在虚拟空间内投掷任一虚拟对象时，首先会确定该虚拟对象的投掷位置。然后，根据该投掷位置和已划定的可投掷区域，在虚拟空间内呈现该虚拟对象的投掷特效，从而确保虚拟空间内投掷虚拟对象的直观性和准确性，增强虚拟空间内虚拟对象的互动趣味性和用户互动氛围。In order to avoid the problem that the intuitiveness of virtual object interaction in the virtual space cannot be ensured when any virtual object is thrown to the anchor in the virtual space, the technical solution of the present disclosure includes: pre-defining a throwable area in the virtual space. Furthermore, when any virtual object is thrown in the virtual space, the throwing position of the virtual object is first determined. Then, based on the throwing position and the demarcated throwable area, the throwing special effect of the virtual object is presented in the virtual space, thereby ensuring the intuitiveness and accuracy of throwing the virtual object in the virtual space, and enhancing the interactive interest of virtual objects in the virtual space and the user interaction atmosphere.
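Exemplarily, the position check against the pre-defined throwable area described above may look like the following sketch. The axis-aligned box representation of the throwable area and the returned effect names are assumptions for illustration, not limitations of the disclosure:

```python
def throw_effect(throw_position, throwable_area):
    """Decide which throwing special effect to present, based on whether
    the virtual object's throwing position falls inside the pre-defined
    throwable area of the virtual space.

    `throwable_area` is assumed to be an axis-aligned box given as
    (min_corner, max_corner); effect names are hypothetical.
    """
    lo, hi = throwable_area
    inside = all(lo[i] <= throw_position[i] <= hi[i] for i in range(3))
    # Present a different throwing special effect depending on the check.
    return "in_area_effect" if inside else "out_of_area_effect"
```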
图7为本公开实施例提供的一种人机交互方法的流程图,该方法可以应用于XR设备中,但不限于此。该方法可以由本公开提供的人机交互装置来执行,其中,人机交互装置可以通过任意的软件和/或硬件的方式实现。示例性地,人机交互装置可配置于AR/VR/MR等能够模拟虚拟场景的电子设备,本公开对电子设备的具体类型不作任何限制。Figure 7 is a flow chart of a human-computer interaction method provided by an embodiment of the present disclosure. This method can be applied to XR equipment, but is not limited thereto. This method can be executed by the human-computer interaction device provided by the present disclosure, wherein the human-computer interaction device can be implemented by any software and/or hardware. For example, the human-computer interaction device can be configured in electronic equipment capable of simulating virtual scenes such as AR/VR/MR. This disclosure does not place any restrictions on the specific type of electronic equipment.
在一些实施例中,如图1所示,该方法可以包括如下步骤:In some embodiments, as shown in Figure 1, the method may include the following steps:
S710,响应于虚拟空间内任一虚拟对象的投掷操作,确定虚拟对象在虚拟空间内的投掷位置。S710. In response to the throwing operation of any virtual object in the virtual space, determine the throwing position of the virtual object in the virtual space.
本公开中,虚拟空间可以为XR设备针对任一用户选择的某一直播场景,模拟出的相应虚拟环境,以便在虚拟空间内显示相应的直播互动信息。例如,支持主播选中某一类型的直播场景,来构建相应的虚拟直播环境,作为本公开中的虚拟空间,使得各个观众进入到该虚拟空间内来实现相应的直播互动。 In this disclosure, the virtual space can be a corresponding virtual environment simulated by the XR device for a certain live broadcast scene selected by any user, so as to display the corresponding live interactive information in the virtual space. For example, the anchor is supported to select a certain type of live broadcast scene to build a corresponding virtual live broadcast environment as the virtual space in this disclosure, so that each audience can enter the virtual space to realize corresponding live broadcast interaction.
Within the virtual space, multiple virtual screens, such as a live screen, a control screen, and a public screen, may be set up for different live-broadcast functions, each displaying different live content. As shown in Figure 8, the live screen may display the anchor's live video stream so that the user can watch the corresponding live picture. The control screen may display the anchor's information, online-audience information, a list of related live-broadcast recommendations, resolution options for the current broadcast, and the like, so that the user can perform the various related live-broadcast operations. The public screen may display user comments, likes, gift-giving records, and the like for the current broadcast, so that the user can manage the current broadcast.
It should be understood that the live screen, the control screen, and the public screen all face the user and are displayed at different positions in the virtual space. Moreover, the position and style of any virtual screen can be adjusted to prevent it from blocking the other virtual screens.
For example, when the virtual object thrown by the user is a virtual gift, a corresponding gift entrance is displayed in the virtual space. When a user wants to give a virtual gift to the anchor, the gift entrance is triggered first, for example by selecting it with the controller cursor or by clicking it with the hand model. After the triggering operation on the gift entrance is detected, a corresponding gift panel is displayed in the virtual space. The gift panel presents multiple types of virtual gifts to the user so that, in some embodiments, the user can independently select a virtual gift to give to the anchor.
In the virtual space, any virtual object can be selected with the controller cursor or the hand model. The present disclosure can then detect in real time whether the user performs a corresponding throwing operation on the selected virtual object, to determine whether the virtual object needs to be thrown into the virtual space.
After the throwing operation on any virtual object in the virtual space is detected, the present disclosure first analyzes the presentation form of the virtual object when it is thrown toward the anchor, so as to determine the final position the virtual object can reach when thrown into the virtual space; this final position serves as the throwing position in the present disclosure.
S720: Based on the throwing position and the throwable area demarcated in the virtual space, present the throwing special effect of the virtual object in the virtual space.
Through the technical solution of the present disclosure, in response to a throwing operation on any virtual object in the virtual space, the throwing position of the virtual object in the virtual space is first determined. Then, based on the throwing position and the throwable area demarcated in the virtual space, the throwing special effect of the virtual object is presented in the virtual space, thereby ensuring the intuitiveness and accuracy of throwing virtual objects in the virtual space, enhancing the interactive fun of virtual objects and the user interaction atmosphere, and mobilizing users' enthusiasm for live interaction in the virtual space.
Consider that when a user faces the live screen in the virtual space to watch the anchor's live video stream, there is usually a corresponding visible area together with blind zones. There may also be virtual objects in the virtual space, such as the control screen, the public screen, and the gift panel, that block the user's view of the live video stream. Moreover, the throwing special effect in the virtual space usually needs to be presented between the user and the anchor's live video stream (that is, the live screen), so that the user can watch the effect and judge whether the virtual object was thrown successfully. If, after a virtual object is thrown in the virtual space, the user cannot see its throwing special effect, the interaction between the user and the anchor regarding that virtual object cannot be guaranteed.
Therefore, to ensure that the user can watch the interactive effect presented after any virtual object is thrown in the virtual space, the present disclosure can demarcate, in advance, a throwable area for virtual objects according to the user's position in the virtual space and the position of the live screen, so that for any virtual gift that lands within the throwable area after being thrown, the user is guaranteed to see the corresponding throwing special effect.
After the throwing position of any virtual object in the virtual space is determined, whether the throw can be executed successfully is judged by determining whether the throwing position falls within the throwable area demarcated in the virtual space. When it is determined that the virtual object can be thrown into the throwable area, the throw can be executed successfully, and the throwing special effect of the virtual object is presented in the virtual space.
As some embodiments of the present disclosure, the throwable area in the virtual space may be determined according to the user's visible area in the virtual space and a preset throwing range.
That is, the present disclosure can determine the user's visible area when facing the live screen according to the relative positional relationship between the user's position in the virtual space and the position of the live screen. Moreover, since the user's visible area is relatively large, in order to keep the special effects concentrated when a virtual object is thrown to the anchor, the present disclosure can also set a throwing range, for example an angular range of 0-170 degrees when the user faces the live screen. Then, the throwable area in the virtual space can be determined from the overlap between the user's visible area and the preset throwing range. The remaining area of the virtual space, outside the throwable area, serves as the non-throwing area.
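The angular part of this overlap rule can be sketched as a 2D sector test: a candidate point counts as inside the preset throwing range only if the direction from the user to the point stays within half the range angle of the user's facing direction. The 170-degree figure comes from the example above; the flat geometry and function names below are illustrative assumptions, and a full implementation would further intersect this sector with the occlusion-aware visible area.

```python
import math

def in_throwing_range(user_pos, facing, point, range_deg=170.0):
    """Return True if `point` lies inside the angular throwing range,
    a sector of `range_deg` degrees centred on the user's facing
    direction (top-view sketch; occlusion is ignored here)."""
    dx, dz = point[0] - user_pos[0], point[1] - user_pos[1]
    if dx == 0 and dz == 0:
        return True  # the user's own position is trivially inside
    # angle between the facing direction and the direction to the point
    dot = facing[0] * dx + facing[1] * dz
    norm = math.hypot(dx, dz) * math.hypot(facing[0], facing[1])
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= range_deg / 2.0
```

With the user at the origin facing +z, a point straight ahead falls inside the sector, while points to the side (90 degrees off-axis) or behind fall outside the 170-degree range.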
Taking the thrown virtual object being any virtual gift as an example, in the top view shown in Figure 9(A), when the virtual space is represented as a circle, the user is assumed to be at its center point. Since the throwing special effect of any virtual object is presented between the user and the live screen where the anchor is located, the live screen, together with the control screen and the public screen, can be taken as the outer boundary of the virtual space. The present disclosure can then designate the area facing the live screen and within the 170-degree range as the throwable area, and the other areas as non-throwing areas.
In the side view shown in Figure 9(B), since the gift panel blocks part of the user's downward line of sight when presented close to the user, the region whose view is blocked by the gift panel also belongs to the non-throwing area. That is, the blind zones and sight-blocked regions in the virtual space, such as above the user's head, below the user's feet, and behind the user, are all demarcated as non-throwing areas.
For example, considering that the virtual space may be large, the distance between the user and the live screen may also be large. If a virtual object presents its throwing special effect at a position within the throwable area that is too far from the user and close to the live screen, the user may not see the effect clearly. Therefore, in order to ensure that the user has a clear, intuitive view of the throwing special effect, the present disclosure can set a throwing boundary facing the user within the throwable area. The throwing boundary is equivalent to a virtual wall: when the user throws a virtual object in the virtual space and the object touches the throwing boundary, the object does not continue to fly forward but disappears at its intersection with the boundary, where the corresponding throwing special effect is presented.
As shown in Figures 10(A) and 10(B), the throwing boundary in the throwable area can be located a small distance from the user, and the height of the throwing boundary can be kept consistent with the height of the live screen, ensuring the integrity of the boundary within the throwable area.
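The "virtual wall" behaviour of the throwing boundary amounts to a ray-plane test: if the object's forward flight crosses the wall plane, the crossing point becomes the throwing position. The axis-aligned plane at `wall_z` and the straight-line flight below are simplifying assumptions for illustration; a real engine would use its collision system.

```python
def boundary_hit(start, velocity, wall_z):
    """If the straight-line flight start + t * velocity crosses the
    vertical wall plane z == wall_z (t >= 0), return the hit point as
    the throwing position; otherwise return None."""
    vz = velocity[2]
    if vz <= 0:
        return None  # not flying towards the wall
    t = (wall_z - start[2]) / vz
    if t < 0:
        return None
    return tuple(s + t * v for s, v in zip(start, velocity))
```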
From the above, after the user performs a throwing operation on any virtual object in the virtual space, the throwing position of the virtual object in the virtual space can fall into the following three situations:
Situation 1: a first landing point of the virtual object within the throwable area. That is, the virtual object lands on the ground within the throwable area, and the first landing point obtained serves as the throwing position of the virtual object in the present disclosure.
Situation 2: a second landing point of the virtual object within the non-throwing area demarcated in the virtual space. That is, the virtual object lands on the ground within the non-throwing area, and the second landing point obtained serves as the throwing position of the virtual object in the present disclosure.
Situation 3: the intersection point between the virtual object and the throwing boundary demarcated within the throwable area. That is, the virtual object does not land on the ground in the virtual space but touches the throwing boundary during the throw. The intersection point where the virtual object touches the throwing boundary lies within the throwable area and serves as the throwing position of the virtual object in the present disclosure.
For the throwing position of the virtual object determined above, whether the throw in the virtual space can be executed successfully is judged by determining whether the throwing position lies within the throwable area. If the throwing position is within the throwable area, the throw can be executed successfully, so the throwing special effect of the virtual object is presented in the virtual space. The throwing special effect may be the effect of the hand model throwing the virtual object into the virtual space, or the effect of a throwing prop launching the virtual object into the virtual space.
However, if the throwing position is within the non-throwing area, the throw cannot be executed successfully, so the virtual object is controlled to return to its original position in the virtual space after being presented for an expected duration. That is, when the throw cannot be executed successfully, in response to the throwing operation the present disclosure still controls the virtual object to be presented in the virtual space for a short time. When the presentation duration of the virtual object reaches the expected duration, the throwing special effect indicating a successful throw is not played; instead, the virtual gift is controlled to return from the virtual space to its original position, indicating to the user that the throw was not successful.
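Across the three situations, the outcome reduces to a single membership check on the resolved throwing position. The outcome labels and the area predicate below are illustrative names, not terms from the disclosure; the dwell-then-return behaviour for a failed throw would be handled by the presentation layer.

```python
def resolve_throw(throw_position, in_throwable_area):
    """Outcome rule sketched from the text: a throwing position inside
    the throwable area plays the throwing special effect; any other
    position briefly shows the object, then sends it back to its
    original place in the virtual space."""
    if in_throwable_area(throw_position):
        return "present_throwing_effect"
    return "return_to_original_position"
```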
In addition, when presenting the throwing special effect of a virtual object in the virtual space, considering that the object is thrown in three-dimensional space, a corresponding spatial throwing trajectory exists. Therefore, the throwing special effect of the virtual object may include, but is not limited to: the spatial throwing trajectory of the virtual object in the virtual space, and a throwing special effect set based on the spatial throwing trajectory and/or the virtual object. The throwing special effect may be an animation displayed at the final throwing point after the throw is completed.
In the technical solution provided by the embodiments of the present disclosure, in response to a throwing operation on any virtual object in the virtual space, the throwing position of the virtual object in the virtual space is first determined. Then, based on the throwing position and the throwable area demarcated in the virtual space, the throwing special effect of the virtual object is presented in the virtual space, thereby ensuring the intuitiveness and accuracy of throwing virtual objects, enhancing the interactive fun and the user interaction atmosphere, and mobilizing users' enthusiasm for live interaction in the virtual space.
As some embodiments of the present disclosure, for the throwing operation on any virtual object in the virtual space, as shown in Figure 11, the following steps may be used to describe the process of throwing any virtual object in the virtual space:
S510: In response to a holding operation of the hand model on any virtual object in the virtual space, determine the motion-pose change of the hand model while holding the virtual object.
In the present disclosure, in order to enrich the user's interaction when throwing virtual objects in the virtual space and to avoid the single-mode interaction of throwing any virtual object with only the controller cursor and the Trigger button, the hand model simulated in the virtual space can be controlled, through controller operations or gesture operations, to move accordingly and perform the corresponding holding operation on any virtual object.
Then, in order to enhance the user interaction atmosphere when throwing any virtual object in the virtual space, the hand model, after holding any virtual object, is expected to perform various throwing-related motions in the virtual space, so as to simulate the actual throwing process and enrich the diverse interactions of throwing virtual objects in the virtual space.
Therefore, in response to the holding operation of the hand model on any virtual object, the user can be supported in inputting, on the XR device, motion information indicating the specific motion to be performed by the hand model. For example, operating the directional buttons on the controller, moving the controller itself, or making corresponding motion gestures with the hand can all represent the motion that the hand model needs to perform after holding the virtual object. From such motion information, the motion instruction initiated by the user toward the hand model can be generated.
Furthermore, by parsing the motion instruction initiated by the user toward the hand model, the motion information that the hand model actually needs to execute after holding the virtual object can be determined, so as to control the hand model to hold the virtual object and perform the corresponding motion in the virtual space. Moreover, during the actual motion of the hand model in the virtual space, the present disclosure also needs to determine, in real time, the motion-pose change of the hand model while holding the virtual object, in order to judge whether that motion-pose change satisfies the throwing trigger condition of the held virtual object.
In addition, regarding the throwing position of the virtual object in the virtual space, after determining the motion-pose change of the hand model while holding the virtual object, the present disclosure can determine the throwing trajectory of the virtual object in the virtual space according to that motion-pose change, and then determine the throwing position of the virtual object in the virtual space according to the throwing trajectory.
That is, by analyzing the motion-pose change performed by the hand model after holding the virtual object, information such as the motion direction and motion speed with which the hand model drives the held virtual object through the virtual space can be determined. Then, based on the motion direction and speed represented by the motion-pose change, the motion trajectory that the virtual object can still follow, under inertia, after the hand model releases it is determined and serves as the throwing trajectory in the present disclosure. According to this throwing trajectory, the throwing position of the virtual object in the virtual space can then be determined.
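As a simplified sketch of this inertia-based trajectory, if the release position and the velocity inherited from the hand model's motion are known, a drag-free ballistic model gives a candidate landing point on the ground plane. The physics model, gravity value, and names are assumptions; a real implementation would also test the trajectory against the throwing boundary and other colliders first.

```python
import math

def landing_point(release_pos, release_vel, g=9.8):
    """Where a drag-free projectile released at `release_pos` with
    velocity `release_vel` meets the ground plane y == 0, or None if
    it never does (e.g. released below ground and falling away)."""
    x0, y0, z0 = release_pos
    vx, vy, vz = release_vel
    # solve y0 + vy*t - g*t^2/2 == 0 for the positive root
    disc = vy * vy + 2 * g * y0
    if disc < 0:
        return None
    t = (vy + math.sqrt(disc)) / g
    return (x0 + vx * t, 0.0, z0 + vz * t)
```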
S520: When the motion-pose change satisfies the throwing trigger condition of the virtual object, determine the throwing operation of the virtual object.
In order to ensure accurate throwing of virtual objects in the virtual space, the present disclosure can set, in advance, a throwing trigger condition for any virtual object with respect to its actual throwing operation.
In the present disclosure, the virtual objects in the virtual space can be divided into throwable objects and non-throwable objects, to enrich the variety of virtual-object throwing in the virtual space. Throwable objects may be virtual objects that the hand model can successfully send to the anchor by throwing them directly in the virtual space, for example a standalone emoticon gift or a heart-shaped gift. Non-throwable objects may be virtual objects stored in a corresponding throwing prop, which are successfully thrown to the anchor through the corresponding interaction between the hand model and the held throwing prop, for example bubble gifts released with a bubble wand, or a hot-air balloon launched with a heating device.
In some implementations, for a throwable object, the throwing trigger condition may be set as: the hand model is in a grip-release pose after performing one throwing motion or a series of continuous throwing motions. That is, the motion-pose change performed by the hand model after holding the throwable object is used to judge whether the hand model has performed a single throwing motion or continuous throwing motions, and whether, after performing that motion, the grip on the throwable object is released so that the hand model ends in the grip-release pose. If this throwing trigger condition is satisfied, the held virtual object has been thrown into the virtual space by the hand model, which means the corresponding interaction of the throwable object needs to be performed toward the anchor in the virtual space.
In other implementations, for a non-throwable object, the throwing trigger condition may be set as: the target part of the hand model that interacts with the throwing prop performs the throwing operation defined by the throwing prop. When holding a non-throwable object, the hand model is represented as holding the throwing prop in which the non-throwable object is stored. For example, for bubble gifts given through a bubble wand, the hand model picks up the bubble wand from the gift panel in order to release the corresponding bubble gifts. Then, based on the motion-pose change performed by the hand model after holding the non-throwable object, it is judged whether the target part of the hand model that interacts with the throwing prop performs the throwing operation defined by that prop. If this throwing trigger condition is satisfied, the non-throwable object in the throwing prop can be launched into the virtual space through the corresponding interaction between the target part of the hand model and the throwing prop, which means the corresponding interaction of the non-throwable object needs to be performed toward the anchor in the virtual space.
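The two trigger conditions above could be summarized as a simple predicate over flags assumed to come from upstream gesture recognition (the flag names and the string labels are illustrative, not terms from the disclosure):

```python
def throw_triggered(kind, throw_motion_done, grip_released, prop_action_done):
    """Sketch of the two trigger conditions: a throwable object fires
    when a throwing motion (single or continuous) ended with the grip
    released; a non-throwable object fires when the target part of the
    hand model performed the operation its throwing prop defines."""
    if kind == "throwable":
        return throw_motion_done and grip_released
    if kind == "non_throwable":
        return prop_action_done
    raise ValueError(f"unknown object kind: {kind!r}")
```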
As some embodiments of the present disclosure, after the motion-pose change performed by the hand model while holding the virtual object is determined, the present disclosure first determines the throwing trigger condition of the held virtual object. Then, whether the virtual object needs to be thrown toward the anchor in the virtual space is judged by determining whether the motion-pose change performed by the hand model after holding the virtual object satisfies that throwing trigger condition. When the motion-pose change satisfies the throwing trigger condition of the held virtual object, the user is instructing that the virtual object be thrown to the anchor, and the throwing operation of the virtual object is thereby determined.
Subsequently, in response to the throwing operation of the virtual object, the throwing special effect of the virtual object can be presented in the virtual space according to the throwing position of the virtual object in the virtual space and the throwable area in the virtual space.
It should be noted that, when throwing a virtual object toward the anchor in the virtual space through the hand model, either a single throwing operation or a continuous throwing operation may be performed, so that virtual gifts have different throwing types. Therefore, when presenting the throwing special effect of a virtual object in the virtual space, the present disclosure also judges, according to the motion-pose change of the hand model after holding the virtual object, whether the motion performed by the hand model is a single throw or a continuous throw, thereby determining the throwing type of the virtual object. The throwing special effect of the virtual object under the corresponding throwing type can then be presented in the virtual space.
Taking the thrown virtual object being any virtual gift as an example, Figure 12(A) shows the special effect when a virtual gift is thrown once through the hand model, Figure 12(B) shows the special effect when virtual gifts are thrown multiple times in succession through the hand model, and Figure 12(C) shows the special effect when the hand model holds a gift-giving prop and the prop releases multiple virtual gifts in succession in the virtual space.
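One possible way to distinguish a single throw from a continuous throw is to inspect the gaps between successive detected throwing motions: closely spaced motions form one continuous throw. The gap threshold below is an assumption for illustration, not a value from the disclosure.

```python
def throw_type(throw_timestamps, max_gap_s=1.0):
    """Classify a gesture sequence as a 'single' or 'continuous' throw
    from the timestamps (seconds) of detected throwing motions."""
    if len(throw_timestamps) < 2:
        return "single"
    gaps = (b - a for a, b in zip(throw_timestamps, throw_timestamps[1:]))
    return "continuous" if all(g <= max_gap_s for g in gaps) else "single"
```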
In addition, to keep the interaction entertaining when throwing virtual objects in the virtual space, when interacting with any virtual object through the hand model, the present disclosure can control the XR device to vibrate at different intensities according to the different interaction operations performed by the hand model on that virtual object. For example, when the hand model hovers over a virtual object, the XR device (for example, a real controller) is controlled to vibrate lightly; when the virtual object is clicked through the hand model, the XR device can be controlled to vibrate more strongly. The virtual objects interacting with the hand model may include the gift entrance, the gift panel, each virtual gift in the gift panel, related user-interaction controls, and the like.
Taking any virtual gift in the gift panel as an example, when the hand model hovers over the virtual gift, the XR device can be controlled to vibrate lightly; when the virtual gift is held through the hand model, the XR device can be controlled to vibrate more strongly.
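This graded haptic scheme could be captured as a simple mapping from interaction kind to vibration amplitude. The concrete amplitude values are illustrative assumptions, since the disclosure only distinguishes light vibration from stronger vibration:

```python
# Illustrative amplitudes in [0, 1]; hover is light, click/grip stronger.
HAPTIC_AMPLITUDE = {"hover": 0.2, "click": 0.8, "grip": 0.8}

def haptic_for(interaction):
    """Return the controller vibration amplitude for an interaction
    kind; unknown interactions produce no vibration."""
    return HAPTIC_AMPLITUDE.get(interaction, 0.0)
```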
When any virtual object is thrown in the virtual space, the throw may fail because the throwing distance is insufficient, the throwing area is wrong, there are network problems, and so on. Therefore, when presenting the corresponding throwing special effect in the virtual space, the present disclosure also detects in real time whether the virtual object was thrown successfully. If the throw fails even though the virtual object has already been thrown into the virtual space toward the anchor, the present disclosure can control the virtual object to return from the virtual space to its original position in the virtual space.
In addition, after the throwing special effect of a virtual object is presented, considering that various other virtual objects such as the public screen and the control screen also exist in the virtual space, a thrown virtual object may collide with any other virtual object. If the virtual object collides with any other virtual object in the virtual space, the present disclosure can present the collision special effect of the virtual object in the virtual space. Taking the thrown virtual object being an elastic ball as an example, after it collides with the public screen, as shown in Figure 13, the collision special effect can be set as a rebound effect, so that the elastic ball bounces off the public screen and then disappears in the virtual space. Moreover, if the virtual gift represented by the elastic ball fails to be thrown after colliding with the public screen, a throw-failure prompt such as "Please send the gift toward the anchor" is presented in the virtual space, so as to notify the user to perform the throwing operation again and give the gift again, ensuring the success rate of gift giving in the virtual space.
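The rebound effect for the elastic ball corresponds to the standard velocity reflection v' = v - 2(v.n)n about the unit normal of the collided surface, sketched here (the function name and flat-surface assumption are illustrative):

```python
def reflect(velocity, unit_normal):
    """Rebound sketch: reflect the incoming velocity about the unit
    normal of the surface that was hit, v' = v - 2*(v.n)*n."""
    d = sum(v * n for v, n in zip(velocity, unit_normal))
    return tuple(v - 2 * d * n for v, n in zip(velocity, unit_normal))
```

For a ball falling onto a horizontal surface with upward normal, the vertical component flips while the tangential components are preserved.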
S530: When the motion pose change satisfies the homing condition of the virtual object, control, in the virtual space, the virtual object to fold back from the hand model to its original position in the virtual space.
Since the hand model holds a virtual object and drives it through the corresponding movement in the virtual space, the user may actively give up throwing the virtual object to the host. In that case, the hand model cancels its grip on the held virtual object before it has been thrown toward the host, indicating that the user has abandoned the throw. Therefore, the present disclosure can set a homing condition by determining whether the hand model cancels its grip on the virtual object without performing the corresponding throwing operation.
Furthermore, having determined the motion pose change of the hand model while it holds the virtual object, the present disclosure can determine whether that change satisfies the homing condition of the virtual object, and thereby whether the user has actively given up throwing it. When the motion pose change indicates that the hand model performed the grip-cancel operation without performing the operation of throwing the virtual object, the change satisfies the homing condition. Therefore, in the virtual space, the virtual object that is no longer held by the hand model can be controlled to fold back from the position of the hand model to its original position in the virtual space.
As some embodiments of the present disclosure, when the hand model cancels its grip on the virtual object, as shown in Figure 14, two situations may occur: 1) the hand model cancels the grip directly at any motion point after performing the corresponding movement, that is, the grip-cancel operation is performed in place at that motion point; 2) the hand model drives the held virtual object through the corresponding movement back to above its original position in the virtual space and then cancels the grip, that is, the grip-cancel operation is performed only after the hand model has brought the virtual object from the motion point back to above its original position in the virtual space.
For the first situation, the present disclosure can set a first homing condition: the hand model performs the grip-cancel operation at any motion point while holding the virtual object. That is, after holding the virtual object, the hand model moves to some motion point and cancels the grip in place there. Therefore, when the motion pose change of the hand model while holding the virtual object satisfies this first homing condition, the present disclosure can control the virtual object to perform a preset vertical movement downward from the hand model and then fold back to its original position in the virtual space.
That is, to simulate the effect of gravity on the virtual object after the hand model releases it, the virtual object can be controlled to perform a brief preset vertical movement downward. The downward distance of this movement can be determined from the height of the position at which the hand model cancels the grip and the height of the original position; normally, the grip-cancel height is greater than the original-position height. Taking a virtual gift as the thrown virtual object as an example, assume the grip-cancel height is A and the height of the original position on the gift panel is B; the downward distance of the preset vertical movement can then be 0.2*(A-B). After the virtual gift completes this downward movement, it is controlled to fold back to its original position on the gift panel.
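The worked arithmetic above can be written as a small helper. The 0.2 factor comes from the example in the text; treating it as a tunable parameter is an assumption on our part.

```python
def preset_drop_distance(release_height, origin_height, factor=0.2):
    """Downward travel simulating gravity after grip cancel (sketch).

    Follows the example in the text: distance = 0.2 * (A - B), where
    A is the grip-cancel height and B is the original-position height.
    Exposing `factor` as a parameter is an assumption, not part of the
    disclosure.
    """
    # The release point is normally above the original position,
    # so the result is non-negative in the intended use.
    return factor * (release_height - origin_height)
```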
For the second situation, the present disclosure can set a second homing condition: the hand model, while holding the virtual object, moves to above the object's original position in the virtual space and then performs the grip-cancel operation. That is, after holding the virtual object, the hand model moves back to above the object's original position in the virtual space and cancels the grip there. Therefore, when the motion pose change of the hand model while holding the virtual object satisfies this second homing condition, the present disclosure can control the virtual object to fold back from its current position to its original position in the virtual space.
That is, when the hand model has brought the virtual object back to above its original position in the virtual space, once the grip on the virtual object is cancelled, the virtual object can be controlled to fold back directly from its current position above the original position to that original position in the virtual space.
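The two homing conditions can be summarized as a small decision function. The boolean inputs are a simplification: in a real system they would be derived from the tracked motion pose change, which the disclosure does not pin down to any particular representation.

```python
# Illustrative classifier for the two homing conditions. Reducing the
# motion pose change to three booleans is an assumption for clarity.

def homing_action(grip_cancelled, threw, above_origin):
    """Decide how a released virtual object returns to its original position."""
    if not grip_cancelled or threw:
        return None  # no homing: still held, or the object was actually thrown
    if above_origin:
        # Second condition: grip cancelled above the original position —
        # fold straight back to it from the current position.
        return "return_from_current"
    # First condition: grip cancelled at an arbitrary motion point —
    # perform the short preset vertical drop, then fold back.
    return "drop_then_return"
```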
Figure 9 is a schematic diagram of a human-computer interaction apparatus provided by an embodiment of the present disclosure. The human-computer interaction apparatus 900 can be deployed in an XR device and includes:
a throwing position determination module 910, configured to determine, in response to a throwing operation on any virtual object in the virtual space, the throwing position of the virtual object in the virtual space;
a throwing module 920, configured to present the throwing special effect of the virtual object in the virtual space according to the throwing position and the throwable area demarcated in the virtual space.
In some implementations, the throwing module 920 can be configured to:
if the throwing position is within the throwable area, present the throwing special effect of the virtual object in the virtual space;
if the throwing position is within a non-throwing area demarcated in the virtual space, control the virtual object to fold back to its original position in the virtual space after being presented in the virtual space for an expected duration.
In some implementations, the throwing position of the virtual object in the virtual space includes one of the following position points:
a first landing point of the virtual object within the throwable area;
a second landing point of the virtual object within a non-throwing area demarcated in the virtual space;
an intersection point of the virtual object with a throwing boundary demarcated within the throwable area.
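The dispatch on the throwing position (present the effect, fold back, or handle the boundary case) can be sketched as below. Representing areas as axis-aligned coordinate ranges and a landing point as an (x, z) pair are simplifying assumptions; the disclosure leaves the area geometry open.

```python
# Sketch of the throwing-position dispatch. Axis-aligned rectangular
# areas and 2-D landing points are illustrative assumptions.

def classify_throw(position, throwable, non_throwable):
    """Classify a landing point against the demarcated areas.

    `throwable` / `non_throwable` are ((x_min, x_max), (z_min, z_max)).
    """
    def inside(p, area):
        (x0, x1), (z0, z1) = area
        return x0 <= p[0] <= x1 and z0 <= p[1] <= z1

    if inside(position, throwable):
        return "present_throw_effect"   # first landing point
    if inside(position, non_throwable):
        return "present_then_return"    # second landing point: fold back
    return "boundary_intersection"      # treat as hitting the throwing boundary
```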
In some implementations, the human-computer interaction apparatus 900 may further include:
an area demarcation module, configured to determine the throwable area in the virtual space according to the user-visible area in the virtual space and a preset throwing range.
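One plausible reading of combining "user-visible area" and "preset throwing range" is an intersection of the two regions; the sketch below shows this on one-dimensional depth intervals. Both the intersection interpretation and the interval representation are assumptions for illustration.

```python
# Hedged sketch: derive the throwable area as the overlap of the
# user-visible interval and the preset throwing range (1-D depth
# intervals; a real system would work with 2-D/3-D regions).

def throwable_area(visible, throw_range):
    """Intersect the user-visible interval with the throwing range."""
    lo = max(visible[0], throw_range[0])
    hi = min(visible[1], throw_range[1])
    return (lo, hi) if lo < hi else None  # None: no throwable area exists
```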
In some implementations, the throwing special effect of the virtual object includes: a spatial throwing trajectory of the virtual object in the virtual space, and a throwing special effect set based on the spatial throwing trajectory and/or the virtual object.
In some implementations, the throwing operation on any virtual object in the virtual space is determined by a throwing operation determination module. The throwing operation determination module can be configured to:
determine, in response to a holding operation performed by the hand model on any virtual object in the virtual space, the motion pose change of the hand model while it holds the virtual object;
when the motion pose change satisfies the throwing trigger condition of the virtual object, determine the throwing operation on the virtual object.
In some implementations, the throwing position determination module 910 can be configured to:
determine the throwing trajectory of the virtual object in the virtual space according to the motion pose change;
determine the throwing position of the virtual object in the virtual space according to the throwing trajectory.
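One way to realize "trajectory, then position" is simple ballistics: reduce the motion pose change to a release point and velocity, then project the parabola to the floor. The physics model and the reduction of the pose change to a single velocity are assumptions; the disclosure only requires that a trajectory yields a position.

```python
import math

# Ballistic sketch: derive a landing point from a release position and
# velocity. The parabolic model is an illustrative assumption.

def landing_point(release_pos, velocity, g=9.8):
    """Project a parabolic trajectory to the floor plane (y = 0)."""
    x0, y0, z0 = release_pos
    vx, vy, vz = velocity
    # Time of flight: solve y0 + vy*t - 0.5*g*t^2 = 0 (positive root).
    t = (vy + math.sqrt(vy * vy + 2 * g * y0)) / g
    return (x0 + vx * t, 0.0, z0 + vz * t)
```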
In some implementations, when the virtual object is a throwable object, the throwing trigger condition of the virtual object at least includes: the hand model is in a grip-cancel pose after performing one throwing motion or a series of consecutive throwing motions;
when the virtual object is a non-throwable object assisted by a throwing prop, the throwing trigger condition of the virtual object at least includes: the target part of the hand model that interacts with the throwing prop performs the throwing operation set for the throwing prop.
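The two trigger conditions can be checked with a small predicate. Encoding the hand state as a dictionary of flags is an assumption made for illustration; the keys and object-kind labels are not from the disclosure.

```python
# Sketch of the two throwing trigger conditions. The hand-state keys
# and object-kind labels are illustrative assumptions.

def throw_triggered(obj_kind, hand_state):
    """Check the throwing trigger condition for the held object."""
    if obj_kind == "throwable":
        # Grip-cancel pose right after one throwing motion (or a run
        # of consecutive throwing motions).
        return (hand_state.get("pose") == "grip_cancel"
                and hand_state.get("after_throw_motion", False))
    if obj_kind == "prop_assisted":
        # The target part of the hand performed the throwing operation
        # set for the prop (e.g. releasing a slingshot).
        return hand_state.get("prop_throw_op_done", False)
    return False
```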
In some implementations, the human-computer interaction apparatus 900 may further include:
a homing module, configured to control, in the virtual space, the virtual object to fold back from the hand model to its original position in the virtual space when the motion pose change satisfies the homing condition of the virtual object.
In some implementations, the human-computer interaction apparatus 900 may further include:
a throwing failure module, configured to control the virtual object to fold back from within the virtual space to its original position in the virtual space if the virtual object fails to be thrown in the virtual space.
In some implementations, the human-computer interaction apparatus 900 may further include:
a collision module, configured to present the collision special effect of the virtual object in the virtual space if the virtual object collides with any other virtual object in the virtual space.
In some implementations, the human-computer interaction apparatus 900 may further include:
a vibration module, configured to control the XR device to vibrate to different degrees according to the different interactive operations performed by the hand model on any virtual object in the virtual space.
In the embodiments of the present disclosure, in response to a throwing operation on any virtual object in the virtual space, the throwing position of the virtual object in the virtual space is first determined. Then, according to the throwing position and the throwable area demarcated in the virtual space, the throwing special effect of the virtual object is presented in the virtual space. This ensures that throwing virtual objects in the virtual space is intuitive and accurate, enhances the interactive appeal of virtual objects and the user interaction atmosphere, and encourages users in the virtual space to participate actively in the live broadcast.
It should be understood that the apparatus embodiments and the method embodiments of the present disclosure may correspond to each other, and similar descriptions can refer to the method embodiments of the present disclosure. To avoid repetition, details are not repeated here.
For example, the apparatus 900 shown in Figure 15 can execute any method embodiment provided by the present disclosure, and the foregoing and other operations and/or functions of the modules in the apparatus 900 shown in Figure 15 implement the corresponding processes of the above method embodiments; for brevity, details are not repeated here.
The above method embodiments of the present disclosure have been described from the perspective of functional modules with reference to the accompanying drawings. It should be understood that these functional modules can be implemented in hardware, by instructions in software, or by a combination of hardware and software modules. For example, the steps of the method embodiments of the present disclosure may be completed by integrated logic circuits in processor hardware and/or by instructions in software form; the steps of the methods disclosed in the embodiments of the present disclosure may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. For example, a software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above method embodiments in combination with its hardware.
Figure 10 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure.
As shown in Figure 10, the electronic device 1000 may include:
a memory 1010 and a processor 1020, where the memory 1010 is used to store a computer program and transmit the program code to the processor 1020. In other words, the processor 1020 can call and run the computer program from the memory 1010 to implement the methods in the embodiments of the present disclosure.
For example, the processor 1020 can be configured to execute the above method embodiments according to instructions in the computer program.
In some embodiments of the present disclosure, the processor 1020 may include, but is not limited to:
a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on.
In some embodiments of the present disclosure, the memory 1010 includes, but is not limited to:
volatile memory and/or non-volatile memory. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synch-link DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
In some embodiments of the present disclosure, the computer program can be divided into one or more modules, which are stored in the memory 1010 and executed by the processor 1020 to complete the methods provided by the present disclosure. The one or more modules may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program on the electronic device 1000.
As shown in Figure 16, the electronic device may further include:
a transceiver 1030, which can be connected to the processor 1020 or the memory 1010.
The processor 1020 can control the transceiver 1030 to communicate with other devices, for example, to send information or data to other devices, or to receive information or data sent by other devices. The transceiver 1030 may include a transmitter and a receiver, and may further include one or more antennas.
It should be understood that the components of the electronic device 1000 are connected through a bus system, which includes, in addition to a data bus, a power bus, a control bus, and a status signal bus.
The present disclosure also provides a computer storage medium storing a computer program which, when executed by a computer, enables the computer to perform the methods of the above method embodiments.
An embodiment of the present disclosure also provides a computer program product containing instructions which, when executed by a computer, cause the computer to perform the methods of the above method embodiments.
When implemented in software, the embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present disclosure are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid-state disk (SSD)), and so on.
The above are only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed herein, and such changes or substitutions shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (26)
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210992896.0A CN117631921A (en) | 2022-08-18 | 2022-08-18 | Information interaction method, device, electronic equipment and storage medium |
| CN202210992896.0 | 2022-08-18 | ||
| CN202211131665.7A CN117742481A (en) | 2022-09-15 | 2022-09-15 | Human-computer interaction methods, devices, equipment and storage media |
| CN202211131665.7 | 2022-09-15 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024037559A1 true WO2024037559A1 (en) | 2024-02-22 |
Family
ID=89940729
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2023/113250 Ceased WO2024037559A1 (en) | 2022-08-18 | 2023-08-16 | Information interaction method and apparatus, and human-computer interaction method and apparatus, and electronic device and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2024037559A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118870048A (en) * | 2024-09-26 | 2024-10-29 | 北京达佳互联信息技术有限公司 | Method, device, equipment and storage medium for sending virtual resources |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101515198A (en) * | 2009-03-11 | 2009-08-26 | 上海大学 | Human-computer interaction method for grapping and throwing dummy object and system thereof |
| CN111010585A (en) * | 2019-12-06 | 2020-04-14 | 广州华多网络科技有限公司 | Virtual gift sending method, device, equipment and storage medium |
| CN111202983A (en) * | 2020-01-02 | 2020-05-29 | 腾讯科技(深圳)有限公司 | Method, device, equipment and storage medium for using props in virtual environment |
| CN113041622A (en) * | 2021-04-23 | 2021-06-29 | 腾讯科技(深圳)有限公司 | Virtual throwing object throwing method in virtual environment, terminal and storage medium |
| WO2022057624A1 (en) * | 2020-09-17 | 2022-03-24 | 腾讯科技(深圳)有限公司 | Method and apparatus for controlling virtual object to use virtual prop, and terminal and medium |
- 2023-08-16: WO PCT/CN2023/113250 patent/WO2024037559A1/en, not active (ceased)
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101515198A (en) * | 2009-03-11 | 2009-08-26 | 上海大学 | Human-computer interaction method for grapping and throwing dummy object and system thereof |
| CN111010585A (en) * | 2019-12-06 | 2020-04-14 | 广州华多网络科技有限公司 | Virtual gift sending method, device, equipment and storage medium |
| CN111202983A (en) * | 2020-01-02 | 2020-05-29 | 腾讯科技(深圳)有限公司 | Method, device, equipment and storage medium for using props in virtual environment |
| WO2022057624A1 (en) * | 2020-09-17 | 2022-03-24 | 腾讯科技(深圳)有限公司 | Method and apparatus for controlling virtual object to use virtual prop, and terminal and medium |
| CN113041622A (en) * | 2021-04-23 | 2021-06-29 | 腾讯科技(深圳)有限公司 | Virtual throwing object throwing method in virtual environment, terminal and storage medium |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10671239B2 (en) | Three dimensional digital content editing in virtual reality | |
| JP7503122B2 (en) | Method and system for directing user attention to a location-based gameplay companion application - Patents.com | |
| US20230308495A1 (en) | Asymmetric Presentation of an Environment | |
| US11816757B1 (en) | Device-side capture of data representative of an artificial reality environment | |
| KR20250145695A (en) | Creating user interfaces that display augmented reality graphics. | |
| KR20230070308A (en) | Location identification of controllable devices using wearable devices | |
| US10846901B2 (en) | Conversion of 2D diagrams to 3D rich immersive content | |
| CN117590935A (en) | Perspective sharing in artificial reality environment between 2D interface and artificial reality interface | |
| WO2024037559A1 (en) | Information interaction method and apparatus, and human-computer interaction method and apparatus, and electronic device and storage medium | |
| EP4575748A1 (en) | Human-computer interaction method, apparatus, device and medium, virtual reality space-based display processing method, apparatus, device and medium, virtual reality space-based model display method, apparatus, device and medium | |
| US20240177435A1 (en) | Virtual interaction methods, devices, and storage media | |
| CN115981544A (en) | Interaction method, device, electronic device and storage medium based on extended reality | |
| CN117666852A (en) | Method, device, equipment and medium for determining target object in virtual reality space | |
| CN117687542A (en) | Information interaction method, device, electronic equipment and storage medium | |
| US12039141B2 (en) | Translating interactions on a two-dimensional interface to an artificial reality experience | |
| CN117631921A (en) | Information interaction method, device, electronic equipment and storage medium | |
| KR20150071824A (en) | Cross-platform augmented reality experience | |
| WO2024012106A1 (en) | Information interaction method and apparatus, electronic device, and storage medium | |
| CN117631904A (en) | Information interaction method, device, electronic equipment and storage medium | |
| JP2018207517A (en) | Method to be executed by computer for controlling display in head-mounted device, program for causing computer to execute the same method, and information processing device | |
| CN118227005A (en) | Information interaction method, device, electronic equipment and storage medium | |
| WO2024016880A1 (en) | Information interaction method and apparatus, and electronic device and storage medium | |
| CN117435041A (en) | Information interaction methods, devices, electronic equipment and storage media | |
| WO2023231666A1 (en) | Information exchange method and apparatus, and electronic device and storage medium | |
| WO2025130816A1 (en) | Virtual object interaction method and apparatus, and device and medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23854459 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26.05.2025) |