CN118267696A - Man-machine gesture interaction method and device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN118267696A
CN118267696A (application number CN202410096142.6A)
Authority
CN
China
Prior art keywords
game
gesture
target
interface
prop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410096142.6A
Other languages
Chinese (zh)
Inventor
何俊乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202410096142.6A
Publication of CN118267696A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20 Input arrangements for video game devices
    • A63F 13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20 Input arrangements for video game devices
    • A63F 13/22 Setup operations, e.g. calibration, key configuration or button assignment
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a human-machine gesture interaction method and apparatus, an electronic device, and a storage medium. The gesture interaction method includes: displaying a first game interface in the graphical user interface, and controlling dragging of a target game prop in the first game interface in response to a first gesture instruction for the target game prop; when the target game prop is dragged to the edge of the first game interface, determining, in response to a second gesture instruction for the target game prop, the hand gesture corresponding to the second gesture instruction; and, in response to the hand gesture being a target hand gesture, controlling the controlled virtual object to execute, in the game scene, a game interaction event corresponding to the target hand gesture. In this way, user misoperation can be effectively prevented, and human-machine interaction between game interfaces becomes smoother.

Description

Man-machine gesture interaction method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of virtual reality device interaction technologies, and in particular, to a human-machine gesture interaction method, apparatus, electronic device, and storage medium.
Background
Head-mounted displays (i.e., head display devices) are receiving increasing attention. Because their display space is in principle infinitely extendable, they allow users to operate within a vast visual space. A head display device can achieve the different effects of virtual reality (VR), augmented reality (AR), and mixed reality (MR) by sending optical signals to the eyes.
At present, a user can interact with the virtual reality scene presented by the head display device by means of a handle, eye tracking, and the like. For example, an infrared receiving device or an electromagnetic device built into the handle allows the head display device to locate it through laser tracking, active infrared laser positioning, active optical positioning, and the like, enabling fast-response operation. In addition, the head display device may be equipped with built-in sensors that read eye-movement data, so that the user can select a specific button, application, or list item simply by gazing at it. However, when a plurality of game interfaces for user operation are displayed in the virtual reality scene provided by the head display device, interactive operation through a handle, eye tracking, or the like may be unsmooth between the game interfaces or prone to misoperation.
Disclosure of Invention
In view of the above, the embodiments of the present application provide at least a human-machine gesture interaction method and apparatus, an electronic device, and a storage medium, which can effectively prevent user misoperation and make human-machine interaction between game interfaces smoother.
The application mainly comprises the following aspects:
in a first aspect, an embodiment of the present application provides a human-machine gesture interaction method, which provides a graphical user interface of a target game through a head display device, where the gesture interaction method includes:
displaying a first game interface in the graphical user interface, wherein at least part of a game scene is presented in the first game interface, and the game scene includes a target game prop and a controlled virtual object controlled by the head display device;
controlling dragging of the target game prop in the first game interface in response to a first gesture instruction for the target game prop;
when the target game prop is dragged to the edge of the first game interface, determining, in response to a second gesture instruction for the target game prop, the hand gesture corresponding to the second gesture instruction; and
in response to the hand gesture being a target hand gesture, controlling the controlled virtual object to execute a game interaction event corresponding to the target hand gesture in the game scene.
In a second aspect, an embodiment of the present application further provides a human-machine gesture interaction device, which provides a graphical user interface of a target game through the head display device, the gesture interaction device including:
a display module, configured to display a first game interface in the graphical user interface, wherein at least part of a game scene is presented in the first game interface, and the game scene includes a target game prop and a controlled virtual object controlled by the head display device;
a drag module, configured to control dragging of the target game prop in the first game interface in response to a first gesture instruction for the target game prop;
a determining module, configured to, when the target game prop is dragged to the edge of the first game interface, determine, in response to a second gesture instruction for the target game prop, the hand gesture corresponding to the second gesture instruction; and
an execution module, configured to, in response to the hand gesture being a target hand gesture, control the controlled virtual object to execute a game interaction event corresponding to the target hand gesture in the game scene.
In a third aspect, an embodiment of the present application further provides an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the human-machine gesture interaction method described in the first aspect.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, performing the steps of the human-machine gesture interaction method described in the first aspect.
The embodiments of the present disclosure provide a human-machine gesture interaction method and apparatus, an electronic device, and a storage medium. A graphical user interface of a target game is provided through a head display device, and the gesture interaction method includes: displaying a first game interface in the graphical user interface, wherein at least part of a game scene is presented in the first game interface, and the game scene includes a target game prop and a controlled virtual object controlled by the head display device; controlling dragging of the target game prop in the first game interface in response to a first gesture instruction for the target game prop; when the target game prop is dragged to the edge of the first game interface, determining, in response to a second gesture instruction for the target game prop, the hand gesture corresponding to the second gesture instruction; and, in response to the hand gesture being a target hand gesture, controlling the controlled virtual object to execute, in the game scene, a game interaction event corresponding to the target hand gesture. In this way, user misoperation can be effectively prevented, and human-machine interaction between game interfaces becomes smoother.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope; a person skilled in the art may derive other related drawings from them without inventive effort.
FIG. 1 shows a flowchart of a human-machine gesture interaction method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a first game interface provided by an embodiment of the present application;
FIG. 3 shows a first schematic diagram of a first type of hand gesture provided by an embodiment of the present application;
FIG. 4 shows a second schematic diagram of a first type of hand gesture provided by an embodiment of the present application;
FIG. 5 shows a first example of human-machine gesture interaction provided by an embodiment of the present application;
FIG. 6 shows a first schematic diagram of a second type of hand gesture provided by an embodiment of the present application;
FIG. 7 shows a second schematic diagram of a second type of hand gesture provided by an embodiment of the present application;
FIG. 8 shows a second example of human-machine gesture interaction provided by an embodiment of the present application;
FIG. 9 shows a first schematic diagram of a third type of hand gesture provided by an embodiment of the present application;
FIG. 10 shows a second schematic diagram of a third type of hand gesture provided by an embodiment of the present application;
FIG. 11 shows a third schematic diagram of a third type of hand gesture provided by an embodiment of the present application;
FIG. 12 shows a third example of human-machine gesture interaction provided by an embodiment of the present application;
FIG. 13 shows a fourth example of human-machine gesture interaction provided by an embodiment of the present application;
FIG. 14 is a schematic structural diagram of a human-machine gesture interaction device provided by an embodiment of the present application;
FIG. 15 shows a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions, and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments are clearly and completely described below with reference to the accompanying drawings. It should be understood that the drawings in the present application are for illustration and description only and are not intended to limit the scope of the present application, and that the schematic drawings are not drawn to scale. A flowchart, as used in this disclosure, illustrates operations implemented according to some embodiments of the present application. It should be appreciated that the operations of a flowchart may be implemented out of order, and that steps with no necessary logical order may be performed in reverse order or concurrently. Moreover, under the direction of the present disclosure, those skilled in the art may add one or more other operations to a flowchart or remove one or more operations from it.
In addition, the described embodiments are only some, but not all, embodiments of the application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art based on embodiments of the application without making any inventive effort, fall within the scope of the application.
In order to enable those skilled in the art to make and use the present disclosure, the following embodiments are provided in connection with a particular application scenario, "man-machine interaction in a virtual reality scenario", and it will be apparent to those skilled in the art that the general principles defined herein may be applied to other embodiments and application scenarios without departing from the spirit and scope of the present disclosure.
The method, apparatus, electronic device, or computer-readable storage medium of the present application can be applied to any scenario requiring human-machine interaction in a virtual reality scene. The embodiments of the present application do not limit the specific application scenario, and any scheme using the human-machine gesture interaction method and apparatus provided by the embodiments of the present application falls within the protection scope of the present application.
It is worth noting that, before the present application was proposed, head-mounted displays (i.e., head display devices) had been receiving increasing attention. Because their display space is in principle infinitely extendable, head display devices allow users to operate within a vast visual space. Various head display devices can achieve the different effects of virtual reality (VR), augmented reality (AR), mixed reality (MR), and the like by sending optical signals to the eyes.
At present, a user can interact with the virtual reality scene presented by the head display device by means of a handle, eye tracking, and the like. For example, an infrared receiving device or an electromagnetic device built into the handle allows the head display device to locate it through laser tracking, active infrared laser positioning, active optical positioning, and the like, enabling fast-response operation. In addition, the head display device may be equipped with built-in sensors that read eye-movement data, so that the user can select a specific button, application, or list item simply by gazing at it.
However, when the head display device is used for a game and a plurality of game interfaces are displayed in the virtual reality scene it provides, interactive operation through a handle, eye tracking, or the like may be unsmooth between the game interfaces or prone to misoperation.
For example, when a plurality of game interfaces are provided in the virtual reality scene presented by the head display device and the user operates them through the handle, if the display area of a game interface is small while its displayed content is dense, then whenever the handle pointer inadvertently moves beyond that game interface, it is likely to trigger controls on other game interfaces, which increases the difficulty of interaction.
For another example, when a plurality of game interfaces are provided in the virtual reality scene presented by the head display device and the user needs to transfer a control from one game interface to another, the spanning distance may be far, given the three-dimensional extension of the virtual reality scene space; during the transfer, the user's handle action or eye movement may fail to stay continuously locked for that long, so the operation has to be repeated.
In view of the above problems, an embodiment of the present application provides a human-machine gesture interaction method applied to a head display device, where a graphical user interface of a target game is provided through the head display device. The gesture interaction method includes: displaying a first game interface in the graphical user interface, wherein at least part of a game scene is presented in the first game interface, and the game scene includes a target game prop and a controlled virtual object controlled by the head display device; controlling dragging of the target game prop in the first game interface in response to a first gesture instruction for the target game prop; when the target game prop is dragged to the edge of the first game interface, determining, in response to a second gesture instruction for the target game prop, the hand gesture corresponding to the second gesture instruction; and, in response to the hand gesture being a target hand gesture, controlling the controlled virtual object to execute, in the game scene, a game interaction event corresponding to the target hand gesture. In this way, user misoperation can be effectively prevented, and human-machine interaction between game interfaces becomes smoother.
In order to facilitate understanding of the present application, the following detailed description of the technical solution provided by the present application is provided in connection with specific embodiments.
Fig. 1 shows a flowchart of a human-machine gesture interaction method provided by an embodiment of the present application. The gesture interaction method may be executed on a head display device, which may be any electronic device configured for augmented reality (AR), virtual reality (VR), mixed reality (MR), and the like, capable of simulating a virtual reality scene through a provided graphical user interface and of interacting through user gestures.
Here, the virtual reality scene is a scene that an application displays (or provides) on the graphical user interface when running on the head display device. Optionally, the virtual reality scene is a simulation of the real world, a semi-simulated and semi-fictional virtual environment, a purely fictional virtual environment, or a fictional environment in which virtual scene information is overlaid on an existing real scene. The virtual scene is a three-dimensional virtual scene, and the virtual environment may be sky, land, ocean, and the like, where the land includes environmental elements such as deserts and cities. The virtual scene is a scene carrying the complete game logic of a controlled virtual object under user control; for example, in a sandbox 3D shooting game, the virtual scene is a 3D game world in which the player controls the controlled virtual object to fight, and an exemplary virtual scene may include at least one element among mountains, flat land, rivers, lakes, oceans, deserts, sky, plants, buildings, and vehicles. Besides a game scene, the virtual reality scene may also be a scene requested by the virtual reality device according to other actual demands of the user, for example, a live-broadcast scene or a movie-viewing scene.
Accordingly, at least one game interface for user operation is also displayed in the virtual reality scene. For example, when the virtual reality scene is a game scene, the game scene may include a game interface and a three-dimensional game screen for player interaction. In alternative embodiments, the game interface may include game controls (e.g., skill controls, movement controls, functionality controls), indication marks (e.g., direction indication marks, character indication marks), information presentation areas (e.g., number of clicks, time of play), or game setting controls (e.g., system settings, store, medals). In an optional embodiment, the game screen is the display screen corresponding to the virtual scene displayed by the terminal device, and the three-dimensional game screen may include controlled virtual objects, such as game characters, NPC characters, and AI characters, that execute the game logic in the virtual reality scene.
Here, the controlled virtual object refers to a dynamic object that can be controlled in the virtual reality scene. Optionally, the dynamic object may be a virtual character, a virtual animal, a cartoon character, or the like. The controlled virtual object is a character that a player controls through an input device, an Artificial Intelligence (AI) set, through training, to fight in the virtual environment, or a Non-Player Character (NPC) set to fight in the virtual reality scene. Optionally, the controlled virtual object is a virtual character competing in the virtual reality scene. Optionally, the number of controlled virtual objects in a virtual reality scene battle is preset or dynamically determined according to the number of clients joining the battle, which is not limited by the embodiments of the present disclosure. In one possible implementation, a user can control a controlled virtual object to move in the virtual reality scene, e.g., to run, jump, or crawl, and can also control it to fight other controlled virtual objects using the skills, game props, and the like provided by the application.
As shown in fig. 1, the human-machine gesture interaction method includes the following steps:
Step S101, displaying a first game interface in the graphical user interface, wherein at least part of game scenes are displayed in the first game interface, and the game scenes comprise target game props and controlled virtual objects controlled by the head display equipment.
It should be noted that the graphical user interface is an interface provided by the head display device for the target game, in which a virtual reality scene (e.g., a game scene of the target game, a game candidate scene, etc.) may be presented. Specifically, when the player user selects the target game Z, a first game interface of the target game Z may be displayed in the graphical user interface in response to the selection operation.
Here, at least part of a game scene of the target game Z may be presented in the first game interface, and the game scene may include the target game prop and the controlled virtual object controlled by the head display device. Game props are important tools for increasing the interest of game play: in some games, a player user may control a player character to pick up props in the game scene or purchase game props; typically, in shooting games, a player user may control a player character to search the game scene for game props or to pick up game props dropped by defeated hostile player characters. Game props acquired by a player character are usually housed in a prop storage space (e.g., a virtual backpack or virtual warehouse) of the controlled virtual object controlled by the player user; for example, game props picked up from the game scene, purchased from a game store, or awarded by the system may all be placed in the prop storage space. The prop storage space includes at least a prop storage area, which may include a plurality of storage locations, each of which may store one game prop.
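As an illustrative aside (not part of the original disclosure), the prop storage area just described can be modeled as a fixed set of storage locations, each holding at most one game prop. The following minimal Python sketch is hypothetical; names such as PropStorage and store are illustrative assumptions, not taken from the application:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GameProp:
    name: str  # e.g. "long gun"

@dataclass
class PropStorage:
    """Prop storage area: a fixed number of storage locations, one prop each."""
    capacity: int
    slots: List[Optional[GameProp]] = field(default_factory=list)

    def __post_init__(self):
        self.slots = [None] * self.capacity

    def store(self, prop: GameProp) -> bool:
        """Place the prop in the first free storage location, if any."""
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = prop
                return True
        return False  # storage area is full
```

For example, PropStorage(capacity=8).store(GameProp("long gun")) would occupy the first free slot of a hypothetical eight-slot virtual backpack.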
FIG. 2 shows a schematic diagram of a graphical user interface provided by an embodiment of the present application. Referring to FIG. 2, a graphical user interface of a target game provided by a head-mounted device is shown in FIG. 2; in the graphical user interface, a first game interface a is presented, wherein at least part of a game scene is presented in the first game interface a, and the first game interface a can comprise a target game prop M and a controlled virtual object I controlled by the head display device.
Step S102, in response to a first gesture instruction for the target game prop, controlling dragging of the target game prop in the first game interface.
Here, the player user may operate the target game prop to control it. In the examples of the present application, the operation is issued as a gesture instruction: the player user may operate the target game prop using a one-hand gesture to generate a first gesture instruction for the target game prop. For example, the player user picks up the target game prop in the first game interface using a single-hand pinch gesture; a first gesture instruction is generated based on the hand gesture of pinching the target game prop with one hand, and then, in response to the first gesture instruction for the target game prop, the target game prop is controlled to be dragged in the first game interface, following the movement of the hand gesture.
In addition, regarding step S102, in the examples of the present application, while the target game prop is being dragged in the first game interface, the game view angle of the controlled virtual object in the game scene may be further controlled and adjusted according to the real-time position change of the target game prop in the first game interface. For example, when the hand gesture shakes left and right after the target game prop is picked up, the game view angle of the controlled virtual object keeps pace with the moving direction of the hand gesture holding the target game prop.
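A minimal sketch of how this drag-and-follow behavior of step S102 might look, assuming per-frame hand-tracking input; the vector type, the SENSITIVITY constant, and all function names are assumptions for illustration:

```python
from dataclasses import dataclass

SENSITIVITY = 0.1  # assumed degrees of view rotation per unit of drag

@dataclass
class Vec2:
    x: float
    y: float
    def __sub__(self, other): return Vec2(self.x - other.x, self.y - other.y)

@dataclass
class Camera:
    yaw: float = 0.0
    pitch: float = 0.0

@dataclass
class DraggedProp:
    position: Vec2
    last_position: Vec2

def on_drag_update(hand_pos: Vec2, prop: DraggedProp, camera: Camera) -> None:
    """Per-frame update while the first gesture instruction holds the prop."""
    prop.position = hand_pos                    # the prop follows the hand gesture
    delta = prop.position - prop.last_position  # real-time position change
    camera.yaw += delta.x * SENSITIVITY         # the game view angle keeps pace
    camera.pitch -= delta.y * SENSITIVITY
    prop.last_position = prop.position
```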
Step S103, when the target game prop is dragged to the edge of the first game interface, in response to a second gesture instruction for the target game prop, determining the hand gesture corresponding to the second gesture instruction.
Here, trigger points on the edge of the first game interface may be preset. The edge trigger points may be all points on the border line of the first game interface, or specific points at a certain distance from the border line; accordingly, the edge of the first game interface may be the set of all points on its border line, or the set of such specific points at a distance from the border line.
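For illustration only, an edge-trigger check consistent with this description might look like the sketch below, where margin = 0 treats the border line itself as the trigger set and margin > 0 treats points at that distance inside the border as the trigger set (all names are assumptions):

```python
def reached_edge(prop_pos, interface_rect, margin: float = 0.0) -> bool:
    """True when the dragged prop reaches an edge trigger point of the interface.

    prop_pos is an (x, y) pair; interface_rect is (left, top, right, bottom).
    """
    left, top, right, bottom = interface_rect
    return (prop_pos[0] <= left + margin or prop_pos[0] >= right - margin or
            prop_pos[1] <= top + margin or prop_pos[1] >= bottom - margin)
```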
As an example, the hand gesture may be a gesture formed by the shape of the user's palm and the positions of the fingers.
Step S104, responding to the hand gesture as a target hand gesture, and controlling the controlled virtual object to execute a game interaction event corresponding to the target hand gesture in the game scene.
As an example, the target hand gesture may include a combination of one or more of the following: a single-hand pinch gesture, a single-hand click gesture, and a two-hand box gesture.
It should be noted that, because one of the technical problems solved by the present application is misoperation between game interfaces, the present application uses points on the edges of the game interfaces as boundaries to trigger and distinguish the game interaction events corresponding to different target hand gestures. Specifically, when the target game prop is dragged to the edge of the first game interface, in response to the hand gesture corresponding to the second gesture instruction being a target hand gesture, the controlled virtual object may be controlled to execute the corresponding game interaction event in the game scene.
Specifically, to support convenient, all-around interaction in the virtual reality scene presented by the head display device, a plurality of game interfaces are usually displayed in the virtual reality scene, together with convenient interaction controls, such as the various function panels evoked by function controls on any game interface.
According to the method and the device, corresponding target hand gestures can be defined for various interactive operations supported by the virtual reality scene in advance according to the suitability between gesture operation habits of the player user and interface interactive operations, so that game interactive operations in the virtual reality scene can be accurately triggered by capturing the target hand gestures of the player user in the virtual reality scene.
In this step, executing a game interaction event corresponding to the target hand gesture requires that two conditions both hold: the target hand gesture is detected, and the target hand gesture is detected to have moved to an edge trigger point. In other words, once the target hand gesture is detected and the target game prop is detected to have been dragged to a trigger point on the edge of the first game interface, the game interaction event corresponding to the target hand gesture may be triggered.
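A hypothetical sketch of this two-condition trigger, reusing the reached_edge check from the earlier sketch; the handlers mapping and all names are assumptions:

```python
def maybe_trigger(gesture, prop_pos, interface_rect, handlers: dict) -> None:
    """Fire an interaction event only when BOTH conditions hold:
    (1) a target hand gesture is recognized, and
    (2) the dragged prop has reached an edge trigger point."""
    if gesture in handlers and reached_edge(prop_pos, interface_rect):
        handlers[gesture]()  # e.g. pin to edge, cross-interface drag, mirror display
```

A caller would register one callback per target hand gesture, so that each gesture maps to exactly one game interaction event at the interface edge.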
With regard to step S104, as a possible implementation manner, when the target hand gesture is a first type of hand gesture and the target game prop has been dragged to the edge of the first game interface, in response to the hand gesture being the first type of hand gesture, the target game prop is continuously displayed in a designated area at the edge of the first game interface according to the second gesture instruction, and the game view angle of the controlled virtual object in the game scene is controlled and adjusted according to the real-time position change of the target game prop in the designated area. Here, the designated area may be all or part of the first game interface.
As an example, the first type of hand gesture may be a preset single-hand pinch gesture. Referring to fig. 3, the first type of hand gesture may be the thumb and index finger of one hand pinched naturally, with the remaining three fingers naturally curled; referring to fig. 4, the first type of hand gesture may also be the thumb and index finger pinched naturally, with the remaining three fingers held in a fist. It should be understood that these two examples are merely for reference, and the first type of hand gesture in the present application is not limited to them. For example, the first type of hand gesture may also be the thumb and index finger pinched naturally, the remaining three fingers naturally placed, and the whole hand kept generally perpendicular to the first game interface. The present application does not limit the specific form of the first type of hand gesture; it may be any preset hand gesture other than the two examples above.
According to the above, when the single-hand pinch gesture of the user in the virtual reality scene needs to be detected, the posture information of the thumb, index finger, middle finger, ring finger, and little finger in the single-hand model can be detected directly. From this it can be judged whether the single-hand model is performing a single-hand pinch gesture, and the specific pinch gesture can be determined by distinguishing the posture information of each finger and of the palm as a whole.
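By way of illustration, classifying a pinch gesture from per-finger posture information could be sketched as follows; the finger-state vocabulary and the gesture templates are assumptions, not the application's actual recognition model:

```python
# Assumed finger states from the hand-tracking model:
# "pinched" = fingertip touching the thumb; others: "curled", "fist", "extended".
GESTURE_TEMPLATES = {
    "first_type":  {"index": "pinched", "middle": "curled",
                    "ring": "curled", "little": "curled"},
    "second_type": {"index": "pinched", "middle": "pinched",
                    "ring": "extended", "little": "extended"},
}

def classify_pinch(finger_states: dict):
    """Match the tracked finger posture information against each preset
    single-hand pinch template; return the gesture label or None."""
    for label, template in GESTURE_TEMPLATES.items():
        if finger_states == template:
            return label
    return None
```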
Fig. 5 illustrates a first example of human-machine gesture interaction provided by an embodiment of the present application. As shown in fig. 5, the user wears the head display device and visually enters the virtual reality scene presented by the graphical user interface of the target game, in which different game interfaces are displayed. As in the foregoing example, assume the virtual reality scene is a game scene of a combat game in which a first game interface C and a second game interface D are displayed. The second game interface D presents the prop storage area of a virtual backpack or virtual warehouse (i.e., the prop storage space) of the controlled virtual object controlled by the player user; each storage location in the prop storage area can store a game prop usable by the controlled virtual object, and the player can select a game prop stored at a storage location presented in the second game interface D so that it is displayed on the first game interface C for use.
Assume that the game prop 'long gun' in fig. 5 is the target game prop. When the player user, controlling the controlled virtual object (present but not directly shown in fig. 5), picks up the target game prop 'long gun' on the first game interface C through the hand gesture shown in fig. 5, a first gesture instruction may be generated, and dragging of the target game prop 'long gun' in the first game interface C is controlled in response. Assume the edge of the first game interface consists of the points on its border line (visible or invisible). If, during the dragging, the dragged target game prop 'long gun' is detected to touch the edge of the first game interface, a second gesture instruction is generated; when the hand gesture corresponding to the second gesture instruction is determined to be the first type of hand gesture, the target game prop is continuously displayed in the designated area at the edge of the first game interface according to the second gesture instruction, preventing the target game prop 'long gun' from crossing over to other game interfaces (for example, the second game interface D). At the same time, the game view angle of the controlled virtual object in the game scene is controlled and adjusted according to the real-time position change of the target game prop in the designated area. In this way, the player user can be restricted to operating only within the designated game interface.
On the other hand, considering that cross-interface operations over long distances are prone to misoperation, the present application provides a way to support interaction between different game interfaces when such interaction is genuinely required.
With respect to step S104, as a possible implementation manner, when the graphical user interface further includes a second game interface, the target hand gesture is a second type of hand gesture, and the target game prop has been dragged to the edge of the first game interface, the target game prop may be dragged from the edge of the first game interface into the second game interface according to the second gesture instruction, in response to the hand gesture being the second type of hand gesture.
As an example, the second type of hand gesture may also be a preset single-hand pinch gesture. Referring to fig. 6, the second type of hand gesture may be the thumb, index finger, and middle finger of one hand pinched naturally, with the remaining two fingers naturally extended; referring to fig. 7, the second type of hand gesture may be the thumb and index finger pinched naturally, with the remaining three fingers held in a fist. It should be understood that these two examples are merely for reference, and the second type of hand gesture in the present application is not limited to them. For example, the second type of hand gesture may also be the thumb and index finger pinched naturally, the remaining three fingers naturally placed, and the whole hand kept level with the first game interface. The present application does not limit the specific form of the second type of hand gesture; it may be any preset hand gesture other than the two examples above.
Further, with respect to step S104, as a possible implementation, after the target game prop is dragged to the second game interface, in response to the end of the second gesture instruction triggered by a change in the second type of hand gesture, the controlled virtual object is controlled to execute a game interaction event corresponding to the target game prop and the second game interface. Assuming the second type of hand gesture is the thumb and index finger pinched naturally with the remaining three fingers held in a fist, the end of the second gesture instruction means that the player user's thumb and index finger are detected to no longer be pinched.
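A sketch of how the end of the second gesture instruction (pinch release) might be detected from fingertip positions; the 3D point representation and the release-distance threshold are assumptions:

```python
def pinch_released(prev_pinching: bool, thumb_tip, index_tip,
                   release_dist: float = 0.02) -> bool:
    """End of the second gesture instruction: the thumb and index finger,
    previously pinched, are detected to have separated beyond a threshold.

    thumb_tip and index_tip are (x, y, z) fingertip positions in meters."""
    dx, dy, dz = (thumb_tip[i] - index_tip[i] for i in range(3))
    separated = (dx * dx + dy * dy + dz * dz) ** 0.5 > release_dist
    return prev_pinching and separated
```

Requiring a previous pinching state before declaring a release is one way to avoid spuriously ending the instruction on a single noisy tracking frame.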
As an example, when the second game interface is a prop container interface, the step of controlling the controlled virtual object to execute a game interaction event corresponding to the target game prop and the second game interface in response to the end of the second gesture instruction may be implemented as: in response to the end of the second gesture instruction, controlling the controlled virtual object to store the target game prop into the prop storage space displayed by the prop container interface.
Fig. 8 illustrates a second example of human-machine gesture interaction provided by an embodiment of the present application. As shown in fig. 8, the user wears the head display device and visually enters the virtual reality scene presented by the graphical user interface of the target game, in which different game interfaces are displayed. As in the foregoing example, assume the virtual reality scene is a game scene of a combat game in which a first game interface E and a second game interface F are displayed, where the second game interface is a prop container interface. The prop storage space (i.e., virtual backpack or virtual warehouse) of the controlled virtual object controlled by the player in the second game interface F includes a prop storage area, and each storage location in the prop storage area can store a game prop usable by the controlled virtual object. Game props can be displayed, arranged at the storage locations presented in the second game interface F, and the player can select a game prop to be used from the prop storage area so that it is displayed on the first game interface for use by the controlled virtual object.
Assume that the game prop 'long gun' in fig. 8 is the target game prop. When the player user, controlling the controlled virtual object (present but not directly shown in fig. 8) with the hand gesture shown in fig. 8, wants to pick up the target game prop 'long gun' on the first game interface E and place it into the prop storage area, a second type of hand gesture may be used. Specifically, when the player selects the target game prop 'long gun' on the first game interface E through the second type of hand gesture shown in fig. 8, a first gesture instruction may be generated in response to detecting the selection operation; the target game prop 'long gun' is then controlled to be dragged in the first game interface E in response to the first gesture instruction. When the target game prop is dragged to the edge of the first game interface, a second gesture instruction is generated, and when the hand gesture corresponding to the second gesture instruction is determined to be the second type of hand gesture, the target game prop is dragged from the edge of the first game interface E into the second game interface F according to the second gesture instruction, so that it can be placed at a storage location of the prop storage area of the virtual backpack. This gesture operation mode allows cross-screen interaction between different game interfaces, improving the user experience.
For example, in the example shown in fig. 8, after the target game prop 'long gun' is dragged, using the second type of hand gesture, to the target storage location of the prop storage area displayed in the second game interface F, the end of the second type of hand gesture is detected. Assuming the second type of hand gesture is the thumb and index finger pinched naturally with the remaining three fingers held in a fist, its end means that the player user's thumb and index finger are no longer pinched; at that moment the target game prop 'long gun' may be placed into the storage location.
On the other hand, when the operable range of a game interface is large, dragging a target game prop from one position of the interface to another runs into the pain point of an overly long movement distance. If the same hand gesture must be held over a long distance, the gesture may become invalid because the hand relaxes, or the target may not be reached accurately. When the distance between the operation position and the target position of the game interface meets a preset condition, for example, the distance between the two positions is greater than a preset threshold, there are more obstacles to control during dragging and the hand gesture is harder to operate. To solve this problem, a mapping relationship between a target hand gesture and a mirror-display interaction event may be preset.
With respect to step S104, as a possible implementation, when the target hand gesture is a third type of hand gesture, in a case where the target game prop is dragged to an edge of the first game interface, the following steps may be performed:
First, in response to the hand gesture being a third type of hand gesture, an operation area and a mirror image area are determined in the first game interface according to the second gesture instruction, where the mirror image area and the operation area are displayed at relative positions in the first game interface.
Here, the relative display arrangement may, for example, make the mirror image area and the operation area axisymmetric, or centrosymmetric; however, it should be understood that the relative display arrangement of the mirror image area and the operation area in the first game interface is not limited to these examples and may be any preset relative position.
Further, as an example, the third type of hand gesture may also be a preset single-hand pinch gesture. Referring to fig. 9, the third type of hand gesture may be the thumb, index finger, middle finger, and ring finger of one hand pinched naturally, with the little finger naturally extended. Referring to fig. 10, the third type of hand gesture may be the thumb and index finger pinched naturally, with the remaining three fingers lifted up. Referring to fig. 11, the third type of hand gesture may be all fingers of one hand pinched naturally. It should be understood that these three examples are merely for reference, and the third type of hand gesture in the present application is not limited to them. For example, the third type of hand gesture may also be the thumb and index finger pinched naturally, the remaining three fingers naturally placed, and the whole hand kept level with the first game interface. The present application does not limit the specific form of the third type of hand gesture; it may be any preset hand gesture other than the three examples above, for example, a two-hand box-selection gesture.
Then, in response to the target game prop being dragged to the operation area, at least a mirror image identifier corresponding to the target game prop is displayed in the mirror image area, and the second real-time display position of the mirror image identifier in the mirror image area is controlled and adjusted according to the first real-time display position of the target game prop in the operation area.
Further, after the mirror image identifier is displayed in the mirror image area in step S104, as a possible implementation manner, in response to the end of the second gesture instruction, the controlled virtual object may be further controlled to execute, in the game scene, a game interaction event corresponding to the first game interface and the target game prop according to the end position of the mirror image identifier in the mirror image area.
For example, when teammate identifiers are included in the mirror image area, in response to the end of the second gesture instruction, a target teammate is determined according to the teammate identifier corresponding to the end position of the mirror image identifier in the mirror image area, and the controlled virtual object is controlled to send information reminding the target teammate to pick up the target game prop.
An example of mirror interactions will be described below with reference to fig. 12:
FIG. 12 illustrates a third example of human-machine gesture interaction provided by an embodiment of the present application. As shown in fig. 12, the user wears the head display device and visually enters the virtual reality scene presented by the graphical user interface of the target game. As in the previous examples, assume the virtual reality scene is a game scene of a combat game in which a first game interface G is displayed. When the player wants to drag the target game prop 'long gun' on the first game interface G from its current position (the lower right corner) to the upper left corner of the first game interface G, and the first game interface G covers a very large range, the distance from the lower right to the upper left corner is far, so the player must maintain the same hand posture for a long time; the operation fails as soon as the player cannot hold that posture all the way to the desired area.
In order to solve the above problem, in the present application, remote operation can be accomplished by mirroring. For example, as shown in fig. 12, when the controlled virtual object is to drag the target game prop 'long gun' on the first game interface G from its current position to a target position to implement a corresponding game interaction event, the target game prop 'long gun' may be selected using a third type of hand gesture, and a first gesture instruction is generated in response to detecting the selection operation. Then, in response to the first gesture instruction, the target game prop 'long gun' is controlled to be dragged in the first game interface G. When the target game prop 'long gun' is dragged to the edge of the first game interface, a second gesture instruction is generated; when the hand gesture corresponding to the second gesture instruction is determined to be the third type of hand gesture, an operation area a and a mirror image area b are determined in the first game interface G according to the second gesture instruction. As shown in fig. 12, the mirror image area b and the operation area a may be arranged axisymmetrically in the first game interface G. When the target game prop 'long gun' is dragged to the operation area a, at least a mirror image identifier corresponding to the target game prop 'long gun' is displayed in the mirror image area b.
Then, the second real-time display position of the mirror image identifier in the mirror image area b is controlled and adjusted according to the first real-time display position of the target game prop 'long gun' in the operation area a. Specifically, proportionality coefficients for the relative distance, direction, and the like between the first and second real-time display positions may be preset, and the mapping between the two positions is obtained from these coefficients, so that the second real-time display position can be derived once the first real-time display position is determined. For example, when the target game prop 'long gun' moves upward by y1 in the operation area a, its mirror image correspondingly moves upward by y2 in the mirror image area b, so that fine adjustments of the target game prop 'long gun' in the operation area a are reflected in the mirror image area b.
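For illustration, the position mapping between operation area a and mirror image area b described above can be written directly; the coefficient k and all names are assumptions:

```python
def mirror_position(op_pos, op_origin, mirror_origin, k: float = 1.0):
    """Map the prop's first real-time display position in operation area a to
    the second real-time display position of its mirror identifier in mirror
    area b, using a preset proportionality coefficient k (so y2 = k * y1)."""
    dx = (op_pos[0] - op_origin[0]) * k
    dy = (op_pos[1] - op_origin[1]) * k
    return (mirror_origin[0] + dx, mirror_origin[1] + dy)
```

With k < 1, small hand movements in area a produce even finer movements of the mirror identifier in area b; with k > 1, they are amplified, which is one plausible way to tune remote-operation precision.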
In the example of fig. 12, a user list including teammate identifiers is displayed in the mirror image area b. When the player user observes, through the mirror image identifier displayed in the mirror image area b, that the target game prop 'long gun' has been dragged to the desired position (for example, a certain teammate identifier), the player user stops pinching (ending the third type of hand gesture), which triggers the end of the second gesture instruction. In response, the target teammate is determined, and information prompting the target teammate to pick up the target game prop 'long gun' is sent to that teammate. Accordingly, the target teammate may pick up the target game prop 'long gun' after receiving the prompt.
By the mode, the pain point of the remote operation of the player user can be overcome, and the operation experience of the user is improved.
Furthermore, as a possible implementation, the positioning cursor of the hand gesture may remain within the operation area in response to the end of the second gesture instruction. For example, when the player user changes from a pinch gesture to no longer pinching, the positioning cursor may return to the original operating position of the pinch gesture; in this way, the original operating position is preserved.
On the other hand, before using the head display device, the player user needs to learn the correspondence between each target hand gesture and its game interaction event, which carries a certain learning cost; how to reduce this cost so that the player user can quickly learn to use the head display device is therefore worth addressing.
Specifically, with respect to step S104, as one possible implementation, when the target hand gesture is a fourth type of hand gesture and the target game prop is dragged to the edge of the first game interface, first, a presentation area is determined within the first game interface in response to the hand gesture being the fourth type of hand gesture; then, a prompt identifier corresponding to at least one target hand gesture is displayed in the presentation area, where the prompt identifier indicates the game interaction event corresponding to that target hand gesture.
As an example, the fourth type of hand gesture may be a preset hand gesture, which may be a fixed predetermined gesture or a new hand gesture superimposed on top of an original hand gesture, for example a finger-pointing gesture or a single-hand pinch gesture. In a specific implementation, in response to the hand gesture being the fourth type of hand gesture, at least one prompt identifier may be displayed in the presentation area for a predetermined period, and the display stopped once the duration elapses. In this way, the player user learns the correspondence between target hand gestures and game interaction events while directly using them, which reduces the learning cost.
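A hypothetical sketch of this timed prompt display; the gesture-to-event mapping, the display-area API, and the three-second duration are all assumptions for illustration:

```python
import time

PROMPTS = {  # hypothetical mapping of target hand gestures to event names
    "gesture_o": "no cross-screen",
    "gesture_p": "allow cross-screen",
    "gesture_q": "mirror display",
}

def show_prompts(display_area, duration_s: float = 3.0) -> None:
    """Show the prompt identifiers in the presentation area for a
    predetermined period, then stop displaying them."""
    display_area.show(PROMPTS)   # display_area is an assumed UI handle
    time.sleep(duration_s)       # a real engine would use a timer callback
    display_area.hide()
```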
Fig. 13 shows a fourth example of human-machine gesture interaction provided by an embodiment of the present application. As shown in fig. 13, the user wears the head display device and visually enters the virtual reality scene presented by the graphical user interface of the target game. When the player user picks up the target game prop 'long gun' using the hand gesture shown in fig. 13, a first gesture instruction is generated to control dragging of the target game prop 'long gun' in the first game interface N. Assume the edge of the first game interface N consists of the points on its border line (visible or invisible). If, during the dragging of the target game prop 'long gun', the hand gesture is detected to touch the edge of the first game interface, a second gesture instruction is generated, and it is determined whether the current hand gesture is a target hand gesture. If the current gesture is determined to be the fourth type of hand gesture, a presentation area V is determined in the first game interface N; then, a prompt identifier corresponding to at least one target hand gesture is displayed in the presentation area V, where the prompt identifier indicates the game interaction event corresponding to that target hand gesture. For example, in fig. 13, event 1 (i.e., a game interaction event) corresponding to hand gesture o may be 'no cross-screen'; event 2 corresponding to hand gesture p may be 'allow cross-screen'; event 3 corresponding to hand gesture q may be 'mirror display'. Here, the event description may be a name summarizing the function implemented; additionally, in the present application, when the player user touches the image of an event, a detailed description of that event may be displayed, for example, in the form of a bubble anywhere around the prompt identifier.
Based on the same application conception, the embodiment of the application further provides a human-computer gesture interaction device corresponding to the human-computer gesture interaction method provided by the above embodiment. Since the principle by which the device solves the problem is similar to that of the method, the implementation of the device can refer to the implementation of the method, and repeated description is omitted.
Fig. 14 is a schematic structural diagram of a human-machine gesture interaction device 800 according to an embodiment of the present application, as shown in fig. 14, the human-machine gesture interaction device 800 includes:
A display module 810 for displaying a first game interface in the graphical user interface, wherein at least part of a game scene is presented in the first game interface, and the game scene comprises a target game prop and a controlled virtual object controlled by the head display device;
a drag module 820, configured to control dragging the target game prop in the first game interface in response to a first gesture instruction for the target game prop;
A determining module 830, configured to, when the target game prop is dragged to the edge of the first game interface, respond to a second gesture instruction for the target game prop and determine a hand gesture corresponding to the second gesture instruction;
And an execution module 840, responsive to the hand gesture being a target hand gesture, for controlling the controlled virtual object to execute a game interaction event corresponding to the target hand gesture in the game scene.
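For readability, the cooperation of these four modules may be sketched as follows in Python; every class, method, and handler name below is an illustrative assumption rather than the actual implementation of device 800.

class GestureInteractionDevice:
    # Illustrative counterpart of device 800: the drag method plays the
    # role of drag module 820, on_second_gesture that of determining
    # module 830, and the registered handlers that of execution module 840.

    def __init__(self):
        self.handlers = {}        # gesture type -> game interaction event
        self.edge_reached = False

    def register(self, gesture_type, handler):
        self.handlers[gesture_type] = handler

    def drag(self, prop, at_edge):
        print(f"dragging {prop} in the first game interface")
        self.edge_reached = at_edge

    def on_second_gesture(self, prop, gesture_type):
        if not self.edge_reached:
            return                # second gesture only matters at the edge
        handler = self.handlers.get(gesture_type)
        if handler:
            handler(prop)         # controlled virtual object executes event


device = GestureInteractionDevice()
device.register("second_type", lambda p: print(f"{p} enters the second game interface"))
device.drag("long gun", at_edge=True)
device.on_second_gesture("long gun", "second_type")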
In one possible implementation, when the target hand gesture is a first type of hand gesture, the executing module 840 is configured to:
and responding to the hand gestures as the first type of hand gestures, continuously displaying the target game props in a designated area at the edge of the first game interface according to the second gesture instructions, and controlling and adjusting the game view angles of the controlled virtual objects in the game scene according to the real-time position changes of the target game props in the designated area.
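As a rough illustration of how the real-time position change in the designated area could steer the game view angle, consider the sketch below; the sensitivity factor, the pitch clamp, and the coordinate convention are assumptions, not something the present application fixes.

SENSITIVITY = 0.4  # assumed degrees of view rotation per unit of offset


def adjust_view_angle(camera, anchor, prop_pos):
    # Map the prop's real-time offset inside the designated edge area to
    # a change of the controlled virtual object's game view angle.
    dx = prop_pos[0] - anchor[0]
    dy = prop_pos[1] - anchor[1]
    camera["yaw"] = (camera["yaw"] + dx * SENSITIVITY) % 360.0
    camera["pitch"] = max(-89.0, min(89.0, camera["pitch"] - dy * SENSITIVITY))
    return camera


camera = {"yaw": 0.0, "pitch": 0.0}
print(adjust_view_angle(camera, anchor=(960, 540), prop_pos=(1000, 520)))
# {'yaw': 16.0, 'pitch': 8.0}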
In a possible implementation manner, the graphical user interface further includes a second game interface, where the executing module 840 is configured to, when the target hand gesture is a second type of hand gesture:
And responding to the hand gestures as the second type of hand gestures, and dragging the target game prop from the edge of the first game interface to enter the second game interface according to the second gesture instruction.
In one possible implementation, the human-machine gesture interaction device 800 further includes a first release module, configured to: after the target game prop is dragged to the second game interface, respond to the ending of the second gesture instruction, and control the controlled virtual object to execute a game interaction event corresponding to the target game prop and the second game interface, wherein the ending of the second gesture instruction is triggered by a change of the second type of hand gesture.
In one possible implementation, when the second game interface is a prop container interface, the executing module 840 is configured to:
And responding to the second gesture instruction, and controlling the controlled virtual object to store the target game prop into the prop storage space displayed by the prop container interface.
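A minimal sketch of this storage step follows, assuming a simple list-backed prop storage space with a capacity limit; the capacity value and the function name are assumptions.

def store_prop(storage, prop, capacity=20):
    # Store the target game prop into the prop storage space displayed
    # by the prop container interface; refuse when the space is full.
    if len(storage) >= capacity:
        return False
    storage.append(prop)
    return True


backpack = []
if store_prop(backpack, "long gun"):
    print(backpack)  # ['long gun']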
In one possible implementation, when the target hand gesture is a third type of hand gesture, the executing module 840 is configured to:
responding to the hand gestures as a third type of hand gestures, and determining an operation area and a mirror image area in the first game interface according to the second gesture instructions, wherein the mirror image area and the operation area are relatively displayed in the first game interface;
and responding to the dragging of the target game prop to the operation area, displaying at least a mirror image mark corresponding to the target game prop in the mirror image area, and controlling and adjusting a second real-time display position of the mirror image mark in the mirror image area according to a first real-time display position of the target game prop in the operation area.
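The position mapping between the operation area and the mirror area can be illustrated as follows, assuming both areas are axis-aligned rectangles and the mapping is proportional; the present application does not fix any particular mapping.

def mirror_position(op_area, mir_area, prop_pos):
    # Map the first real-time display position inside the operation area
    # to the second real-time display position inside the mirror area.
    ox, oy, ow, oh = op_area      # each area is (x, y, width, height)
    mx, my, mw, mh = mir_area
    u = (prop_pos[0] - ox) / ow   # normalised coordinates in [0, 1]
    v = (prop_pos[1] - oy) / oh
    return (mx + u * mw, my + v * mh)


# The mirror mark tracks the prop at the corresponding relative point:
print(mirror_position((0, 0, 400, 600), (500, 0, 400, 600), (100, 300)))
# (600.0, 300.0)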
In one possible implementation, the human-machine gesture interaction device 800 further includes: a second release module for:
after the mirror image identification is displayed in the mirror image area, responding to the ending of the second gesture instruction, and controlling the controlled virtual object to execute a game interaction event corresponding to the first game interface and the target game prop in the game scene according to the ending position of the mirror image identification in the mirror image area.
In one possible implementation, when the teammate identifier is included in the mirror area, the second releasing module is configured to:
And responding to the ending of the second gesture instruction, determining a target teammate according to the teammate identification corresponding to the ending position of the mirror image identification in the mirror image area, and controlling the controlled virtual object to send information for reminding the target teammate to pick up the target game prop.
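One way, not mandated by the present application, to resolve the target teammate from the ending position is nearest-neighbour matching against the teammate identifications, sketched below; the distance threshold is an assumption.

import math


def pick_target_teammate(teammates, end_pos, max_dist=50.0):
    # teammates maps a teammate identification to its (x, y) position in
    # the mirror area; return the nearest one within max_dist, if any.
    best, best_d = None, max_dist
    for name, (x, y) in teammates.items():
        d = math.hypot(x - end_pos[0], y - end_pos[1])
        if d < best_d:
            best, best_d = name, d
    return best


mates = {"teammate_a": (120, 80), "teammate_b": (300, 200)}
target = pick_target_teammate(mates, end_pos=(115, 90))
if target:
    print(f"reminding {target} to pick up the target game prop")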
In one possible implementation, when the target hand gesture is a fourth type of hand gesture, the executing module 840 is configured to:
Determining a presentation area within the first game interface in response to the hand gesture being a fourth type of hand gesture;
Displaying a prompt identifier corresponding to at least one target hand gesture in the display area; wherein the prompt identification comprises game interaction events corresponding to the target hand gesture.
In one possible implementation, the drag module 820 is configured to:
And controlling and adjusting the game visual angle of the controlled virtual object in the game scene according to the real-time position change of the target game prop in the first game interface.
In one possible implementation, the target hand gesture includes a combination of one or more of the following: a single-hand pinch gesture, a single-hand click gesture, a two-hand box gesture, and a finger pointing gesture.
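Purely for illustration, these target hand gestures could be represented as a small enumeration; nothing in the present application fixes this representation or the names used.

from enum import Enum


class TargetHandGesture(Enum):
    # Illustrative enumeration of the target hand gestures listed above.
    SINGLE_HAND_PINCH = "single-hand pinch"
    SINGLE_HAND_CLICK = "single-hand click"
    TWO_HAND_BOX = "two-hand box"
    FINGER_POINTING = "finger pointing"


print([g.value for g in TargetHandGesture])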
Referring to fig. 15, fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the application. As shown in fig. 15, the electronic device 700 includes: a processor 710, a memory 720 and a bus 730. The memory 720 stores machine-readable instructions executable by the processor 710; when the electronic device is running, the processor 710 and the memory 720 communicate via the bus 730, and the processor 710 executes the machine-readable instructions to perform the human-machine gesture interaction method described above.
In this manner, the problem of user misoperation can be effectively prevented, and the human-computer interaction between game interfaces is smoother.
Based on the same application conception, the embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, performs the following steps:
Displaying a first game interface in the graphical user interface, wherein at least part of game scenes are presented in the first game interface, and the game scenes comprise target game props and controlled virtual objects controlled by the head display equipment;
controlling dragging the target game prop in the first game interface in response to a first gesture instruction for the target game prop;
When the target game prop is dragged to the edge of the first game interface, responding to a second gesture instruction aiming at the target game prop, and determining a hand gesture corresponding to the second gesture instruction;
and controlling the controlled virtual object to execute a game interaction event corresponding to the target hand gesture in the game scene in response to the hand gesture being the target hand gesture.
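Taken together, the four steps above amount to the dispatch loop sketched below; the StubUI class, the handler table, and the event encoding are assumptions standing in for the real rendering and gesture-recognition layers.

def run_interaction(gestures, handlers, ui):
    # gestures yields (kind, prop, payload): 'first' carries a drag
    # position, 'second' carries the classified hand gesture type.
    for kind, prop, payload in gestures:
        if kind == "first":
            ui.move_prop(prop, payload)          # steps 1-2: display + drag
        elif kind == "second" and ui.at_edge(prop):
            handler = handlers.get(payload)      # step 3: classify gesture
            if handler:
                handler(prop)                    # step 4: interaction event


class StubUI:
    def __init__(self):
        self.pos = {}

    def move_prop(self, prop, pos):
        self.pos[prop] = pos

    def at_edge(self, prop):
        return self.pos.get(prop, (1, 1))[0] <= 0  # left border as the edge


ui = StubUI()
handlers = {"second_type": lambda p: print(f"{p}: cross-screen event")}
run_interaction(
    [("first", "long gun", (0, 120)), ("second", "long gun", "second_type")],
    handlers,
    ui,
)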
In one possible embodiment, the processor further performs:
and responding to the hand gestures as the first type of hand gestures, continuously displaying the target game props in a designated area at the edge of the first game interface according to the second gesture instructions, and controlling and adjusting the game view angles of the controlled virtual objects in the game scene according to the real-time position changes of the target game props in the designated area.
In one possible embodiment, the processor further performs:
And responding to the hand gestures as the second type of hand gestures, and dragging the target game prop from the edge of the first game interface to enter the second game interface according to the second gesture instruction.
In one possible embodiment, the processor further performs:
And after the target game prop is dragged to the second game interface, responding to the ending of the second gesture instruction, and controlling the controlled virtual object to execute a game interaction event corresponding to the target game prop and the second game interface, wherein the ending of the second gesture instruction is triggered by the change of the second type of hand gesture.
In one possible embodiment, the processor further performs:
And responding to the second gesture instruction, and controlling the controlled virtual object to store the target game prop into the prop storage space displayed by the prop container interface.
In one possible embodiment, the processor further performs:
responding to the hand gestures as a third type of hand gestures, and determining an operation area and a mirror image area in the first game interface according to the second gesture instructions, wherein the mirror image area and the operation area are relatively displayed in the first game interface;
and responding to the dragging of the target game prop to the operation area, displaying at least a mirror image mark corresponding to the target game prop in the mirror image area, and controlling and adjusting a second real-time display position of the mirror image mark in the mirror image area according to a first real-time display position of the target game prop in the operation area.
In one possible embodiment, the processor further performs:
after the mirror image identification is displayed in the mirror image area, responding to the ending of the second gesture instruction, and controlling the controlled virtual object to execute a game interaction event corresponding to the first game interface and the target game prop in the game scene according to the ending position of the mirror image identification in the mirror image area.
In one possible embodiment, the processor further performs:
And responding to the ending of the second gesture instruction, determining a target teammate according to the teammate identification corresponding to the ending position of the mirror image identification in the mirror image area, and controlling the controlled virtual object to send information for reminding the target teammate to pick up the target game prop.
In one possible embodiment, the processor further performs:
Determining a presentation area within the first game interface in response to the hand gesture being a fourth type of hand gesture;
Displaying a prompt identifier corresponding to at least one target hand gesture in the display area; wherein the prompt identification comprises game interaction events corresponding to the target hand gesture.
In one possible embodiment, the processor further performs:
And controlling and adjusting the game visual angle of the controlled virtual object in the game scene according to the real-time position change of the target game prop in the first game interface.
Specifically, the storage medium may be a general storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is run, the above human-computer gesture interaction method can be executed; by this method, the problem of user misoperation can be effectively prevented, and the human-computer interaction between game interfaces is smoother.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working procedures of the system and apparatus described above may refer to the corresponding procedures in the foregoing method embodiments, and are not described here again. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, devices or units, and may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on this understanding, the technical solution of the present application, in essence or the part contributing to the prior art or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely a specific embodiment of the present application, and the protection scope of the present application is not limited thereto; any person skilled in the art can readily conceive of variations or substitutions within the technical scope disclosed by the present application, which should all be covered by the protection scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (14)

1. A human-machine gesture interaction method, characterized in that a graphical user interface of a target game is provided through a head display device, the gesture interaction method comprising:
Displaying a first game interface in the graphical user interface, wherein at least part of game scenes are presented in the first game interface, and the game scenes comprise target game props and controlled virtual objects controlled by the head display equipment;
controlling dragging the target game prop in the first game interface in response to a first gesture instruction for the target game prop;
When the target game prop is dragged to the edge of the first game interface, responding to a second gesture instruction aiming at the target game prop, and determining a hand gesture corresponding to the second gesture instruction;
and controlling the controlled virtual object to execute a game interaction event corresponding to the target hand gesture in the game scene in response to the hand gesture being the target hand gesture.
2. The human-machine gesture interaction method of claim 1, wherein when the target hand gesture is a first type of hand gesture, the step of controlling the controlled virtual object to perform a game interaction event corresponding to the target hand gesture in the game scene in response to the hand gesture being a target hand gesture comprises:
and responding to the hand gestures as the first type of hand gestures, continuously displaying the target game props in a designated area at the edge of the first game interface according to the second gesture instructions, and controlling and adjusting the game view angles of the controlled virtual objects in the game scene according to the real-time position changes of the target game props in the designated area.
3. The human-machine gesture interaction method of claim 1, wherein the graphical user interface further includes a second game interface, and the step of controlling the controlled virtual object to execute a game interaction event corresponding to the target hand gesture in the game scene in response to the hand gesture being the target hand gesture when the target hand gesture is the second type of hand gesture includes:
And responding to the hand gestures as the second type of hand gestures, and dragging the target game prop from the edge of the first game interface to enter the second game interface according to the second gesture instruction.
4. The human-machine gesture interaction method of claim 3, further comprising:
And after the target game prop is dragged to the second game interface, responding to the ending of the second gesture instruction, and controlling the controlled virtual object to execute a game interaction event corresponding to the target game prop and the second game interface, wherein the ending of the second gesture instruction is triggered by the change of the second type of hand gesture.
5. The human-machine gesture interaction method of claim 4, wherein when the second game interface is a prop container interface, the step of controlling the controlled virtual object to execute a game interaction event corresponding to the target game prop and the second game interface in response to the second gesture instruction ending comprises:
And responding to the second gesture instruction, and controlling the controlled virtual object to store the target game prop into the prop storage space displayed by the prop container interface.
6. The human-machine gesture interaction method of claim 1, wherein when the target hand gesture is a third type of hand gesture, the step of controlling the controlled virtual object to perform a game interaction event corresponding to the target hand gesture in the game scene in response to the hand gesture being the target hand gesture comprises:
responding to the hand gestures as a third type of hand gestures, and determining an operation area and a mirror image area in the first game interface according to the second gesture instructions, wherein the mirror image area and the operation area are relatively displayed in the first game interface;
and responding to the dragging of the target game prop to the operation area, displaying at least a mirror image mark corresponding to the target game prop in the mirror image area, and controlling and adjusting a second real-time display position of the mirror image mark in the mirror image area according to a first real-time display position of the target game prop in the operation area.
7. The human-machine gesture interaction method of claim 6, further comprising:
after the mirror image identification is displayed in the mirror image area, responding to the ending of the second gesture instruction, and controlling the controlled virtual object to execute a game interaction event corresponding to the first game interface and the target game prop in the game scene according to the ending position of the mirror image identification in the mirror image area.
8. The human-machine gesture interaction method of claim 7, wherein when a teammate identification is included in the mirror image area, the step of responding to the ending of the second gesture instruction, and controlling the controlled virtual object to execute a game interaction event corresponding to the first game interface and the target game prop in the game scene according to the ending position of the mirror image identification in the mirror image area, comprises:
And responding to the ending of the second gesture instruction, determining a target teammate according to the teammate identification corresponding to the ending position of the mirror image identification in the mirror image area, and controlling the controlled virtual object to send information for reminding the target teammate to pick up the target game prop.
9. The human-machine gesture interaction method of claim 1, wherein when the target hand gesture is a fourth type of hand gesture, the step of controlling the controlled virtual object to perform a game interaction event corresponding to the target hand gesture in the game scene in response to the hand gesture being the target hand gesture comprises:
Determining a presentation area within the first game interface in response to the hand gesture being a fourth type of hand gesture;
Displaying a prompt identifier corresponding to at least one target hand gesture in the display area; wherein the prompt identification comprises game interaction events corresponding to the target hand gesture.
10. The human-machine gesture interaction method of claim 1, wherein the step of controlling dragging the target game prop in the first game interface in response to a first gesture instruction for the target game prop further comprises:
And controlling and adjusting the game visual angle of the controlled virtual object in the game scene according to the real-time position change of the target game prop in the first game interface.
11. The human-machine gesture interaction method of claim 1, wherein the target hand gesture comprises a combination of one or more of: a single-hand pinch gesture, a single-hand click gesture, a two-hand box gesture, and a finger pointing gesture.
12. A human-machine gesture interaction apparatus, characterized in that a graphical user interface of a target game is provided through a head display device, the gesture interaction apparatus comprising:
The display module is used for displaying a first game interface in the graphical user interface, wherein at least part of game scenes are displayed in the first game interface, and the game scenes comprise target game props and controlled virtual objects controlled by the head display equipment;
a drag module that controls dragging the target game prop in the first game interface in response to a first gesture instruction for the target game prop;
the determining module is used for responding to a second gesture instruction aiming at the target game prop when the target game prop is dragged to the edge of the first game interface, and determining a hand gesture corresponding to the second gesture instruction;
And the execution module is used for responding to the hand gesture as a target hand gesture and controlling the controlled virtual object to execute a game interaction event corresponding to the target hand gesture in the game scene.
13. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the human-machine gesture interaction method of any of claims 1 to 11.
14. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the human gesture interaction method of any of claims 1 to 11.
CN202410096142.6A 2024-01-23 2024-01-23 Man-machine gesture interaction method and device, electronic equipment and storage medium Pending CN118267696A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410096142.6A CN118267696A (en) 2024-01-23 2024-01-23 Man-machine gesture interaction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN118267696A true CN118267696A (en) 2024-07-02

Family

ID=91647876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410096142.6A Pending CN118267696A (en) 2024-01-23 2024-01-23 Man-machine gesture interaction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118267696A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination