CN117101122A - Control method, device, terminal and storage medium for virtual interaction object - Google Patents
Info
- Publication number
- CN117101122A (application CN202210540722.0A)
- Authority
- CN
- China
- Prior art keywords
- virtual
- target
- target operation
- interaction object
- interactive object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/822—Strategy games; Role-playing games
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/6063—Methods for processing data by generating or executing the game program for sound processing
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/80—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
- A63F2300/807—Role playing or strategy games
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The embodiments of this application disclose a control method, device, terminal and storage medium for a virtual interactive object, belonging to the field of human-computer interaction. The method comprises the following steps: displaying a virtual environment picture; in response to receiving a first control voice, determining a first virtual interactive object among at least one virtual interactive object; and controlling the first virtual interactive object to perform a target operation when the first virtual interactive object satisfies the execution condition of the target operation. The method, device, medium and program product flatten the interactive operation: complex condition checks are performed automatically according to the first control voice and the corresponding function is realized, which greatly simplifies the interaction process for complex functions, reduces operation steps and shortens operation time.
Description
Technical Field
The embodiments of this application relate to the technical field of human-computer interaction, and in particular to a control method, device, terminal and storage medium for a virtual interactive object.
Background
Current games, particularly simulation strategy games (SLG), can provide diverse, high-fidelity game functions. However, developers often need to design interaction modes around the operational limitations of the terminal device, finding a balance between interaction operations and game complexity.
The related art optimizes complex interactive operations by means of shortcut keys, prompt messages and the like. For example, a player may customize a lightweight operation interface, moving commonly used operation controls to display positions that are easy to trigger; or the game client gives stage-specific operation prompts in the form of mail, pop-up windows and the like, and assists in completing the complex operation after obtaining the player's touch feedback.
However, under the schemes in the related art, some complex operations still require multiple interaction steps, making the operation cumbersome.
Disclosure of Invention
The embodiments of this application provide a control method, device, terminal and storage medium for a virtual interactive object, which can greatly simplify the interaction process for complex functions, reduce operation steps and shorten operation time.
In one aspect, the present application provides a method for controlling a virtual interactive object, where the method includes:
displaying a virtual environment picture, where the virtual environment picture contains at least one virtual interactive object owned by a target virtual camp, the virtual interactive object being used to execute a corresponding operation in the virtual environment based on a received operation instruction;
in response to receiving a first control voice, determining a first virtual interaction object in at least one virtual interaction object, wherein the first virtual interaction object is a virtual interaction object for executing a target operation indicated by the first control voice;
and controlling the first virtual interactive object to execute the target operation when the first virtual interactive object satisfies the execution condition of the target operation.
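The claimed three-step flow (display the environment, determine the object from the control voice, check the execution condition and execute) can be sketched as follows. This is a minimal illustration under stated assumptions: the class names, operation costs and the toy `parse_intent` helper are invented here, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    name: str
    supported_ops: set = field(default_factory=set)

    def can_execute(self, op: str, camp_state: dict) -> bool:
        # Execution condition (illustrative): the object supports the
        # operation and the camp owns the resources the operation costs.
        cost = {"upgrade": 100, "fight": 50}.get(op, 0)  # invented costs
        return op in self.supported_ops and camp_state["resources"] >= cost

def parse_intent(voice: str):
    # Trivial stand-in for the speech/intent pipeline described later.
    verb, _, noun = voice.partition(" ")
    return verb, noun

def control(objects, control_voice: str, camp_state: dict):
    op, target_name = parse_intent(control_voice)                     # intent recognition
    obj = next((o for o in objects if o.name == target_name), None)   # object selection
    if obj and obj.can_execute(op, camp_state):                       # condition check
        return f"{obj.name} executes {op}"
    return "prompt: execution condition not met"
```

A single voice input such as "upgrade farm" thus replaces the multi-step selection and confirmation interaction the related art requires.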
In another aspect, the present application provides a control apparatus for a virtual interactive object, the apparatus comprising:
the display module is used for displaying a virtual environment picture, wherein the virtual environment picture comprises at least one virtual interaction object owned by a target virtual camp, and the virtual interaction object is used for executing corresponding operation in the virtual environment based on the received operation instruction;
a determining module, configured to determine, in response to receiving a first control voice, a first virtual interaction object of at least one virtual interaction object, where the first virtual interaction object is a virtual interaction object for performing a target operation indicated by the first control voice;
and the control module is used for controlling the first virtual interaction object to execute the target operation under the condition that the first virtual interaction object meets the execution condition of the target operation.
In another aspect, the present application provides a terminal comprising a processor and a memory; the memory stores at least one program, and the at least one program is loaded and executed by the processor to implement the control method of the virtual interactive object described in the above aspect.
In another aspect, embodiments of the present application provide a computer readable storage medium having at least one computer program stored therein, the computer program being loaded and executed by a processor to implement a method for controlling a virtual interactive object as described in the above aspect.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from the computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the control method of the virtual interactive object provided in various alternative implementations of the above aspect.
The technical scheme provided by the embodiment of the application at least comprises the following beneficial effects:
in the embodiments of this application, when the first control voice is received, a first virtual interactive object capable of executing the target operation indicated by the first control voice is determined, and the execution-condition judgment logic is triggered. Whether the first virtual interactive object satisfies the execution condition of the target operation is determined in combination with the current virtual environment, and if it does, the triggering process of the target operation is simulated. This flattens the interactive operation: complex condition checks are performed automatically according to the first control voice and the corresponding function is realized, which greatly simplifies the interaction process for complex functions, reduces operation steps and shortens operation time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 is a flow chart of a method of controlling a virtual interactive object provided by an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a first virtual interactive object performing a target operation based on a first control voice provided by an exemplary embodiment of the present application;
FIG. 4 is a flow chart of a method for controlling a virtual interactive object provided by another exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a first virtual interactive object performing a target operation based on a first control voice according to another exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of a first virtual interactive object performing a target operation based on a first control voice provided by another exemplary embodiment of the present application;
FIG. 7 is a flowchart of a method for controlling a virtual interactive object provided by another exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of a speech recognition process provided by an exemplary embodiment of the present application;
FIG. 9 is a training schematic of a speech recognition model provided in accordance with an exemplary embodiment of the present application;
FIG. 10 is a training schematic of a speech translation model provided in accordance with an exemplary embodiment of the present application;
FIG. 11 is a training schematic of an intent recognition model provided in accordance with an exemplary embodiment of the present application;
FIG. 12 is a flowchart of intent recognition provided by an exemplary embodiment of the present application;
FIG. 13 is a flowchart of a method for controlling a virtual interactive object provided by another exemplary embodiment of the present application;
FIG. 14 is a schematic diagram of a setup customization operation provided by an exemplary embodiment of the present application;
FIG. 15 is a block diagram of a control device for virtual interactive objects provided in an exemplary embodiment of the present application;
fig. 16 is a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
The related art generally optimizes complex interactive operations using shortcuts, prompts and the like. For example, a player may customize a lightweight operation interface and move commonly used operation controls to easy-to-trigger display positions; however, this approach is limited by the touch constraints of terminal interaction, and more complicated operations still require multiple interaction steps. Alternatively, the game client gives stage-specific operation prompts in the form of mail, pop-up windows and the like and, after obtaining the player's touch feedback, assists in completing the complex operation; however, this approach cannot support users' continuous operation requirements, and the predefined operation contents cannot satisfy precise, complex operation requirements.
Referring to FIG. 1, a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application is shown. The implementation environment comprises the following steps: a first terminal 120, a server 140, and a second terminal 160.
The first terminal 120 installs and runs an application supporting a virtual environment. The application may be any one of a virtual reality application, an SLG game, a three-dimensional map program, a multiplayer online tactical competition (Multiplayer Online Battle Arena, MOBA) game. The first terminal 120 is a terminal used by a first user, and the first user uses the first terminal 120 to control a first virtual interactive object located in a virtual environment to perform operations including, but not limited to: at least one of obtaining revenue, upgrading, fight, switching equipment, building. Illustratively, the first virtual interactive object is a first virtual character, such as a simulated character object or a cartoon character object, or the first virtual interactive object is a first virtual object.
The first terminal 120 is connected to the server 140 through a wireless network or a wired network.
Server 140 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. Illustratively, the server 140 includes a processor 142 and a memory 144, the processor 142 including an intent recognition module 1421, a central control module 1422, and a match detection module 1423. The server 140 is used to provide background services for applications supporting a three-dimensional virtual environment. Optionally, the server 140 takes on primary computing work, and the first terminal 120 and the second terminal 160 take on secondary computing work; alternatively, the server 140 performs a secondary computing job, and the first terminal 120 and the second terminal 160 perform a primary computing job; alternatively, the server 140, the first terminal 120 and the second terminal 160 perform cooperative computing by using a distributed computing architecture.
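As a rough illustration of the server-side module split above (intent recognition module 1421, central control module 1422, match detection module 1423), the following sketch shows one way the modules could hand off work. All class names and interfaces are assumptions for illustration; the patent does not specify this API.

```python
class IntentRecognitionModule:          # corresponds to module 1421 (assumed role)
    def recognize(self, voice_text: str) -> dict:
        # Toy parse: first word is the operation, remainder is the target.
        verb, _, noun = voice_text.partition(" ")
        return {"operation": verb, "target": noun}

class MatchDetectionModule:             # corresponds to module 1423 (assumed role)
    def check(self, intent: dict, game_state: dict) -> bool:
        # Does the recognized target match an object the camp actually owns?
        return intent["target"] in game_state.get("objects", [])

class CentralControlModule:             # corresponds to module 1422 (assumed role)
    def __init__(self):
        self.intent = IntentRecognitionModule()
        self.match = MatchDetectionModule()

    def handle(self, voice_text: str, game_state: dict) -> str:
        intent = self.intent.recognize(voice_text)
        return "execute" if self.match.check(intent, game_state) else "reject"
```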
The second terminal 160 installs and runs an application supporting a virtual environment. The application may be any one of a virtual reality application, an SLG game, a three-dimensional map program, and a MOBA game. The second terminal 160 is a terminal used by a second user that uses the second terminal 160 to control a second virtual interactive object located in the virtual environment to perform operations including, but not limited to: at least one of obtaining revenue, upgrading, fight, switching equipment, building. Illustratively, the second virtual interactive object is a second virtual character, such as a simulated character object or a cartoon character object, or the second virtual interactive object is a second virtual object.
Optionally, the first virtual interactive object and the second virtual interactive object are in the same virtual environment.
Optionally, the applications installed on the first terminal 120 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms. The first terminal 120 may broadly refer to one of a plurality of terminals, and the second terminal 160 may broadly refer to one of a plurality of terminals; this embodiment is illustrated with only the first terminal 120 and the second terminal 160. The device types of the first terminal 120 and the second terminal 160 are the same or different, and include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop computer and a desktop computer. The following embodiments are illustrated with the terminal being a smartphone.
Those skilled in the art will recognize that the number of terminals may be greater or fewer. For example, there may be only one terminal, or tens or hundreds of terminals, or more. The embodiments of this application do not limit the number of terminals or the device types.
Referring to fig. 2, a flowchart of a method for controlling a virtual interactive object according to an exemplary embodiment of the present application is shown. The embodiment is described by taking the method performed by a terminal supporting a virtual environment as an example, and the method includes the steps of:
Step 201, displaying a virtual environment picture, wherein the virtual environment picture contains at least one virtual interaction object owned by a target virtual camp.
The virtual interactive object is used for executing corresponding operation in the virtual environment based on the received operation instruction. The target virtual camp is a virtual camp controlled by the terminal.
Optionally, the virtual interactive object may be a virtual character in the target virtual camp, or may be a virtual object in the target virtual camp. The operating instructions that different types of virtual objects are capable of executing may be different. For example, the virtual object corresponding to the building upgrade operation instruction is a virtual building, and the virtual object corresponding to the fight operation instruction is a virtual team. The embodiment of the present application is not limited thereto.
Illustratively, the terminal displays a virtual environment picture of a business strategy game, which includes virtual buildings capable of generating virtual profits or virtual products, virtual citizens for management or construction, virtual teams for fighting, and the like.
Step 202, in response to receiving a first control voice, determining a first virtual interactive object among the at least one virtual interactive object, the first virtual interactive object being the virtual interactive object for performing the target operation indicated by the first control voice.
In one possible implementation, the terminal performs speech recognition on the received first control voice, determines the target operation indicated by the first control voice, and then determines the first virtual interactive object from the virtual interactive objects of the target virtual camp based on the target operation. For example, when the first control voice "upgrade farm" is received, the terminal determines the "farm" in the target virtual camp as the first virtual interactive object.
Optionally, the terminal keeps the microphone on to collect voice in real time throughout the game session; or the terminal turns the microphone on when a voice input operation is received and turns it off when the voice input operation ends (for example, the user inputs the first control voice by long-pressing a voice control); or the terminal automatically turns the microphone on in a specific game stage or game scene. The embodiments of this application are not limited in this respect.
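The mapping from a transcribed control voice to a target operation and a first virtual interactive object could, in the simplest case, be sketched with keyword rules like the following. The patent itself uses trained speech and intent-recognition models (described later); the keyword table and helper names here are purely illustrative assumptions.

```python
# Illustrative operation vocabulary; the real system learns this mapping.
OPERATIONS = {
    "upgrade": ["upgrade", "level up"],
    "fight":   ["fight", "battle", "attack"],
}

def match_operation(transcript):
    """Return the operation whose keyword appears in the transcript, if any."""
    text = transcript.lower()
    for op, keywords in OPERATIONS.items():
        if any(k in text for k in keywords):
            return op
    return None

def pick_first_object(op, camp_objects):
    """camp_objects maps object name -> set of operations it supports.
    Return the first object in the camp able to perform the operation."""
    for name, ops in camp_objects.items():
        if op in ops:
            return name
    return None
```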
In step 203, in the case that the first virtual interactive object meets the execution condition of the target operation, the first virtual interactive object is controlled to execute the target operation.
For some complex operation instructions, the terminal needs to determine, based on the current virtual environment, whether the first virtual interactive object can execute them. Consider a battle operation that requires selecting a team and an opponent. Because the same camp may include a plurality of virtual teams, and a virtual team can support a plurality of operation instructions, in the related art the user needs to: trigger a team control to open the team selection list; determine, by observing the status of each team in the list, a target virtual team capable of fighting; trigger the selection control corresponding to that team; and finally trigger the fight control in the pop-up team setting window (which may also contain other controls such as team reset, team deletion and team upgrade).
In the embodiments of this application, the user can directly input "select a team to fight battle xx", or specify "team A to fight battle xx". The terminal recognizes the target operation from the first control voice, determines the first virtual interactive object, and judges whether the first virtual interactive object (the virtual team) satisfies the execution condition of the target operation; if so, it automatically controls the first virtual interactive object to execute the target operation. The user can thus instruct the terminal to complete team selection and the fight operation through a single voice input, omitting the complex intermediate interaction.
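The team-selection step that the voice command collapses can be sketched as a simple scan for a team satisfying the fight execution condition. The status values and fields below are invented for illustration; the patent does not enumerate team states.

```python
def select_team_for_fight(teams):
    """Return the name of the first team able to fight, or None.

    Each team is a dict with (assumed) keys: name, status, strength.
    """
    for team in teams:
        if team["status"] == "idle" and team["strength"] > 0:
            return team["name"]
    return None
```

This replaces the manual inspection of each team's status in the team selection list described above.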
Fig. 3 shows a schematic diagram of executing a target operation based on a first control voice. After the terminal collects the voice content 301 of the first control voice ("a monarch"; in an actual application scene the input voice content may be shown in the virtual environment picture, or the terminal may not display it), the terminal determines the target operation and the first virtual interactive object based on the first control voice. Having determined that the first virtual interactive object satisfies the execution condition of the target operation, the terminal controls it to execute the target operation and feeds back the operation result 302 ("Good, a recruitment has been opened for you").
In summary, in the embodiments of this application, when the first control voice is received, the first virtual interactive object capable of executing the target operation indicated by the first control voice is determined and the execution-condition judgment logic is triggered. Whether the first virtual interactive object satisfies the execution condition is determined in combination with the current virtual environment; if it does, the triggering process of the target operation is simulated. This flattens the interactive operation: complex condition checks are performed automatically according to the first control voice and the corresponding function is realized, greatly simplifying the interaction process for complex functions, reducing operation steps and shortening operation time.
Because the control method of the virtual interactive object provided by this application flattens the interactive operation, the user skips interaction steps such as condition judgment and object selection and directly inputs the first control voice for the desired operation. Consequently, the first virtual interactive object may not satisfy the execution condition of the target operation. To further help the user quickly realize the target operation through a voice control instruction, when the first virtual interactive object does not satisfy the execution condition of the target operation, the terminal generates prompt information prompting the user to select a substitute operation or to delay execution of the target operation.
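The fallback behavior just described (prompt a substitute operation, or defer the target operation until its conditions are met) might be sketched as follows; the prompt wording and the deferral queue are assumptions, not the patent's actual design.

```python
def on_condition_not_met(op, alternatives, defer_queue):
    """Handle a target operation whose execution condition failed.

    alternatives: substitute operations the user could choose instead.
    defer_queue:  operations to retry once conditions are satisfied.
    """
    if alternatives:
        return f"prompt: '{op}' unavailable, try {alternatives[0]}"
    defer_queue.append(op)
    return f"prompt: '{op}' deferred until conditions are met"
```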
Referring to fig. 4, a flowchart of a method for controlling a virtual interactive object according to another exemplary embodiment of the present application is shown. The embodiment is described by taking the method performed by a terminal supporting a virtual environment as an example, and the method includes the steps of:
step 401, displaying a virtual environment picture, wherein the virtual environment picture comprises at least one virtual interaction object owned by a target virtual camp.
In response to receiving the first control speech, a first virtual interactive object of the at least one virtual interactive object is determined, the first virtual interactive object being a virtual interactive object for performing a target operation indicated by the first control speech, step 402.
For the specific implementation of steps 401 to 402, reference may be made to steps 201 to 202, and the embodiments of the present application are not described herein.
Step 403, obtaining the camp information of the target virtual camp.
The camp information comprises at least one of camp resource information and camp behavior information. The camp resource information represents the virtual resources owned by the target virtual camp, and the camp behavior information represents the behaviors of the target virtual camp in the virtual environment.
In SLG games, many operations require the consumption of virtual resources. For example, upgrading a building requires a certain amount of building material, and building a team requires a certain number of troops. On the other hand, because the SLG game sets complex functions for various virtual objects, mutually conflicting camp behaviors that cannot be executed in parallel may exist in the game. For example, a building in a built or upgraded state cannot yield a profit, and a team in a battle state cannot perform operations such as team upgrade or formation change. Therefore, the terminal needs to determine, based on the camp information of the target virtual camp, whether the first virtual interactive object satisfies the execution condition of the target operation.
It should be noted that, in the embodiment shown in fig. 4, the terminal performs step 402 and then performs step 403; in other possible implementation manners, the terminal may perform step 402 and step 403 synchronously after receiving the first control voice, or perform step 403 before step 402.
Step 404: determine, based on the camp information, whether the first virtual interactive object satisfies the execution condition of the target operation.
Execution conditions differ between operations, and the camp information on which the judgment is based may differ accordingly. Illustratively, the terminal (or the server) sends a condition query instruction to the game control module through the central control module; the game control module performs a calculation based on the game log and returns the execution conditions necessary for executing the target operation.
In one possible implementation manner, the camp information includes the camp resource information, and the terminal determines, based on the camp resource information, whether the first virtual interactive object satisfies the execution condition of the target operation. Step 404 specifically includes the following steps 404a to 404c:
in step 404a, the virtual resource usage required by the first virtual interactive object to perform the target operation is determined.
Optionally, the virtual resource usage is determined by the voice content of the first control voice input by the user, or is set by a game default for the target operation.
For example, for the target operation corresponding to the first control voice "team with 10 infantries", the virtual resource usage "10 infantries" is determined by the voice content of the first control voice; for the target operation corresponding to the first control voice "upgrade farm", the virtual resource usage "1000 wood and 10000 gold coins" is a game default.
In step 404b, in the case that the virtual resource possession indicated by the camp resource information is greater than the virtual resource usage, it is determined that the first virtual interactive object satisfies the execution condition of the target operation.
When the virtual resource amount owned by the target virtual camp is larger than the virtual resource amount required by executing the target operation, the target virtual camp can provide enough virtual resources for the first virtual interaction object, and the terminal determines that the first virtual interaction object meets the execution condition of the target operation.
For example, the virtual resource usage is 10 infantries, and the virtual resource possession indicated by the camp resource information includes 20 infantries; the terminal therefore determines that the first virtual interactive object satisfies the execution condition of the target operation.
In step 404c, in the case that the virtual resource possession indicated by the camp resource information is smaller than the virtual resource usage, it is determined that the first virtual interactive object does not satisfy the execution condition of the target operation.
When the virtual resource amount owned by the target virtual camp is smaller than the virtual resource amount required by executing the target operation, the target virtual camp cannot provide enough virtual resources for the first virtual interaction object, and the terminal determines that the first virtual interaction object does not meet the execution condition of the target operation.
For example, the virtual resource usage is 10 infantries, and the virtual resource possession indicated by the camp resource information includes 5 infantries; the terminal therefore determines that the first virtual interactive object does not satisfy the execution condition of the target operation.
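The resource check of steps 404a to 404c can be sketched as follows. This is an illustrative Python sketch, not code from the application; the function and resource names are assumptions, and since the application leaves the equality case unspecified, the sketch treats an exactly sufficient amount as satisfying the condition.

```python
def meets_resource_condition(owned: dict, required: dict) -> bool:
    """Return True when the camp owns at least the required amount of
    every virtual resource needed to perform the target operation."""
    return all(owned.get(name, 0) >= amount for name, amount in required.items())

# "Team with 10 infantries" against a camp owning 20 infantries
print(meets_resource_condition({"infantry": 20}, {"infantry": 10}))  # True
# The same operation against a camp owning only 5 infantries
print(meets_resource_condition({"infantry": 5}, {"infantry": 10}))   # False
```

The same check covers multi-resource defaults such as "1000 wood and 10000 gold coins" by listing each resource in the `required` dictionary.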
Further, the terminal acquires data such as the virtual resource possession or the battle state of the target virtual camp in real time (or at a certain frequency), and provides operation prompts based on the acquired data.
When the virtual resources owned by the target virtual camp are sufficient to perform a certain operation, the terminal can remind the user to perform the operation; or, when the virtual resources owned by the target virtual camp fall below a resource amount threshold, the terminal reminds the user to perform an operation that replenishes the virtual resources. For example, when the idle troops and school field capacity of the target virtual camp are sufficient, the terminal displays, through the virtual environment picture and/or game voice, prompt information indicating that more infantries can be built; when the food amount of the target virtual camp is less than a certain threshold, the terminal displays, through the virtual environment picture and/or game voice, prompt information reminding the user to replenish food.
On the other hand, the terminal acquires hostile information of the hostile virtual camp, generates a battle strategy based on the hostile information, and provides operation prompts based on the battle strategy.
The hostile information includes operations performed by the hostile camp, its virtual resource possession, battle state information, and the like. In one possible implementation manner, when an operation is performed by the hostile virtual camp or its virtual resources change, the terminal may also remind the user to perform a coping operation. For example, when the hostile virtual camp performs an operation of building infantries, the terminal displays, through the virtual environment picture and/or game voice, a prompt message "the opponent is mass-producing infantries, please take note". The operation prompt may also include operation options such as "how many infantries need to be built?" or "should the farm be upgraded?", and the terminal identifies the target operation based on the received control voice. After displaying the reminder information, the terminal collects environmental sound through the microphone and recognizes whether the first control voice exists. According to the prompt information generated by the terminal, the user can instruct the terminal by voice to execute the target operation.
In another possible implementation manner, the camp information includes the camp behavior information, and the terminal determines, based on the camp behavior information, whether the first virtual interactive object satisfies the execution condition of the target operation. Step 404 specifically includes the following steps 404d to 404e:
In step 404d, in the case that the camp behavior indicated by the camp behavior information and the target operation support parallel execution, it is determined that the first virtual interactive object satisfies the execution condition of the target operation.
Because the method provided by the embodiment of the application omits intermediate interaction processes such as condition judgment and object selection (for example, the related technology prompts the user that an unexecutable object or operation currently exists by displaying a control in an untriggerable state), the user directly inputs the first control voice based on the required operation. The first virtual interactive object may therefore be occupied by another camp behavior, and the terminal needs to judge, based on the camp behavior information, whether the first virtual interactive object can execute the target operation. If the current camp behavior and the target operation support parallel execution, the terminal determines that the first virtual interactive object satisfies the execution condition of the target operation.
For example, the target operation is to collect a farm profit, and the first virtual interactive object "farm" has no other camp behavior, or has a camp behavior of "adding crops" that can be executed in parallel with the target operation; the terminal determines that the first virtual interactive object satisfies the execution condition of the target operation.
In step 404e, in the case that the camp behavior indicated by the camp behavior information and the target operation do not support parallel execution, it is determined that the first virtual interactive object does not satisfy the execution condition of the target operation.
If another camp behavior currently exists for the first virtual interactive object and that camp behavior and the target operation cannot be executed in parallel, the terminal determines that the first virtual interactive object does not satisfy the execution condition of the target operation.
For example, the target operation is to collect a farm profit, and the first virtual interactive object "farm" has a camp behavior of "farm upgrade"; the two cannot be executed in parallel, so the terminal determines that the first virtual interactive object does not satisfy the execution condition of the target operation.
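The parallel-execution judgment of steps 404d and 404e amounts to a lookup in a conflict table. The sketch below is purely illustrative; the behavior and operation names are assumptions, not identifiers from the application.

```python
# Pairs of (camp behavior, target operation) that cannot run in parallel.
CONFLICTS = {
    ("farm_upgrade", "collect_farm_profit"),
    ("in_battle", "team_upgrade"),
    ("in_battle", "change_formation"),
}

def supports_parallel(camp_behavior, target_operation):
    """Step 404d/404e sketch: an absent camp behavior, or a pair not
    listed in CONFLICTS, satisfies the execution condition."""
    if camp_behavior is None:
        return True
    return (camp_behavior, target_operation) not in CONFLICTS

print(supports_parallel("add_crops", "collect_farm_profit"))     # True
print(supports_parallel("farm_upgrade", "collect_farm_profit"))  # False
```

A conflict table keeps the judgment data-driven, so new buildings or operations only require new table entries rather than new code.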
In step 405, in the case that the first virtual interactive object meets the execution condition of the target operation, the first virtual interactive object is controlled to execute the target operation.
For a specific implementation of step 405, reference may be made to step 203, and the description of this embodiment of the present application is omitted here.
Step 406: prompt at least one replacement operation corresponding to the target operation in the case that the first virtual interactive object does not satisfy the execution condition of the target operation.
The replacement operation is determined based on the virtual resource possession and the virtual resource usage.
In one possible implementation manner, the terminal determines, based on the camp resource information, whether the first virtual interactive object satisfies the execution condition of the target operation. When the first virtual interactive object cannot execute the target operation because the virtual resource possession is insufficient, the terminal determines a replacement operation based on the virtual resource possession and prompts at least one replacement operation, so as to further assist the user in realizing a corresponding operation.
As shown in fig. 5, the terminal receives the first control voice and recognizes the voice content 501 "build 10 infantries", and determines, based on the camp resource information, that the virtual resource possession is insufficient and the first virtual interactive object does not satisfy the execution condition. The terminal determines the replacement operation "build 5 infantries" based on the current virtual resource possession, and generates the prompt message 502 "my economy is insufficient and can only support 5 infantries; should it be executed?".
Optionally, the terminal provides the replacement operation prompt in text form through the virtual environment picture, or provides the replacement operation prompt through voice. The embodiment of the present application is not limited thereto.
Step 407: in response to receiving the second control voice, determine a second virtual interactive object among the at least one virtual interactive object.
Step 408: control the second virtual interactive object to perform the replacement operation.
The second virtual interactive object is a virtual interactive object for performing the replacement operation indicated by the second control voice.
In one possible implementation manner, after the terminal provides the prompt based on the replacement operation, voice information continues to be collected through the microphone. The user can make a selection based on the replacement operation fed back by the terminal, and by inputting the second control voice, cause the terminal to control the second virtual interactive object to perform the replacement operation.
For example, following the example in step 406, if the terminal receives the second control voice and recognizes that the voice content is "yes", it determines that the second virtual interactive object is the 5 infantries, and controls the second virtual interactive object to perform the replacement operation "build 5 infantries".
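A minimal sketch of how the replacement operation of steps 406 to 408 could be derived from the virtual resource possession; the function name, unit-cost model, and prompt wording are illustrative assumptions, not the application's implementation.

```python
def propose_replacement(requested_count, unit_cost, owned_amount):
    """If the camp cannot afford the requested unit count, return the
    largest affordable count together with a confirmation prompt;
    otherwise the requested operation can be executed as-is."""
    affordable = owned_amount // unit_cost
    if affordable >= requested_count:
        return requested_count, None
    prompt = (f"My economy is insufficient; only {affordable} infantries "
              f"can be supported. Execute?")
    return affordable, prompt

# Camp owns 500 resources, each infantry costs 100: only 5 of 10 affordable.
count, prompt = propose_replacement(10, 100, 500)
print(count)             # 5
print(prompt is None)    # False
```

A "yes" in the second control voice would then trigger the operation with the reduced count.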
Step 409: perform a delayed-execution prompt in the case that the first virtual interactive object does not satisfy the execution condition of the target operation.
The delayed-execution prompt is used to prompt execution of the target operation after the camp behavior ends.
In one possible implementation manner, the terminal determines, based on the camp behavior information, whether the first virtual interactive object satisfies the execution condition of the target operation. When the current camp behavior cannot be executed in parallel with the target operation and the first virtual interactive object does not satisfy the execution condition, the terminal performs the delayed-execution prompt, reminding the user to issue the instruction for executing the target operation after the camp behavior ends, so as to further assist the user in realizing the corresponding operation based on a voice instruction.
As shown in fig. 6, the terminal receives the first control voice and recognizes the voice content 601 "build 10 infantries", and determines, based on the camp behavior information, that the current camp behavior "school field upgrade" cannot be executed in parallel with the target operation and the first virtual interactive object does not satisfy the execution condition. The terminal generates, based on the camp behavior, the prompt 602 "the school field is upgrading; please build infantries after the upgrade is completed".
In step 410, in the case that the third control voice is received and the camp behavior has ended, the first virtual interactive object is controlled to execute the target operation.
In one possible implementation manner, the user continues to issue instructions by voice after determining that the camp behavior conflicting with the target operation has ended. When the third control voice is received and the camp behavior has ended, the terminal controls the first virtual interactive object to execute the target operation.
For example, the terminal receives the third control voice and recognizes the voice content "build 10 infantries", determines that the camp behavior "school field upgrade", which could not be executed in parallel with the target operation, no longer exists, and then controls the first virtual interactive object to execute the target operation.
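Steps 409 and 410 can be sketched as a re-check against the camp's currently active behaviors; this is an illustrative sketch with assumed names and prompt wording, not the application's code.

```python
def try_execute(target_operation, conflicting_behavior, active_behaviors):
    """Return a delayed-execution prompt while the conflicting camp
    behavior is still active, and an execution result once it has ended."""
    if conflicting_behavior in active_behaviors:
        return f"'{conflicting_behavior}' is in progress; please retry after it completes"
    return f"executing '{target_operation}'"

# While the school field is upgrading, the operation is deferred...
print(try_execute("build_infantries", "school_field_upgrade", {"school_field_upgrade"}))
# ...and executes once the third control voice arrives after the upgrade ends.
print(try_execute("build_infantries", "school_field_upgrade", set()))
```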
In step 411, the execution result of the target operation is prompted, where the prompting mode includes at least one of voice and text.
After the execution of the target operation is completed or the execution of the target operation fails, in order to facilitate the user to know the execution condition of the operation, the terminal prompts the execution result of the target operation (or the alternative operation) in a voice or text mode.
As shown in fig. 3, after the terminal collects the voice content 301 "jingan" of the first control voice through the microphone, it determines, based on the first control voice, that the first virtual interactive object satisfies the execution condition of the target operation. After controlling the first virtual interactive object to execute the target operation, the terminal outputs by voice the prompt message 302 "good, it has been started for you".
In the embodiment of the application, in the case that the first virtual interactive object does not satisfy the execution condition of the target operation, the terminal can provide a replacement operation based on the virtual resource possession, or prompt the user to delay execution of the target operation, thereby further assisting the user in realizing complex operation content through a simple voice instruction, without requiring manual interaction to change the operation instruction. When the target operation cannot be executed, the user is prompted, and an executable replacement operation is provided for selection, or the user is reminded to issue the instruction again once the execution condition is satisfied; the user can thus perform voice control directly without first confirming whether the target operation can be executed, which improves the convenience of game operation. Prompting the user to operate when the target virtual camp or the hostile virtual camp changes can reduce the user's understanding cost and operating cost for the SLG game, assist novice users in completing unfamiliar operations, help various users raise the upper limit of their competitive level, and improve the activity of the game. In addition, in the related art, an SLG game generally needs more combined operations to implement a specific function; for example, dispatching troops requires first setting an assembly point to gather the teams, then calling up the team options to mark them, and then selecting a destination.
In one possible implementation manner, the terminal determines, based on the operations supported by each virtual interactive object in the target virtual camp, candidate virtual interactive objects supporting the target operation. For a game operation that requires selecting the first virtual interactive object from a plurality of candidate virtual interactive objects, the user can realize the selection by combining touch operation and voice control, or entirely through voice control.
Referring to fig. 7, a flowchart of a method for controlling a virtual interactive object according to another exemplary embodiment of the present application is shown. The embodiment is described by taking the method performed by a terminal supporting a virtual environment as an example, and the method includes the steps of:
step 701, displaying a virtual environment picture, wherein the virtual environment picture comprises at least one virtual interaction object owned by a target virtual camp.
For specific implementation of step 701, reference may be made to step 201 described above, and the embodiments of the present application are not described herein again.
Step 702: in response to receiving the first control voice, convert the first control voice into voice text.
The terminal collects the environmental sounds through the microphone, and when the voice of the user is recognized from the environmental sounds, the terminal determines that the first control voice is received and converts the first control voice into voice text so as to recognize the target operation based on the voice text.
In one possible implementation manner, as shown in fig. 8, before voice conversion, the terminal first performs noise reduction and truncation processing on the collected voice stream, so as to obtain clear and accurate voice text. The noise reduction processing includes two stages: human voice recognition and system sound recognition. In the human voice recognition stage, the terminal recognizes the user's voice in the collected voice stream through a voice model (the user can enter his or her voiceprint characteristics in advance); in the system sound recognition stage, the terminal removes the system sound generated by the terminal from the collected voice stream through inverse sound wave processing and/or a sound wave extraction technique, thereby reducing the influence of the system sound on human voice recognition. The truncation processing is an audio processing stage after the noise reduction processing, in which the terminal judges, based on a silence duration, whether the user has finished a sentence. Illustratively, the silence duration may be selected from the interval of 100 ms to 200 ms. For example, if the silence duration is 100 ms, the terminal cuts when detecting a silent voice stream lasting 100 ms or longer, and determines that the preceding sentence has ended.
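The silence-based truncation can be illustrated on a toy amplitude stream. Real implementations operate on audio frames with energy thresholds and a time-based (e.g. 100 ms) silence window, so treat this Python sketch, where three consecutive quiet samples stand in for the silence duration, purely as an assumption-laden illustration:

```python
def split_on_silence(frames, silence_threshold, min_silence_frames):
    """Cut a stream of amplitude samples into utterances wherever at least
    min_silence_frames consecutive samples fall below silence_threshold."""
    utterances, current, silent_run = [], [], 0
    for amp in frames:
        silent_run = silent_run + 1 if abs(amp) < silence_threshold else 0
        current.append(amp)
        if silent_run >= min_silence_frames:
            spoken = current[:-silent_run]  # drop the trailing silence
            if spoken:
                utterances.append(spoken)
            current, silent_run = [], 0
    # keep a trailing utterance only if it contains speech
    if any(abs(a) >= silence_threshold for a in current):
        utterances.append(current)
    return utterances

# Two utterances separated by three silent samples
print(split_on_silence([1, 1, 1, 0, 0, 0, 1, 1], 0.5, 3))  # [[1, 1, 1], [1, 1]]
```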
In the voice translation process, a speech transcription model of the terminal (or the server) recognizes the real-time voice stream and translates the target user's voice stream into text (a query) through automatic speech recognition (Automatic Speech Recognition, ASR). In the voice translation stage, the terminal (or the server) first segments the voice stream into sentences and outputs audio window fragments, and then performs query translation on the audio window fragments through an acoustic model, a language model, and a dictionary.
The training process of the speech transcription model is shown in fig. 9. Because a large number of proper nouns are involved in the game scene, and many users may input similar nouns according to their own language habits when issuing instructions, the computer device needs to be preconfigured with hot words or hot sentences in the model training stage, so that errors in online translation can be corrected in the application stage.
Optionally, the noise reduction processing, the truncation processing, and the text conversion are all performed by the terminal; or the terminal sends the voice stream to a background server, which performs the noise reduction processing, the truncation processing, and the text conversion; or the noise reduction and truncation processing are performed by the background server and the text conversion by the terminal; and so on. The embodiment of the present application is not limited thereto.
In step 703, intention recognition is performed on the voice text, so as to obtain an intention recognition result, wherein the intention recognition result includes the recognized target operation.
The terminal (or the server) inputs the text query into the intention recognition module for intention recognition, obtaining an intention recognition result. The intention recognition result is of two kinds: one is rejection; the other is successful recognition, in which case the result includes a specific intention identification number (Identity Document, ID), that is, the operation identifier corresponding to the target operation.
In one possible implementation manner, an offline framework of the intention recognition module is shown in fig. 10. The main training process of the intention recognition module includes the following steps: 1. Configure the intention IDs. 2. Expand the intentions with similar questions. Similar questions are a means of improving robot training; like the original corpus, they are objects of robot learning and materials provided for model training. 3. Perform Word2vec vector model training on the similar questions and the text logs of other user voices. N-Gram is an algorithm based on a statistical language model; its basic idea is to slide a window of size N over the content of a text by bytes, forming a sequence of byte fragments of length N. Each byte fragment is called a gram; the occurrence frequencies of all grams are counted and filtered according to a preset threshold to form a key-gram list, that is, the vector feature space of the text, in which each gram is one feature vector dimension. Word2Vec is a tool used to generate word vectors; it is closely related to language models, and a computer device can use it to learn models of semantic knowledge from a vast amount of text in an unsupervised manner. 4. After the intention classes and similar queries are processed through the vector model, recurrent neural network (Recurrent Neural Network, RNN) training is performed and an intention classification model is output.
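The N-Gram step described above, sliding a window of size N over the text, counting gram frequencies, and filtering by a preset threshold to form the key-gram list, can be sketched in a few lines of Python. This is illustrative only; the actual module also trains Word2Vec and RNN models, which are omitted here.

```python
from collections import Counter

def key_gram_list(text, n=2, min_count=2):
    """Slide a window of size n over the text, count each gram's
    frequency, and keep only the grams meeting the preset threshold."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return {g: c for g, c in grams.items() if c >= min_count}

print(key_gram_list("abcabc", n=2, min_count=2))  # {'ab': 2, 'bc': 2}
```

Each surviving gram then becomes one dimension of the text's feature vector space.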
In the actual application stage, the online intention recognition process is shown in fig. 11. The terminal (or the server) first vectorizes the text query, then performs vector matching-degree retrieval and deep learning model calculation, and, in combination with a recognition strategy, either rejects the query or outputs the intention ID. The recognition strategy mainly includes entity recognition: the entity recognition module extracts entity words from the text query, performs disambiguation processing on them, and outputs entity types and entity values. The entity words specifically include resource names, resource categories, azimuth information, place names, building names, and the like.
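A dictionary-based sketch of the entity recognition described above; the lexicon entries and type names are illustrative assumptions, and a production module would also perform the disambiguation step, which is omitted here.

```python
# Maps surface words to (entity type, entity value) pairs.
ENTITY_LEXICON = {
    "farm": ("building_name", "farm"),
    "infantry": ("resource_name", "infantry"),
    "left": ("azimuth", "left"),
}

def extract_entities(query):
    """Extract entity words from the text query and return their
    entity types and entity values."""
    tokens = query.lower().replace(",", " ").split()
    return [ENTITY_LEXICON[t] for t in tokens if t in ENTITY_LEXICON]

print(extract_entities("Upgrade the farm"))  # [('building_name', 'farm')]
```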
As shown in fig. 12, when the voice recognition module of the background server recognizes an effective intention, that is, after the target operation is determined, the central control module initiates a condition query to the game control module to obtain the execution condition of the target operation, and the server then sends the specific target operation and the execution condition to the terminal. The terminal updates the game log based on the execution of the operation and transmits it to the server, so that the server updates the game log.
Step 704, determining a first virtual interaction object based on the intention recognition result and candidate operations corresponding to each virtual interaction object.
The candidate operation is an operation supported by the virtual interactive object, and the candidate operation corresponding to the first virtual interactive object comprises a target operation.
In one possible implementation manner, the operation instructions executable by different types of virtual objects may differ. The terminal determines the first virtual interactive object supporting the target operation based on the correspondence between virtual interactive objects and candidate operations.
Step 704 includes either step 704a or step 704b as follows:
in step 704a, when the candidate operations corresponding to the at least two candidate virtual interactive objects include the target operation, the first virtual interactive object is determined from the at least two candidate virtual interactive objects based on the interface interactive operation received in the target period, where the target period includes a period before the first control voice is received.
In one possible implementation manner, the user may directly control the game operation by voice, that is, realize a certain game operation entirely through voice instructions, or realize the game operation by combining touch operation and voice instructions. For example, the user may select a target virtual team among a plurality of virtual teams and then input the first control voice "fight xxx camp"; the terminal determines, based on the team selection operation and the first control voice, the first virtual interactive object for performing the target operation, that is, the target virtual team.
Illustratively, the target period is 10 s. When the candidate operations corresponding to at least two candidate virtual interactive objects include the target operation and the first control voice does not indicate the first virtual interactive object, the terminal obtains, based on the history operation record, the interface interactive operations within the 10 s before the first control voice was collected, and determines the first virtual interactive object according to the interface interactive operations.
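The target-period lookup of step 704a can be sketched as follows; the Python function, the log format (timestamp in seconds, object identifier), and the most-recent-wins rule are illustrative assumptions.

```python
def select_by_recent_interaction(candidates, interaction_log, now, window=10.0):
    """Among the candidates supporting the target operation, return the
    one the user interacted with most recently within `window` seconds
    before the first control voice was collected, or None."""
    recent = [(t, obj) for t, obj in interaction_log
              if now - window <= t <= now and obj in candidates]
    if not recent:
        return None
    return max(recent)[1]  # latest timestamp wins

log = [(100.0, "team_a"), (107.5, "team_b"), (95.0, "team_a")]
print(select_by_recent_interaction({"team_a", "team_b"}, log, now=110.0))  # team_b
```

When the function returns `None`, the terminal would fall back to prompting the user, as in step 704b.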
Step 704b, prompting the at least two candidate virtual interactive objects when the candidate operations corresponding to the at least two candidate virtual interactive objects include the target operation; in response to receiving a selection operation of the candidate virtual interactive object, the selected candidate virtual interactive object is determined to be the first virtual interactive object.
In another possible implementation manner, when the candidate operations corresponding to the at least two candidate virtual interactive objects include the target operation and the first control voice does not indicate the first virtual interactive object, the terminal prompts the user to select among the at least two candidate virtual interactive objects. Optionally, the terminal prompts by voice, or displays the prompt information in text form in the virtual environment picture. The user can select the first virtual interactive object through touch operation, or input a voice control instruction based on the prompt information.
For example, the terminal receives the first control voice "fight xxx camp", determines two candidate virtual interactive objects, namely virtual team a and virtual team b, and outputs by voice the prompt message "please select team a or team b for battle". When receiving the control voice "team a", the terminal determines virtual team a as the first virtual interactive object.
Step 705, controlling the first virtual interactive object to execute the target operation if the first virtual interactive object satisfies the execution condition of the target operation.
For the specific implementation of step 705, reference may be made to step 203, and the description of the embodiment of the present application is omitted here.
In the embodiment of the application, noise reduction and truncation processing are performed on the collected voice stream before the voice content is converted into text content, which improves the accuracy of control voice recognition. When at least two candidate virtual interactive objects supporting the target operation exist, the terminal can prompt the user to select, or can automatically determine the first virtual interactive object based on the interface interactive operations in the target period. The user can thus control game operations entirely by voice, or combine interface interactive operation with voice control for a richer interaction mode, improving the convenience of game interaction.
In one possible implementation manner, the terminal supports user-defined voice instructions, allowing some complex combined instructions to be edited into specific voice instructions.
Referring to fig. 13, a flowchart of a method for controlling a virtual interactive object according to another exemplary embodiment of the present application is shown. The embodiment is described by taking the method performed by a terminal supporting a virtual environment as an example, and the method includes the steps of:
step 1301, displaying a virtual environment picture, wherein the virtual environment picture includes at least one virtual interactive object owned by the target virtual camp.
For specific implementation of step 1301, reference may be made to step 201 described above, and the embodiments of the present application are not described herein again.
Step 1302, an operation setting interface is displayed.
In one possible implementation, the game client supports custom operations. The user performs custom setting through the operation setting interface by entering an operation name and an operation instruction.
Step 1303, obtaining an operation name of the user-defined operation input in the operation setting interface.
Optionally, the user may input the text content of the operation name through a touch operation, or may input the operation name by voice, in which case the terminal recognizes the operation name and then displays it in the operation setting interface.
Illustratively, as shown in fig. 14, a voice instruction text box 1402 is displayed in the operation setting interface 1401, and the user enters the operation name "wild goose formation" in the voice instruction text box 1402.
In step 1304, based on the demonstration operation entered in the operation setting interface, an operation instruction corresponding to the custom operation is generated, where the demonstration operation is used to demonstrate the custom operation.
Optionally, the demonstration operation may be entered as text. The operation setting interface 1401 shown in fig. 14 includes a game operation text box 1403, in which the user enters the custom operation "5 soldiers are located on the middle road, 10 infantry are located on the left side of the middle road, and 10 infantry are located on the right side of the middle road". Alternatively, for operations that are inconvenient to describe in words, the terminal also supports recording the demonstration operation through actual interactive operations: when an operation entry command is received, the terminal returns to the virtual environment picture and starts recording interface interactive operations; when an operation entry ending command is received, the terminal stops recording and returns to the operation setting interface.
Step 1305, storing the operation name and the operation instruction of the custom operation in an associated manner.
After the terminal stores the operation name and operation instruction of the custom operation in association, the user can input the name of the custom operation by voice, and the terminal then completes the corresponding operation instruction. As shown in fig. 14, after the terminal stores the operation name "wild goose formation" in the voice instruction text box 1402 in association with the operation instruction in the game operation text box 1403, upon recognizing the control voice 1404 containing "wild goose formation", it executes the corresponding operation instruction and feeds back the operation execution result 1405 "good".
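The associated storage of steps 1303 to 1305 can be sketched as a name-to-instruction registry. This is a minimal sketch under stated assumptions: the dictionary-backed store, the class and method names, and the substring match between recognized text and operation name are all illustrative, not the patent's implementation.

```python
class CustomOperationStore:
    """Associates a user-defined operation name with its operation
    instruction (here modeled as a list of interface operations)."""

    def __init__(self):
        self._ops = {}  # operation name -> list of interface operations

    def register(self, name, instruction):
        # store the operation name and instruction in association
        self._ops[name] = list(instruction)

    def lookup(self, voice_text):
        """Return the instruction whose operation name appears in the
        recognized control-voice text, or None if no name matches."""
        for name, instruction in self._ops.items():
            if name in voice_text:
                return instruction
        return None

store = CustomOperationStore()
store.register("wild goose formation", [
    "position 5 soldiers on the middle road",
    "position 10 infantry on the left side of the middle road",
    "position 10 infantry on the right side of the middle road",
])
```

A recognized control voice such as "set up the wild goose formation" then resolves to the three stored interface operations, while an unregistered phrase resolves to nothing.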
The terminal locally stores the operation names and operation instructions of the custom operations corresponding to the currently logged-in account. In one possible implementation, since a user may log in to the game account on different terminal devices, in order to let the user use custom operations on each terminal, after the terminal obtains a newly added custom operation, it further performs the following step:
and uploading the operation name and the operation instruction corresponding to the custom operation to a server.
In step 1306, in response to receiving the first control speech, a first virtual interactive object of the at least one virtual interactive object is determined, the first virtual interactive object being a virtual interactive object for performing a target operation indicated by the first control speech.
For the specific implementation of step 1306, reference may be made to step 202, which is not repeated here.
In step 1307, when the first virtual interactive object meets the execution condition of the target operation, an operation instruction corresponding to the target operation is obtained.
In SLG games, particularly in some complex operation scenarios, a single operation may correspond to an operation instruction composed of multiple interface interactive operations. In the embodiment of the application, after determining the target operation based on the user's first control voice, the terminal further determines the operation instruction corresponding to the target operation, and then controls the first virtual interactive object to complete the interactive operations in sequence according to the operation instruction.
In one possible implementation, the terminal locally stores operation names and operation instructions in association, including both default operation names set by the client and user-defined operation names. In another possible embodiment, step 1307 further comprises the following steps 1307a to 1307b:
step 1307a, if the operation instruction corresponding to the target operation is not found locally, sending an instruction query request to the server.
Step 1307b, receiving an operation instruction corresponding to the target operation fed back by the server.
If the client running on the terminal has not been updated in time, or the user has added a custom operation on another terminal device, the terminal cannot find the operation instruction corresponding to the target operation locally. In this case, the terminal sends an instruction query request to the server and receives the operation instruction corresponding to the target operation fed back by the server. In one possible implementation, after receiving the operation instruction fed back by the server, the terminal updates its local storage by adding the correspondence between the target operation and the operation instruction.
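The lookup-with-fallback of steps 1307a and 1307b can be sketched as a cache-miss pattern. This is an illustrative sketch: the server interface is modeled as a plain callable, whereas a real client would issue a network request; the function name and cache shape are assumptions.

```python
def get_instruction(operation, local_cache, query_server):
    """Look up the operation instruction locally; on a miss, send an
    instruction query to the server, then add the target operation /
    instruction correspondence to the local store for next time."""
    instruction = local_cache.get(operation)
    if instruction is None:
        instruction = query_server(operation)      # instruction query request
        if instruction is not None:
            local_cache[operation] = instruction   # update local storage
    return instruction
```

After the first miss is resolved by the server, subsequent requests for the same target operation are answered locally, which is the device-synchronization behavior the text describes.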
Step 1308, controlling the first virtual interactive object to execute the target operation based on the operation instruction.
The terminal controls the first virtual interaction object to execute the target operation in the order specified by the operation instruction.
In the embodiment of the application, the terminal supports user-defined operations: the user can combine complex interface interactive operations and assign them an operation name according to the user's own language habits, further simplifying the interactive operation and improving the convenience of game operation. On the other hand, the terminal uploads the information of newly added custom operations to the server in time, and obtains the operation instruction from the server when the operation instruction corresponding to the target operation is not found locally, thereby synchronizing custom operations among different devices.
Fig. 15 is a block diagram of a control apparatus for a virtual interactive object according to an exemplary embodiment of the present application, the apparatus including the following structures:
the display module 1501 is configured to display a virtual environment screen, where the virtual environment screen includes at least one virtual interaction object owned by a target virtual camp, and the virtual interaction object is configured to execute a corresponding operation in a virtual environment based on a received operation instruction;
a determining module 1502 configured to determine, in response to receiving a first control voice, a first virtual interaction object of at least one of the virtual interaction objects, the first virtual interaction object being a virtual interaction object for performing a target operation indicated by the first control voice;
And a control module 1503, configured to control the first virtual interaction object to execute the target operation if the first virtual interaction object meets an execution condition of the target operation.
Optionally, the apparatus further includes:
the system comprises an acquisition module, a storage module and a control module, wherein the acquisition module is used for acquiring the information of the target virtual camp, the information of the target virtual camp comprises at least one of information of camping resources and information of camping behaviors, the information of the camping resources is used for representing virtual resources owned by the target virtual camp, and the information of the camping behaviors is used for representing the camping behaviors of the target virtual camp in the virtual environment;
the determining module 1502 is further configured to determine, based on the camping information, whether the first virtual interactive object meets an execution condition of the target operation.
Optionally, the camp information includes the camp resource information;
the determining module 1502 is further configured to:
determine the virtual resource consumption required for the first virtual interactive object to execute the target operation;
determine that the first virtual interaction object meets the execution condition of the target operation when the virtual resource possession indicated by the camp resource information is greater than the virtual resource consumption;
and determine that the first virtual interaction object does not meet the execution condition of the target operation when the virtual resource possession indicated by the camp resource information is less than the virtual resource consumption.
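The resource-based condition check can be sketched as follows. This is an illustrative sketch rather than the patent's implementation; it models resources as name-to-amount mappings (the resource names in the test are invented), and it assumes possession equal to consumption counts as sufficient, since the text only specifies the strictly-greater and strictly-smaller cases.

```python
def meets_resource_condition(possession, consumption):
    """Return True if the camp's virtual resource possession covers the
    virtual resource consumption the target operation requires.

    possession / consumption: dicts mapping resource name -> amount.
    Equality is treated as sufficient (an assumption; the source text
    leaves the boundary case unspecified).
    """
    return all(possession.get(resource, 0) >= amount
               for resource, amount in consumption.items())
```

A camp holding 100 grain can trigger an operation costing 80 grain; a camp holding 50 cannot, which would route the flow to the substitute-operation prompt described below.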
Optionally, the apparatus further includes:
a prompting module, configured to prompt at least one substitute operation corresponding to the target operation when the first virtual interaction object does not meet the execution condition of the target operation, the substitute operation being determined based on the virtual resource possession and the virtual resource consumption;
the determining module 1502 is further configured to determine, in response to receiving a second control voice, a second virtual interaction object of the at least one virtual interaction object, the second virtual interaction object being a virtual interaction object for performing a substitute operation indicated by the second control voice;
the control module 1503 is further configured to control the second virtual interactive object to perform the substitute operation.
Optionally, the camp information includes the camp behavior information;
the determining module 1502 is further configured to:
determine that the first virtual interaction object meets the execution condition of the target operation when the camp behavior indicated by the camp behavior information and the target operation support parallel execution;
and determine that the first virtual interactive object does not meet the execution condition of the target operation when the camp behavior indicated by the camp behavior information and the target operation do not support parallel execution.
Optionally, the prompting module is further configured to issue a delayed execution prompt when the first virtual interaction object does not meet the execution condition of the target operation, the delayed execution prompt being used to prompt that the target operation will be executed after the camp behavior ends;
the control module 1503 is further configured to control the first virtual interaction object to execute the target operation when a third control voice is received and the camp behavior has ended.
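The behavior-based condition and the deferred-execution path can be sketched as follows. This is an illustrative sketch under stated assumptions: which behavior pairs support parallel execution is game-specific, so the compatibility table, behavior names, and function names below are invented for demonstration.

```python
# Assumed compatibility table: pairs of (ongoing camp behavior, target
# operation) that the game permits to run in parallel.
PARALLEL_OK = {("march", "build"), ("build", "march")}

def can_run_now(current_behavior, target_operation):
    """True if the target operation and the ongoing camp behavior
    support parallel execution (trivially true when the camp is idle)."""
    if current_behavior is None:
        return True
    return (current_behavior, target_operation) in PARALLEL_OK

def schedule(current_behavior, target_operation, pending):
    """Execute the operation now if allowed; otherwise queue it to run
    after the camp behavior ends (the delayed-execution prompt path)."""
    if can_run_now(current_behavior, target_operation):
        return "execute"
    pending.append(target_operation)  # run once the behavior ends
    return "deferred"
```

Under this sketch, "build" issued during a march executes immediately, while "attack" is deferred and queued, matching the delayed-execution prompt followed by execution once the camp behavior ends.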
Optionally, the determining module 1502 is further configured to:
converting the first control speech to speech text in response to receiving the first control speech;
performing intention recognition on the voice text to obtain an intention recognition result, wherein the intention recognition result comprises the recognized target operation;
and determining the first virtual interaction object based on the intention recognition result and candidate operations corresponding to the virtual interaction objects, wherein the candidate operations are operations supported by the virtual interaction object, and the candidate operations corresponding to the first virtual interaction object comprise the target operation.
Optionally, the determining module 1502 is further configured to:
determining the first virtual interactive object from at least two candidate virtual interactive objects based on the interface interactive operation received in a target period when the target operation is included in the candidate operations corresponding to the at least two candidate virtual interactive objects, wherein the target period comprises a period before the first control voice is received;
or,
prompting at least two candidate virtual interactive objects under the condition that the target operation is included in candidate operations corresponding to the at least two candidate virtual interactive objects; and in response to receiving a selection operation of the candidate virtual interactive object, determining the selected candidate virtual interactive object as the first virtual interactive object.
Optionally, the control module 1503 is further configured to:
acquiring an operation instruction corresponding to the target operation;
and controlling the first virtual interaction object to execute the target operation based on the operation instruction.
Optionally, in the case where the target operation is a custom operation,
the display module 1501 is further configured to display an operation setting interface;
the acquisition module is further used for acquiring the operation name of the user-defined operation input in the operation setting interface;
the device further comprises a generation module, configured to generate an operation instruction corresponding to the custom operation based on the demonstration operation entered in the operation setting interface, where the demonstration operation is used to demonstrate the custom operation;
the device further comprises a storage module, configured to store the operation name and the operation instruction of the custom operation in association.
Optionally, the apparatus further includes:
the transmission module is used for uploading the operation name and the operation instruction corresponding to the custom operation to a server;
the acquisition module is further configured to:
under the condition that an operation instruction corresponding to the target operation is not found locally, sending an instruction query request to the server;
and receiving the operation instruction corresponding to the target operation fed back by the server.
Optionally, the prompting module is further configured to:
and prompting the execution result of the target operation, wherein the prompting mode includes at least one of voice and text.
It should be noted that the apparatus provided in the above embodiment is illustrated only by the division of the functional modules described above; in practical applications, these functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; for the detailed implementation process, reference may be made to the method embodiments, which are not repeated here.
In summary, in the embodiment of the present application, when the first control voice is received, the first virtual interactive object capable of executing the target operation indicated by the first control voice is determined, and the execution-condition judgment logic is triggered: the current virtual environment is taken into account to determine whether the first virtual interactive object meets the execution condition of the target operation, and if it does, the triggering process of the target operation is simulated. This flattens the interactive operation: complex condition judgment is executed automatically according to the first control voice and the corresponding function is realized, thereby greatly simplifying the interactive process of complex functions, reducing operation steps and shortening operation duration.
Referring to fig. 16, a block diagram of a terminal 1600 provided in an exemplary embodiment of the present application is shown. The terminal 1600 may be a portable mobile terminal such as a smart phone, a tablet computer, a Moving Picture Experts Group Audio Layer III (MP3) player, or a Moving Picture Experts Group Audio Layer IV (MP4) player. Terminal 1600 may also be referred to as a user device, a portable terminal, or the like.
In general, terminal 1600 includes: a processor 1601, and a memory 1602.
Processor 1601 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 1601 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA) form. The processor 1601 may also include a main processor and a coprocessor; the main processor, also referred to as a central processing unit (Central Processing Unit, CPU), processes data in the awake state, while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1601 may be integrated with a graphics processing unit (Graphics Processing Unit, GPU) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1601 may further include an artificial intelligence (Artificial Intelligence, AI) processor for processing computing operations related to machine learning.
Memory 1602 may include one or more computer-readable storage media, which may be tangible and non-transitory. Memory 1602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1602 is used to store at least one instruction for execution by processor 1601 to implement a method provided by an embodiment of the present application.
In some embodiments, terminal 1600 may also optionally include: peripheral interface 1603.
Peripheral interface 1603 may be used to connect at least one Input/Output (I/O)-related peripheral to the processor 1601 and the memory 1602. In some embodiments, the processor 1601, the memory 1602, and the peripheral interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602, and the peripheral interface 1603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The embodiment of the application also provides a computer readable storage medium, wherein at least one instruction is stored in the readable storage medium, and the at least one instruction is loaded and executed by a processor to realize the control method of the virtual interactive object.
Alternatively, the computer-readable storage medium may include: a read-only memory (ROM), a random access memory (RAM), a solid state drive (SSD, Solid State Drive), an optical disk, or the like. The RAM may include a resistive random access memory (ReRAM, Resistance Random Access Memory) and a dynamic random access memory (DRAM, Dynamic Random Access Memory).
Embodiments of the present application provide a computer program product comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the control method of the virtual interactive object described in the above embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals related to the present application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of the related data is required to comply with the relevant laws and regulations and standards of the relevant countries and regions. For example, the first control voice, the lineup information, the game log, the history operation record, the voiceprint feature and other information related to the application are all acquired under the condition of full authorization.
The foregoing description of the preferred embodiments of the present application is not intended to limit the application, but is intended to cover all modifications, equivalents, alternatives, and improvements falling within the spirit and principles of the application.
Claims (16)
1. A method for controlling a virtual interactive object, the method comprising:
displaying a virtual environment picture, wherein the virtual environment picture comprises at least one virtual interaction object owned by a target virtual camp, and the virtual interaction object is used for executing corresponding operation in the virtual environment based on a received operation instruction;
In response to receiving a first control voice, determining a first virtual interaction object in at least one virtual interaction object, wherein the first virtual interaction object is a virtual interaction object for executing a target operation indicated by the first control voice;
and controlling the first virtual interaction object to execute the target operation under the condition that the first virtual interaction object meets the execution condition of the target operation.
2. The method according to claim 1, wherein the method further comprises:
acquiring camp information of the target virtual camp, wherein the camp information comprises at least one of camp resource information and camp behavior information, the camp resource information is used for representing virtual resources owned by the target virtual camp, and the camp behavior information is used for representing the camp behavior of the target virtual camp in the virtual environment;
and determining whether the first virtual interaction object meets the execution condition of the target operation based on the camp information.
3. The method of claim 2, wherein the camp information comprises the camp resource information;
the determining whether the first virtual interactive object meets the execution condition of the target operation based on the camp information comprises:
determining the virtual resource consumption required for the first virtual interactive object to execute the target operation;
determining that the first virtual interaction object meets the execution condition of the target operation when the virtual resource possession indicated by the camp resource information is greater than the virtual resource consumption;
and determining that the first virtual interaction object does not meet the execution condition of the target operation when the virtual resource possession indicated by the camp resource information is less than the virtual resource consumption.
4. A method according to claim 3, characterized in that the method further comprises:
prompting at least one substitute operation corresponding to the target operation when the first virtual interaction object does not meet the execution condition of the target operation, wherein the substitute operation is determined based on the virtual resource possession and the virtual resource consumption;
in response to receiving a second control voice, determining a second virtual interaction object of the at least one virtual interaction object, the second virtual interaction object being a virtual interaction object for performing a substitute operation indicated by the second control voice;
and controlling the second virtual interaction object to execute the substitute operation.
5. The method of claim 2, wherein the camp information comprises the camp behavior information;
the determining whether the first virtual interactive object meets the execution condition of the target operation based on the camp information comprises:
determining that the first virtual interaction object meets the execution condition of the target operation when the camp behavior indicated by the camp behavior information and the target operation support parallel execution;
and determining that the first virtual interactive object does not meet the execution condition of the target operation when the camp behavior indicated by the camp behavior information and the target operation do not support parallel execution.
6. The method of claim 5, wherein the method further comprises:
performing a delayed execution prompt when the first virtual interaction object does not meet the execution condition of the target operation, wherein the delayed execution prompt is used for prompting that the target operation will be executed after the camp behavior ends;
and controlling the first virtual interactive object to execute the target operation when a third control voice is received and the camp behavior has ended.
7. The method of any of claims 1 to 6, wherein determining a first virtual interactive object of at least one of the virtual interactive objects in response to receiving a first control voice comprises:
converting the first control speech to speech text in response to receiving the first control speech;
performing intention recognition on the voice text to obtain an intention recognition result, wherein the intention recognition result comprises the recognized target operation;
and determining the first virtual interaction object based on the intention recognition result and candidate operations corresponding to the virtual interaction objects, wherein the candidate operations are operations supported by the virtual interaction object, and the candidate operations corresponding to the first virtual interaction object comprise the target operation.
8. The method of claim 7, wherein the determining the first virtual interactive object based on the intent recognition result and candidate operations corresponding to each of the virtual interactive objects comprises:
determining the first virtual interactive object from at least two candidate virtual interactive objects based on the interface interactive operation received in a target period when the target operation is included in the candidate operations corresponding to the at least two candidate virtual interactive objects, wherein the target period comprises a period before the first control voice is received;
Or,
prompting at least two candidate virtual interactive objects under the condition that the target operation is included in candidate operations corresponding to the at least two candidate virtual interactive objects; and in response to receiving a selection operation of the candidate virtual interactive object, determining the selected candidate virtual interactive object as the first virtual interactive object.
9. The method of any of claims 1 to 6, wherein the controlling the first virtual interactive object to perform the target operation comprises:
acquiring an operation instruction corresponding to the target operation;
and controlling the first virtual interaction object to execute the target operation based on the operation instruction.
10. The method of claim 9, wherein in the event that the target operation is a custom operation, the method further comprises:
displaying an operation setting interface;
acquiring an operation name of the custom operation input in the operation setting interface;
generating an operation instruction corresponding to the custom operation based on the demonstration operation recorded in the operation setting interface, wherein the demonstration operation is used for demonstrating the custom operation;
And carrying out association storage on the operation name and the operation instruction of the custom operation.
11. The method according to claim 10, wherein the method further comprises:
uploading the operation name and the operation instruction corresponding to the custom operation to a server;
the obtaining the operation instruction corresponding to the target operation includes:
under the condition that an operation instruction corresponding to the target operation is not found locally, sending an instruction query request to the server;
and receiving the operation instruction corresponding to the target operation fed back by the server.
12. The method according to any one of claims 1 to 6, further comprising:
and prompting the execution result of the target operation, wherein the prompting mode includes at least one of voice and text.
13. A control device for a virtual interactive object, the device comprising:
the display module is used for displaying a virtual environment picture, wherein the virtual environment picture comprises at least one virtual interaction object owned by a target virtual camp, and the virtual interaction object is used for executing corresponding operation in the virtual environment based on the received operation instruction;
A determining module, configured to determine, in response to receiving a first control voice, a first virtual interaction object of at least one virtual interaction object, where the first virtual interaction object is a virtual interaction object for performing a target operation indicated by the first control voice;
and the control module is used for controlling the first virtual interaction object to execute the target operation under the condition that the first virtual interaction object meets the execution condition of the target operation.
14. A terminal, the terminal comprising a processor and a memory; the memory stores at least one program loaded and executed by the processor to implement the control method of the virtual interactive object according to any one of claims 1 to 12.
15. A computer readable storage medium, characterized in that at least one computer program is stored in the computer readable storage medium, which computer program is loaded and executed by a processor to implement a method of controlling a virtual interactive object according to any one of claims 1 to 12.
16. A computer program product, the computer program product comprising computer instructions stored in a computer readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium, the processor executing the computer instructions, causing the computer device to perform the control method of a virtual interactive object according to any one of claims 1 to 12.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210540722.0A CN117101122A (en) | 2022-05-17 | 2022-05-17 | Control method, device, terminal and storage medium for virtual interaction object |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN117101122A (en) | 2023-11-24 |
Family
ID=88798941
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210540722.0A Pending CN117101122A (en) | 2022-05-17 | 2022-05-17 | Control method, device, terminal and storage medium for virtual interaction object |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117101122A (en) |
- 2022-05-17: Application CN202210540722.0A filed (CN); publication CN117101122A, status Pending
Similar Documents
| Publication | Title |
|---|---|
| CN109491564B (en) | Virtual robot interaction method, device, storage medium and electronic device |
| US20220301250A1 | Avatar-based interaction service method and apparatus |
| CN112221139B (en) | Information interaction method and device for game and computer readable storage medium |
| GB2571199A | Method and system for training a chatbot |
| CN111816168B (en) | Model training method, voice playback method, device and storage medium |
| CN107040452B (en) | Information processing method and device and computer readable storage medium |
| CN110808038B (en) | Mandarin evaluating method, device, equipment and storage medium |
| CN117253478A | Voice interaction method and related device |
| CN110955818A | Searching method, searching device, terminal equipment and storage medium |
| CN118312044A | Interactive teaching method, device, related equipment and computer program product |
| CN112307166B (en) | Intelligent question-answering method and device, storage medium and computer equipment |
| CN110781329A | Image searching method and device, terminal equipment and storage medium |
| CN113797540A | Card prompting voice determination method and device, computer equipment and medium |
| CN110882541A | Game character control system, server, and game character control method |
| CN112138410B (en) | Interaction method of virtual objects and related device |
| CN112860995B (en) | Interaction method, device, client, server and storage medium |
| CN119886339A | Dialogue text generation method, related device, equipment and storage medium |
| CN117101122A (en) | Control method, device, terminal and storage medium for virtual interaction object |
| CN114707823B (en) | Interactive content scoring method and device, electronic equipment and storage medium |
| KR20250032251A | Method and apparatus for providing content in metaverse space |
| CN118433437A | Live broadcasting room voice live broadcasting method and device, live broadcasting system, electronic equipment and medium |
| CN118714397A | Method, device, equipment and medium for generating video |
| CN117972015A | Plot phrase generation method, device, electronic device and storage medium |
| KR20200029852A | System, server and method for providing game character motion guide information |
| KR20200112796A | System, server and method for providing game character motion guide information |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40098955; Country of ref document: HK |