CN110136701B - Voice interaction service processing method, device and equipment - Google Patents
- Publication number: CN110136701B
- Application number: CN201810134247.0A
- Authority
- CN
- China
- Prior art keywords
- service
- service subsystem
- user
- subsystems
- candidate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G10—MUSICAL INSTRUMENTS; ACOUSTICS; G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/16—Speech classification or search using artificial neural networks
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
- G10L2015/226—Procedures used during a speech recognition process using non-speech characteristics
- G10L2015/227—Procedures used during a speech recognition process using non-speech characteristics of the speaker; Human-factor methodology
Abstract
Embodiments of the invention provide a method, an apparatus, and a device for processing a voice interaction service. The method comprises: in response to an interactive voice triggered by a user, determining a candidate service subsystem set corresponding to the interactive voice from a plurality of service subsystems according to the degree of match between the interactive voice and each of the service subsystems; correcting the candidate service subsystem set according to user characteristic information of the user; and, if the corrected candidate service subsystem set comprises only one service subsystem, responding to the interactive voice with that service subsystem. By correcting the candidate service subsystem set according to the user characteristic information, candidate service subsystems that are better targeted and more reasonable for the user can be obtained, providing the user with a better voice interaction service.
Description
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a method, an apparatus, and a device for processing a voice interaction service.
Background
With the continuous development of internet technology and artificial intelligence technology, intelligent voice interaction systems have been configured in various intelligent electronic devices to provide various voice interaction services for users.
For example, when shopping, a user can speak information such as height, weight, and the type of product needed, so that the shopping platform can recommend a list of products meeting the user's requirements. As another example, on an intelligent in-vehicle platform, the user may also query the weather, search for songs, and so on by voice input.
When a system or platform supports voice interaction services in multiple domains, that is, provides a plurality of voice interaction service subsystems (for example, an in-vehicle platform may provide subsystems for weather queries, song search, and the like), how to match a reasonable service subsystem to respond to the user's voice input is a problem that urgently needs to be solved.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, and a device for processing a voice interaction service, so as to provide the user with a more reasonable, better-targeted service subsystem for voice interaction services.
In a first aspect, an embodiment of the present invention provides a method for processing a voice interaction service, including:
in response to an interactive voice triggered by a user, determining a candidate service subsystem set corresponding to the interactive voice from a plurality of service subsystems according to the degree of match between the interactive voice and each of the service subsystems;
correcting the candidate service subsystem set according to user characteristic information of the user; and
if the corrected candidate service subsystem set comprises only one service subsystem, responding to the interactive voice with that service subsystem.
In a second aspect, an embodiment of the present invention provides a voice interaction service processing apparatus, including:
a determining module, configured to determine, in response to an interactive voice triggered by a user, a candidate service subsystem set corresponding to the interactive voice from a plurality of service subsystems according to the degree of match between the interactive voice and each of the service subsystems;
a correction module, configured to correct the candidate service subsystem set according to user characteristic information of the user; and
a response processing module, configured to respond to the interactive voice with the one service subsystem if the corrected candidate service subsystem set comprises only one service subsystem.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor and a memory, where the memory is configured to store one or more computer instructions, and when executed by the processor, the one or more computer instructions implement the method for processing a voice interaction service in the first aspect. The electronic device may also include a communication interface for communicating with other devices or a communication network.
An embodiment of the present invention provides a computer storage medium, configured to store a computer program, where the computer program enables a computer to implement the method for processing a voice interaction service in the first aspect when executed.
According to the voice interaction service processing method, apparatus, and device provided above, after a voice interaction service system supporting a plurality of service subsystems receives an interactive voice triggered by a user, a candidate service subsystem set is determined from the plurality of service subsystems according to the degree of match between the interactive voice and each subsystem. The candidate set is then corrected in combination with the user's characteristic information, so that the subsystems in the set better conform to the user's usage preferences. Finally, if the corrected candidate set comprises only one service subsystem, that subsystem directly responds to the user's interactive voice. By correcting the service subsystem set in this way, candidate subsystems that are better targeted and more reasonable for the user can be obtained, providing the user with a better voice interaction service.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a flowchart of a first embodiment of a voice interaction service processing method according to the present invention;
Fig. 2 is a schematic diagram of an execution scenario of the embodiment shown in Fig. 1;
Fig. 3 is a schematic diagram of another execution scenario of the embodiment shown in Fig. 1;
Fig. 4 is a schematic diagram of yet another execution scenario of the embodiment shown in Fig. 1;
Fig. 5 is a flowchart of a second embodiment of the voice interaction service processing method according to the present invention;
Fig. 6 is a schematic structural diagram of a voice interaction service processing apparatus according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an electronic device corresponding to the voice interaction service processing apparatus provided in the embodiment shown in Fig. 6.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and "a" and "an" generally include at least two, but do not exclude at least one, unless the context clearly dictates otherwise.
It should be understood that the term "and/or" as used herein is merely one type of association that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
It should be understood that although the terms first, second, third, and so on may be used in the embodiments of the present invention to describe various elements, these elements should not be limited by those terms, which are used only to distinguish one element from another. For example, a first element may also be referred to as a second element, and similarly a second element may be referred to as a first element, without departing from the scope of the embodiments of the present invention.
The word "if" as used herein may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)", depending on the context.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a good or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such good or system. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a good or system that comprises the element.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
Fig. 1 is a flowchart of a first embodiment of a voice interaction service processing method according to the present invention. The method may be executed by a voice interaction service processing apparatus, which may be implemented as software or as a combination of software and hardware. The apparatus may be integrated in a user terminal device or in a server supporting the voice interaction system. As shown in Fig. 1, the method comprises the following steps:
101. In response to an interactive voice triggered by a user, determine a candidate service subsystem set corresponding to the interactive voice from a plurality of service subsystems according to the degree of match between the interactive voice and each of the service subsystems.
On some occasions, for example when both of the user's hands are occupied and manual man-machine interaction is inconvenient, voice interaction provides great convenience; accordingly, more and more products supporting voice interaction services keep appearing.
In the embodiments of the present invention, it is assumed that the application system or electronic device providing the voice interaction service supports multi-domain voice interaction, that is, it supports multiple service subsystems. Each service subsystem corresponds to one service domain and completes a specific voice interaction service. For example, a weather service subsystem answers users' queries about weather conditions; a music service subsystem provides services such as playing songs and querying singer information; a takeaway service subsystem lets the user order meals by voice; a KFC service subsystem lets the user order KFC delivery; a peripheral service subsystem lets the user query information about the surrounding area; and so on.
In practice, for an ambiguous voice input there may be more than one service subsystem that can satisfy the user's need. For example, if the user says "I want to drink cola", the user may want to know whether there is a convenience store nearby where cola can be bought, or may want to order a takeaway delivery of cola. In that case, among the subsystems listed above, the ones that could satisfy the user include the takeaway service subsystem, the KFC service subsystem, and the peripheral service subsystem.
Therefore, when facing the interactive voice currently triggered by the user, a candidate service subsystem set that can respond to the interactive voice (for example, a set comprising the takeaway, KFC, and peripheral service subsystems) must first be screened out of the multiple service subsystems; then, through a further processing step, the service subsystem that finally responds to the interactive voice is selected.
Specifically, after receiving an interactive voice triggered by a user, a candidate service subsystem set corresponding to the interactive voice may be determined from the multiple service subsystems according to matching degrees between the interactive voice and the multiple service subsystems, respectively.
The candidate service subsystem set is the set of service subsystems whose degree of match with the interactive voice meets certain requirements. In general, the subsystems it contains are those that meet the user's need, that is, match the user's intention, so obtaining the candidate set can also be understood as a process of identifying the user's intention. Intention identification means understanding what the user wants to do. For example, when the user says "how is the weather today", the intention is to query the weather, and the candidate set should include the weather service subsystem; when the user says "I want to listen to a song", the intention is to play music, and the candidate set should include the music service subsystem.
User intention identification, that is, acquisition of the candidate service subsystem set, may optionally be implemented as follows:
input the interactive voice into a preset service-subsystem classification model to obtain similarity scores between the interactive voice and the respective service subsystems, and form the candidate service subsystem set from the subsystems whose similarity scores are greater than a preset score.
Optionally, before obtaining the similarity score, the interactive voice may be converted into a text by a voice recognition technology, and the text may be input into the classification model.
The preset service subsystem classification model may be obtained by performing classification training on a neural network in advance through a large number of training samples, where the neural network is, for example, a convolutional neural network, a deep neural network, or the like. The training samples can be obtained by collecting or constructing common interactive statements corresponding to the service subsystems.
In an optional embodiment, after the interactive voice or the text corresponding to the interactive voice is input to the preset service subsystem classification model, the output form of the classification model may be: the probability with which the interactive voice belongs to a certain service subsystem, for example, the probability with which the interactive voice belongs to the takeaway service subsystem is 80%, and in this case, the probability is the above-mentioned similarity score. A certain preset score may be set such that the set of candidate service subsystems is made up of service subsystems having a similarity score greater than the preset score.
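As a minimal sketch (not part of the patent text), the thresholding step above can be illustrated in Python. The classifier itself is mocked as a fixed probability table; the subsystem names, the probabilities, and the 0.5 threshold are all assumptions for illustration.

```python
PRESET_SCORE = 0.5  # assumed preset score threshold

def candidate_set(probabilities, preset_score=PRESET_SCORE):
    """Return the subsystems whose similarity score exceeds the threshold."""
    return {name for name, p in probabilities.items() if p > preset_score}

# Mocked classification-model output for the query "I want to drink cola"
probs = {"takeaway": 0.80, "kfc": 0.65, "peripheral": 0.55, "weather": 0.01}
print(sorted(candidate_set(probs)))  # → ['kfc', 'peripheral', 'takeaway']
```

In a real system the probability table would come from the trained classification model rather than a literal dictionary; only the screening step is shown here.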
Besides obtaining the candidate service subsystem set through the classification model, the set may optionally be obtained as follows:
calculate similarity scores between the interactive voice and the corpus samples corresponding to each service subsystem, and form the candidate service subsystem set from the subsystems whose similarity scores are greater than a preset score.
The common interactive sentences corresponding to each service subsystem can be collected or constructed in advance, with redundancy removed, and used directly as that subsystem's corpus samples. A service subsystem may correspond to one or more corpus samples. For a given subsystem, the similarity score between the interactive voice and that subsystem's corpus samples may therefore be taken as either the highest value or the average of the per-sample similarity scores; this value evaluates the degree of match between the interactive voice and the subsystem, and any subsystem whose value exceeds the preset score is added to the candidate set. The similarity itself can be computed with any algorithm for measuring the semantic similarity between two sentences.
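The corpus-sample scoring just described can be sketched as follows. This is an illustrative assumption, not the patent's implementation: a real system would use a semantic-similarity model, whereas a simple token-overlap (Jaccard) measure stands in here, and the corpus samples are invented.

```python
def jaccard(a: str, b: str) -> float:
    """Toy stand-in for sentence similarity: token-set overlap."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def subsystem_score(query, samples):
    # The patent allows either the highest or the average per-sample
    # score; the highest is used here.
    return max(jaccard(query, s) for s in samples)

corpora = {
    "weather": ["how is the weather today", "will it rain tomorrow"],
    "music":   ["play a song for me", "i want to listen to music"],
}
query = "how is the weather today"
candidates = {name for name, samples in corpora.items()
              if subsystem_score(query, samples) > 0.6}
print(candidates)  # → {'weather'}
```

The 0.6 threshold plays the role of the preset score; swapping `jaccard` for an embedding-based similarity would not change the screening logic.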
For example, suppose the voice interaction system comprises six service subsystems: weather, music, peripheral, Taobao, takeaway, and KFC. For a given interactive voice input by the user, a similarity score is computed between the input and each of the six subsystems.
The service subsystems capable of responding to the user input are then screened with a preset score threshold to form the corresponding candidate service subsystem set. For example, if the preset score is 0.90, each subsystem whose score is greater than 0.90 is added to the candidate set corresponding to the user input.
Optionally, the obtaining of the candidate service subsystem set may be further implemented by the following processes:
and determining whether the description rule template corresponding to the interactive voice exists in the description rule templates corresponding to the service subsystems, and if so, determining that the candidate service subsystem set consists of the service subsystems of which the description rule templates corresponding to the interactive voice exist in the service subsystems.
The description rule template corresponding to a certain service subsystem reflects the expression habit of the common interactive voice corresponding to the service subsystem, and therefore, the description rule template may also be referred to as a similar term such as an expression form template. For example, a certain description rule template corresponding to the takeaway service subsystem is: i want to eat. Therefore, if the interactive voice input by the user accords with the expression form that the user wants to eat, the interactive voice is considered to hit the takeout service subsystem, and the takeout service subsystem is added into the candidate service subsystem. For example, if the interactive voice of the user is that today, i want to eat a chinese cabbage, i want to eat a steamed stuffed bun, etc., all the interactive voice is considered to hit the takeout service subsystem.
It should be noted that one service subsystem may correspond to a plurality of different description rule templates, and the same description rule template may also correspond to different service subsystems.
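A hypothetical illustration of description-rule templates, with invented patterns: each subsystem owns one or more regular expressions capturing a common phrasing (such as "I want to eat ..."), and any subsystem whose template matches the utterance joins the candidate set. As noted above, a template may be shared by several subsystems, which this lookup naturally allows.

```python
import re

TEMPLATES = {
    "takeaway": [re.compile(r"i want to eat .+")],
    "music":    [re.compile(r"i want to listen to .+"),
                 re.compile(r"play .+")],
}

def match_templates(utterance, templates=TEMPLATES):
    """Return every subsystem whose description rule template matches."""
    text = utterance.lower()
    return {name for name, patterns in templates.items()
            if any(p.search(text) for p in patterns)}

print(match_templates("Today I want to eat steamed buns"))  # → {'takeaway'}
```

Production systems typically compile such templates from a grammar rather than hand-written regexes, but the hit-and-collect logic is the same.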
102. Correct the candidate service subsystem set according to the user characteristic information of the user.
In the embodiments of the present invention, initially obtaining the candidate service subsystem set does not mean that the subsystem used to respond to the user's interactive voice will be taken from that set as it stands. Even if the current candidate set contains several subsystems, say 2 or 3, each with a similarity score above the preset score, that does not mean all of them are reasonable, that is, well targeted for this user. Therefore, to ensure that more reasonable candidates for responding to the interactive voice are obtained, the initially obtained candidate set can be further corrected in combination with the user's characteristic information.
The necessity of correcting the candidate set arises both when the initially obtained set contains multiple subsystems and when it is empty, that is, when no subsystem matching the interactive voice was obtained. Giving the user no response at all would make for a poor experience, so when the candidate set is empty a certain subsystem can be recommended to the user as a fallback to respond to the interactive voice; adding such a fallback subsystem to the candidate set is likewise a way of correcting it.
For ease of distinction, the candidate service subsystem set obtained through step 101 is referred to below as the initial candidate service subsystem set. In the embodiments of the present invention, the correction of the initial candidate set takes two forms:
first, if the number of service subsystems in the initial candidate set is less than or equal to a preset value, the initial candidate set is expanded according to the user characteristic information of the user, as shown in Figs. 2 and 3;
second, if the number of service subsystems in the initial candidate set is greater than the preset value, the initial candidate set is filtered according to the user characteristic information of the user, as shown in Fig. 4.
Both forms serve the same purpose: to obtain a candidate set consisting of more reasonable candidate subsystems, from which the subsystem that finally responds to the user's interactive voice is obtained, thereby achieving a targeted interactive response for the user.
The preset value should be set to a reasonable, not overly large value, for example 1. Then, when the initial candidate set is empty or contains only one subsystem, it is expanded; when it contains more than one subsystem, for example 2 or 3, it is filtered.
Assuming the preset value is 1, optional implementations of expanding and filtering the initial candidate set according to the user characteristic information are introduced below.
optionally, if the number of the service subsystems in the initial candidate service subsystem set is 0, which is an empty set, the bottom-of-pocket service subsystem preset by the user or the service subsystem with the highest user frequency is expanded to the initial candidate service subsystem set. The user can preset which service subsystem is used as a bottom-pocket scheme to respond if the service subsystem corresponding to the interactive voice input by the user cannot be matched; alternatively, it is also possible to determine which service subsystem should be currently used to respond to the user's interactive voice according to the frequency of usage of the plurality of service subsystems provided to the user for a period of time by the user. For example, assuming that the user used the a service subsystem 10 times and the B service subsystem 7 times in the last week, the a service subsystem is selected to be extended to the initial candidate service subsystem set, so as to complete the modification of the initial candidate service subsystem set, as shown in fig. 2.
Optionally, if the number of the service subsystems in the initial candidate service subsystem set is 1, extending the service subsystems that belong to the same group as the service subsystems in the initial candidate service subsystem set to the initial candidate service subsystem set, or extending the service subsystems that belong to the same group as the service subsystems in the initial candidate service subsystem set and satisfy the usage frequency requirement to the initial candidate service subsystem set. The extended purpose of this approach can be understood in connection with situations that may be encountered in practice as follows: in terms of taxi taking service, it is assumed that all the service subsystems A, B and C provided at present can provide taxi taking service for users, and when voice interaction service is initially configured, the three service subsystems providing taxi taking service can be divided into one group. Assuming that the initial candidate service subsystem set obtained in some manner as described above only includes the a service subsystem for the current interactive voice of the user, at this time, in order to provide more optional space for the user, the B service subsystem and the C service subsystem belonging to the same group as the a service subsystem may be extended into the initial candidate service subsystem set, as shown in fig. 3; or, whether the B service subsystem and the C service subsystem need to be expanded to the initial service subsystem set is determined by further considering the use frequency of the B service subsystem and the C service subsystem by the user in a certain period of time. 
For example, assuming that in the last month the user used the B service subsystem 5 times and the C service subsystem once, and that the preset usage threshold is 2, the B service subsystem is expanded into the initial candidate service subsystem set on this basis, so that the corrected service subsystem set contains the A service subsystem and the B service subsystem.
In the two optional implementations above for expanding the initial candidate service subsystem set, the user characteristic information includes: the fallback service subsystem preset by the user, and the user's usage frequency of the service subsystems.
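The expansion correction described above can be sketched as follows. This is a minimal illustrative sketch in Python; the function name, data structures, fallback choice, and frequency threshold are all hypothetical, not taken from the patent's actual implementation.

```python
# Illustrative sketch of expanding an under-sized initial candidate set.
# All names and data structures here are hypothetical.

def expand_candidates(candidates, fallback, usage_counts, groups, threshold=2):
    """Expand an initial candidate service subsystem set.

    candidates:   set of subsystem names matched against the interactive voice
    fallback:     subsystem the user preset as a fallback, or None
    usage_counts: {subsystem: times the user used it in a recent period}
    groups:       list of sets; each set holds subsystems offering one service
    threshold:    minimum recent usage for a same-group peer to be added
    """
    candidates = set(candidates)
    if not candidates:
        # Empty set: use the preset fallback, else the most-used subsystem.
        if fallback is not None:
            candidates.add(fallback)
        elif usage_counts:
            candidates.add(max(usage_counts, key=usage_counts.get))
    elif len(candidates) == 1:
        # Single match: add same-group peers that meet the usage threshold.
        member = next(iter(candidates))
        for group in groups:
            if member in group:
                for peer in group - candidates:
                    if usage_counts.get(peer, 0) >= threshold:
                        candidates.add(peer)
    return candidates
```

With the taxi-hailing example above (B used 5 times, C used once, threshold 2), `expand_candidates({"A"}, None, {"B": 5, "C": 1}, [{"A", "B", "C"}])` yields a set containing A and B.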
When the number of service subsystems in the initial candidate service subsystem set is greater than the preset value, the set is filtered and corrected to remove service subsystems that are likely unsuitable.
Optionally, if the number of service subsystems included in the initial candidate service subsystem set is greater than 1, the initial candidate service subsystem set may be filtered using any one, or a combination of two or more, of the following three manners:
Filtering out, according to the user position, the service subsystems in the initial candidate service subsystem set that do not support that position. For example, the initial candidate service subsystem set may include a KFC service subsystem, but if there is no KFC storefront near the user's current location, the KFC service subsystem should be filtered out.
Filtering out, according to the service subsystem subscription information of the user, the service subsystems in the initial candidate service subsystem set that the user has not subscribed to. For example, if the user has subscribed only to the A service subsystem and the B service subsystem provided by the voice interaction service system, and the initial candidate service subsystem set includes the C service subsystem, the C service subsystem is filtered out.
If a historical service subsystem set corresponding to the initial candidate service subsystem set exists, determining the user-preferred service subsystem for the initial candidate service subsystem set according to the user's historical selection operations on that historical set, and filtering out the service subsystems in the initial candidate service subsystem set other than the user-preferred one. Here, the historical service subsystem set is a set that has appeared in history and is identical to the initial candidate service subsystem set.
For example, if the initial candidate service subsystem set consists of the A service subsystem and the B service subsystem, the corresponding historical service subsystem set is likewise the set consisting of the A and B service subsystems; it may have appeared once or many times in history. The service subsystem the user selected from it on each occurrence can be counted, and, for example, the most frequently selected one is taken as the user-preferred service subsystem, that is, the subsystem the user prefers when faced with this choice. Assuming the A service subsystem was selected more often than the B service subsystem, the A service subsystem serves as the user-preferred service subsystem, so the B service subsystem can be filtered out of the initial candidate service subsystem set, as shown in fig. 4.
Here, "history" may refer to a set history period. Taking the highest number of selections as the criterion for the user-preferred service subsystem is only one option; for instance, a threshold may be set, and every service subsystem whose selection count exceeds the threshold may be treated as user-preferred.
In the process of filtering and correcting the initial candidate service subsystem set, the user characteristic information includes: the user position, the user's subscription status for the service subsystems, and the user's historical selection behavior over the service subsystems.
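As a sketch, the three filtering manners might be combined as below. This is hypothetical Python under the stated assumption that all three manners are applied together; as noted above, a real system may apply any one or a combination of them.

```python
# Illustrative sketch of filtering an over-sized candidate set.
# The data structures are hypothetical, not the patent's implementation.

def filter_candidates(candidates, supported_at_location, subscribed,
                      preference_history=None):
    """Keep only subsystems usable at the user's position and subscribed to;
    if the user has a recorded preference for the resulting set, keep only it.

    preference_history: {frozenset(candidate_set): preferred_subsystem},
        learned from the user's past selections over identical sets.
    """
    filtered = {s for s in candidates
                if s in supported_at_location and s in subscribed}
    if preference_history:
        preferred = preference_history.get(frozenset(filtered))
        if preferred in filtered:
            filtered = {preferred}
    return filtered
```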
103. If the corrected candidate service subsystem set includes only one service subsystem, responding to the interactive voice with that service subsystem.
104. If the corrected candidate service subsystem set includes at least two service subsystems, outputting a selection instruction to the user, and, in response to a selection operation triggered by the user on the corrected candidate service subsystem set according to the selection instruction, responding to the interactive voice with the selected service subsystem.
After the initially obtained candidate service subsystem set has been expanded, or filtered and corrected, according to the user characteristic information, the corrected candidate service subsystem set may contain either one service subsystem or more than one. When it contains only one service subsystem, that service subsystem directly responds to the interactive voice. When it contains more than one, to prevent multiple service subsystems from responding to the interactive voice redundantly, a selection instruction may be output to the user, for example as a query voice, so that the service subsystem the user selects from the corrected set responds to the interactive voice according to the user's selection operation. For example, when the corrected candidate service subsystem set includes the A service subsystem and the B service subsystem, the user is asked: "Please select the A service subsystem or the B service subsystem". If the user replies that the A service subsystem is selected, the A service subsystem responds to the user's interactive voice.
The selection instruction may be output in the form of an interface display or in the form of voice.
The process of responding to the interactive voice follows the processing logic of the service subsystem itself. For example, if the interactive voice asks what the weather is like today, the weather service subsystem responding to it can reply with the user's weather, temperature, air quality and other relevant information for the day. For another example, if the interactive voice is "I want to take a car", the car-hailing service subsystem responding to it may reply with "May I ask where you are now, and where you need to go", as shown in fig. 3.
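The dispatch between steps 103 and 104 can be pictured as a small sketch. This is hypothetical Python: `handlers` stands in for each subsystem's own processing logic, and `ask_user` stands in for the query-voice or interface interaction; neither name comes from the patent.

```python
def respond(candidates, handlers, ask_user, voice):
    """Respond with the single candidate, or ask the user to pick one.

    handlers: {subsystem: callable taking the voice text, returning a reply}
    ask_user: callable given the sorted options, returning the user's choice
    """
    if len(candidates) == 1:
        selected = next(iter(candidates))        # step 103: one candidate
    else:
        selected = ask_user(sorted(candidates))  # step 104: let the user pick
    return handlers[selected](voice)             # subsystem's own logic
```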
In summary, after the voice interaction service system supporting multiple service subsystems receives the interactive voice triggered by the user, a candidate service subsystem set may be determined from the multiple service subsystems according to the matching degrees between the interactive voice and each of them. The candidate service subsystem set is then corrected in combination with the user characteristic information, so that the service subsystems in the set better match the user's usage preferences, that is, better alternatives are obtained. This ensures that the user is served by a more targeted and reasonable service subsystem, providing a better voice interaction service.
Fig. 5 is a flowchart of a second embodiment of a processing method for a voice interaction service according to an embodiment of the present invention, and as shown in fig. 5, the processing method may include the following steps:
201. In response to the interactive voice triggered by the user, determining whether a description rule template corresponding to the interactive voice exists among the description rule templates corresponding to the multiple service subsystems; if so, executing step 202, and if not, executing step 203.
202. Determining that the candidate service subsystem set is formed by those service subsystems, among the multiple service subsystems, for which a description rule template corresponding to the interactive voice exists.
203. Inputting the interactive voice into a preset service subsystem classification model to obtain similarity scores between the interactive voice and the service subsystems respectively, or calculating similarity scores between the interactive voice and corpus samples corresponding to the service subsystems respectively; and determining that the candidate service subsystem set is formed by the service subsystems with the similarity scores larger than the preset score.
In this embodiment, a combined scheme is provided for the three ways of obtaining an initial candidate service subsystem set introduced in the embodiment shown in fig. 1: first, a candidate service subsystem set is obtained according to the description rule templates; if no candidate set containing at least one service subsystem is obtained in this way, that is, no description rule template corresponding to the interactive voice exists among the templates of the multiple service subsystems, the candidate set is then obtained by the preset service subsystem classification model or by calculating similarity. This ordering is used because a description rule template has clearer directionality than the classification model or the similarity calculation; in other words, its intention recognition for the user's interactive voice is more accurate.
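The template-first combination of steps 201-203 might look like the sketch below. This is hypothetical Python: regular expressions stand in for the description rule templates, and `similarity_fn` stands in for either the preset service subsystem classification model or the corpus-sample similarity calculation; the patent does not specify these forms.

```python
import re

def determine_candidates(voice_text, rule_templates, similarity_fn,
                         subsystems, score_threshold=0.5):
    """Try the description rule templates first; fall back to similarity.

    rule_templates: {subsystem: [regex patterns]} standing in for templates
    similarity_fn:  callable(voice_text, subsystem) -> score in [0, 1]
    """
    matched = {name for name, patterns in rule_templates.items()
               if any(re.search(p, voice_text) for p in patterns)}
    if matched:                          # steps 201-202: a template matched
        return matched
    return {name for name in subsystems  # step 203: score-based fallback
            if similarity_fn(voice_text, name) > score_threshold}
```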
204. And if the number of the service subsystems contained in the candidate service subsystem set is less than or equal to a preset value, performing expansion processing on the candidate service subsystem set according to the user characteristic information of the user.
205. And if the number of the service subsystems contained in the candidate service subsystem set is greater than a preset value, filtering the candidate service subsystem set according to the user characteristic information of the user.
206. If the current candidate service subsystem set includes only one service subsystem, responding to the interactive voice with that service subsystem.
207. If the current candidate service subsystem set includes at least two service subsystems, outputting a selection instruction to the user, and, in response to a selection operation triggered by the user according to the selection instruction, responding to the interactive voice with the selected service subsystem.
208. And recording the corresponding relation between the current candidate service subsystem set and the selected service subsystem.
If the candidate service subsystem set after expansion or filtering correction includes at least two service subsystems, then after the user selects the desired service subsystem from it, the correspondence between the corrected candidate service subsystem set and the selected service subsystem may be recorded. In this way, when an initial candidate service subsystem set identical to this corrected set appears again in the future, it can be filtered based on the recorded correspondence; refer specifically to the description of the third filtering manner in the foregoing embodiment, in which the corrected candidate service subsystem set here will serve as one of the historical service subsystem sets.
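Step 208's bookkeeping could be sketched as a small recorder (hypothetical Python, with illustrative names) that counts the user's selections per candidate set and later yields the user-preferred subsystem consumed by the third filtering manner:

```python
class PreferenceRecorder:
    """Record which subsystem the user picked from each corrected candidate
    set, so an identical future set can be narrowed to the preferred one."""

    def __init__(self):
        # frozenset(candidate set) -> {subsystem: times selected}
        self.counts = {}

    def record(self, candidates, selected):
        key = frozenset(candidates)
        per_set = self.counts.setdefault(key, {})
        per_set[selected] = per_set.get(selected, 0) + 1

    def preferred(self, candidates):
        per_set = self.counts.get(frozenset(candidates))
        if not per_set:
            return None  # no history for this exact candidate set
        return max(per_set, key=per_set.get)
```

Mirroring the example in the description: if the user picked A twice and B once from the set {A, B}, `preferred({"A", "B"})` returns A.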
A voice interaction service processing apparatus according to one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that the voice interaction service processing apparatus may be constructed from commercially available hardware components configured through the steps taught by the present solution.
Fig. 6 is a schematic structural diagram of a voice interaction service processing apparatus according to an embodiment of the present invention, and as shown in fig. 6, the apparatus includes: a determination module 11, a correction module 12 and a response processing module 13.
The determining module 11 is configured to determine, in response to an interactive voice triggered by a user, a candidate service subsystem set corresponding to the interactive voice from the multiple service subsystems according to matching degrees between the interactive voice and the multiple service subsystems, respectively.
And a correcting module 12, configured to perform correction processing on the candidate service subsystem set according to the user feature information of the user.
A response processing module 13, configured to respond to the interactive voice with the one service subsystem if the corrected candidate service subsystem set includes only one service subsystem.
Optionally, the response processing module 13 is further configured to: if the corrected candidate service subsystem set comprises at least two service subsystems, outputting a selection instruction to the user; responding to a selection operation triggered by the user on the corrected candidate service subsystem set according to the selection indication, and responding to the interactive voice by the selected service subsystem.
Optionally, the apparatus further comprises: a recording module 14.
And a recording module 14 for recording the corresponding relationship between the corrected candidate service subsystem set and the selected service subsystem.
Optionally, the modification module 12 is configured to: and if the number of the service subsystems contained in the candidate service subsystem set is less than or equal to a preset numerical value, performing expansion processing on the candidate service subsystem set according to the user characteristic information of the user.
Optionally, the modification module 12 is configured to: if the number of the service subsystems is zero, expand the fallback service subsystem preset by the user, or the service subsystem most frequently used by the user, into the candidate service subsystem set;
if the number of the service subsystems is smaller than or equal to the preset value and larger than zero, extending the service subsystems which belong to the same group with the service subsystems in the candidate service subsystem set to the candidate service subsystem set, or extending the service subsystems which belong to the same group with the service subsystems in the candidate service subsystem set and meet the requirement of use frequency to the candidate service subsystem set, wherein the preset value is larger than or equal to 1.
Optionally, the modification module 12 is configured to: and if the number of the service subsystems contained in the candidate service subsystem set is greater than a preset value, filtering the candidate service subsystem set according to the user characteristic information of the user.
Optionally, the modification module 12 is configured to: and filtering out the service subsystems which do not support the position in the candidate service subsystem set according to the position of the user.
Optionally, the modification module 12 is configured to: filter out, according to the service subsystem subscription information of the user, the service subsystems in the candidate service subsystem set that the user has not subscribed to.
Optionally, the modification module 12 is configured to: determining whether a historical set of service subsystems corresponding to the candidate set of service subsystems exists, wherein the historical set of service subsystems is a historical set of service subsystems which are the same as the candidate set of service subsystems; if yes, determining a user preference service subsystem corresponding to the candidate service subsystem set according to the selection operation of the user on the historical service subsystem set in the history; and filtering out the service subsystems in the candidate service subsystem set except the user preference service subsystem.
Optionally, the determining module 11 is configured to: inputting the interactive voice into a preset service subsystem classification model to obtain similarity scores between the interactive voice and the service subsystems respectively; or calculating similarity scores between the interactive voice and corpus samples corresponding to the multiple service subsystems respectively; and determining that the candidate service subsystem set is formed by the service subsystems with the similarity scores larger than the preset scores.
Optionally, the determining module 11 is configured to: determining whether a description rule template corresponding to the interactive voice exists in description rule templates corresponding to the service subsystems; if the description rule template corresponding to the interactive voice exists, determining that the candidate service subsystem set consists of service subsystems of which the description rule templates corresponding to the interactive voice exist in the service subsystems; if the description rule template corresponding to the interactive voice does not exist, inputting the interactive voice into a preset service subsystem classification model to obtain similarity scores between the interactive voice and the service subsystems respectively; or calculating similarity scores between the interactive voice and the corpus samples corresponding to the service subsystems respectively; and determining that the candidate service subsystem set is formed by the service subsystems with the similarity scores larger than the preset scores.
The apparatus shown in fig. 6 can perform the method of the embodiments shown in fig. 1 and fig. 5, and reference may be made to the related description of the embodiments shown in fig. 1 and fig. 5 for parts of this embodiment that are not described in detail. The implementation process and technical effect of the technical solution refer to the descriptions in the embodiments shown in fig. 1 and fig. 5, which are not described herein again.
Having described the internal functions and structure of the voice interaction service processing apparatus, in one possible design, the structure of the voice interaction service processing apparatus may be implemented as an electronic device, such as a terminal device, as shown in fig. 7, which may include: a processor 21 and a memory 22. Wherein, the memory 22 is used for storing a program supporting the voice interaction service processing apparatus to execute the voice interaction service processing method provided in the embodiment shown in fig. 1 and fig. 5, and the processor 21 is configured to execute the program stored in the memory 22.
The program comprises one or more computer instructions which, when executed by the processor 21, are capable of performing the steps of:
responding to interactive voice triggered by a user, and determining a candidate service subsystem set corresponding to the interactive voice from a plurality of service subsystems according to the matching degree between the interactive voice and the service subsystems;
according to the user characteristic information of the user, correcting the candidate service subsystem set;
and if the corrected candidate service subsystem set only comprises one service subsystem, responding the interactive voice by the service subsystem.
Optionally, the processor 21 is further configured to perform all or part of the steps in the foregoing embodiments shown in fig. 1 and fig. 5.
The structure of the voice interaction service processing apparatus may further include a communication interface 23, which is used for the voice interaction service processing apparatus to communicate with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions for a voice interaction service processing apparatus, including a program for executing the voice interaction service processing method in the method embodiments shown in fig. 1 and fig. 5.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by means of a necessary general hardware platform plus software, or of course by a combination of hardware and software. Based on this understanding, the above technical solutions, in essence or in the part contributing over the prior art, may be embodied in the form of a computer program product, which may be carried on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (13)
1. A method for processing voice interaction services, comprising:
responding to interactive voice triggered by a user, and determining a candidate service subsystem set corresponding to the interactive voice from a plurality of service subsystems according to the matching degree between the interactive voice and the service subsystems;
according to the user characteristic information of the user, correcting the candidate service subsystem set;
if the corrected candidate service subsystem set includes only one service subsystem, responding to the interactive voice by the service subsystem;
wherein, the modifying the candidate service subsystem set according to the user characteristic information of the user includes:
if the number of the service subsystems is zero, expanding the fallback service subsystem preset by the user or the service subsystem most frequently used by the user into the candidate service subsystem set;
if the number of the service subsystems is smaller than or equal to a preset value and larger than zero, extending the service subsystems which belong to the same group with the service subsystems in the candidate service subsystem set to the candidate service subsystem set, or extending the service subsystems which belong to the same group with the service subsystems in the candidate service subsystem set and meet the requirement of use frequency to the candidate service subsystem set, wherein the preset value is larger than or equal to 1.
2. The method of claim 1, further comprising:
if the corrected candidate service subsystem set comprises at least two service subsystems, outputting a selection instruction to the user;
responding to a selection operation triggered by the user on the corrected candidate service subsystem set according to the selection indication, and responding to the interactive voice by the selected service subsystem.
3. The method of claim 2, further comprising:
and recording the corresponding relation between the corrected candidate service subsystem set and the selected service subsystem.
4. The method of claim 1, wherein the modifying the set of candidate service subsystems according to the user characteristic information of the user comprises:
and if the number of the service subsystems contained in the candidate service subsystem set is greater than a preset value, filtering the candidate service subsystem set according to the user characteristic information of the user.
5. The method of claim 4, wherein the filtering the candidate service subsystem set according to the user feature information of the user comprises:
and filtering out the service subsystems which do not support the position in the candidate service subsystem set according to the position of the user.
6. The method of claim 4, wherein the filtering the candidate service subsystem set according to the user feature information of the user comprises:
and filtering out, according to the service subsystem subscription information of the user, the service subsystems in the candidate service subsystem set that the user has not subscribed to.
7. The method according to claim 4, wherein the filtering the candidate service subsystem set according to the user feature information of the user comprises:
determining whether a historical set of service subsystems corresponding to the candidate set of service subsystems exists, wherein the historical set of service subsystems is a historical set of service subsystems which are the same as the candidate set of service subsystems;
if yes, determining a user preference service subsystem corresponding to the candidate service subsystem set according to the selection operation of the user on the historical service subsystem set in the history;
and filtering out the service subsystems in the candidate service subsystem set except the user preference service subsystem.
8. The method according to any one of claims 1 to 7, wherein the determining a set of candidate service subsystems corresponding to the interactive voice from a plurality of service subsystems according to matching degrees between the interactive voice and the service subsystems comprises:
inputting the interactive voice into a preset service subsystem classification model to obtain similarity scores between the interactive voice and the service subsystems respectively; or calculating similarity scores between the interactive voice and the corpus samples corresponding to the service subsystems respectively;
and determining that the candidate service subsystem set is formed by the service subsystems with the similarity scores larger than the preset scores.
9. The method according to any one of claims 1 to 7, wherein the determining a set of candidate service subsystems corresponding to the interactive voice from a plurality of service subsystems according to matching degrees between the interactive voice and the service subsystems comprises:
determining whether a description rule template corresponding to the interactive voice exists in description rule templates corresponding to the service subsystems;
and if the description rule template corresponding to the interactive voice exists, determining that the candidate service subsystem set consists of the service subsystems of which the description rule templates corresponding to the interactive voice exist in the plurality of service subsystems.
10. The method of claim 9, further comprising:
if the description rule template corresponding to the interactive voice does not exist, inputting the interactive voice into a preset service subsystem classification model to obtain similarity scores between the interactive voice and the service subsystems respectively; or calculating similarity scores between the interactive voice and the corpus samples corresponding to the service subsystems respectively;
and determining that the candidate service subsystem set is formed by the service subsystems with the similarity scores larger than the preset scores.
11. A voice interaction service processing apparatus, comprising:
a determining module, configured to determine, in response to interactive voice triggered by a user, a candidate service subsystem set corresponding to the interactive voice from a plurality of service subsystems according to the matching degree between the interactive voice and each of the service subsystems;
a correction module, configured to correct the candidate service subsystem set according to user characteristic information of the user;
a response processing module, configured to respond to the interactive voice with a service subsystem if the corrected candidate service subsystem set includes only that one service subsystem;
wherein the correction module is specifically configured to: if the number of service subsystems in the candidate set is zero, add the user's preset fallback service subsystem, or the service subsystem the user uses most frequently, to the candidate service subsystem set; and if the number of service subsystems is greater than zero and less than or equal to a preset value, add to the candidate service subsystem set the service subsystems that belong to the same group as those already in the set, or those same-group service subsystems that also meet a usage-frequency requirement, the preset value being greater than or equal to 1.
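The correction module's expansion logic can be sketched as below; the function and parameter names (`group_of`, `usage_count`, `fallback`, `preset_value`) are assumptions for illustration, not terms defined by the patent:

```python
def correct_candidates(candidates, subsystems, group_of, usage_count,
                       fallback=None, preset_value=1, min_usage=0):
    """Expand an empty or small candidate set, per the correction module.

    Empty set: add the user's preset fallback subsystem or, failing
    that, the subsystem the user invokes most often. Small set
    (0 < size <= preset_value): add subsystems from the same group as
    existing candidates, subject to a usage-frequency requirement.
    """
    result = list(candidates)
    if not result:
        # Fall back to the configured subsystem or the most-used one.
        most_used = max(subsystems, key=lambda s: usage_count.get(s, 0))
        result.append(fallback if fallback is not None else most_used)
    elif len(result) <= preset_value:
        # Expand with same-group subsystems meeting the usage threshold.
        groups = {group_of[s] for s in result}
        for s in subsystems:
            if (s not in result and group_of.get(s) in groups
                    and usage_count.get(s, 0) >= min_usage):
                result.append(s)
    return result
```

With `min_usage=0` this reproduces the unfiltered same-group expansion; a positive `min_usage` gives the frequency-filtered variant the claim also recites.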
12. An electronic device comprising a memory and a processor; wherein,
the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the voice interaction service processing method of any of claims 1 to 10.
13. A computer-readable storage medium storing a computer program, wherein the computer program, when executed, causes a computer to perform the voice interaction service processing method according to any one of claims 1 to 10.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810134247.0A CN110136701B (en) | 2018-02-09 | 2018-02-09 | Voice interaction service processing method, device and equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN110136701A CN110136701A (en) | 2019-08-16 |
| CN110136701B true CN110136701B (en) | 2023-03-31 |
Family
ID=67567956
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810134247.0A Active CN110136701B (en) | 2018-02-09 | 2018-02-09 | Voice interaction service processing method, device and equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN110136701B (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113282264A (en) * | 2020-02-20 | 2021-08-20 | 阿里巴巴集团控股有限公司 | Data processing method and device, intelligent equipment and computer storage medium |
| CN111737577A (en) * | 2020-06-22 | 2020-10-02 | 平安医疗健康管理股份有限公司 | Data query method, device, equipment and medium based on service platform |
| CN115079882B (en) * | 2022-06-16 | 2024-04-05 | 广州国威文化科技有限公司 | Human-computer interaction processing method and system based on virtual reality |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103645876A (en) * | 2013-12-06 | 2014-03-19 | 百度在线网络技术(北京)有限公司 | Voice inputting method and device |
| CN105068661A (en) * | 2015-09-07 | 2015-11-18 | 百度在线网络技术(北京)有限公司 | Man-machine interaction method and system based on artificial intelligence |
| CN105224278A (en) * | 2015-08-21 | 2016-01-06 | 百度在线网络技术(北京)有限公司 | Interactive voice service processing method and device |
| CN106486120A (en) * | 2016-10-21 | 2017-03-08 | 上海智臻智能网络科技股份有限公司 | Interactive voice response method and answering system |
| CN107092609A (en) * | 2016-05-10 | 2017-08-25 | 口碑控股有限公司 | A kind of information-pushing method and device |
| CN107316643A (en) * | 2017-07-04 | 2017-11-03 | 科大讯飞股份有限公司 | Voice interactive method and device |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20060143007A1 (en) * | 2000-07-24 | 2006-06-29 | Koh V E | User interaction with voice information services |
| US10282218B2 (en) * | 2016-06-07 | 2019-05-07 | Google Llc | Nondeterministic task initiation by a personal assistant module |
- 2018-02-09: Application CN201810134247.0A filed in China; granted as patent CN110136701B, status active
Also Published As
| Publication number | Publication date |
|---|---|
| CN110136701A (en) | 2019-08-16 |
Similar Documents
| Publication | Title |
|---|---|
| US20230306052A1 | Method and system for entity extraction and disambiguation |
| US10540666B2 | Method and system for updating an intent space and estimating intent based on an intent space |
| CN108121737B | Method, device and system for generating business object attribute identifier |
| US9582547B2 | Generalized graph, rule, and spatial structure based recommendation engine |
| US20140172415A1 | Apparatus, system, and method of providing sentiment analysis result based on text |
| US8793260B2 | Related pivoted search queries |
| CN110502738A | Chinese named entity recognition method, device, equipment and query system |
| CA3059929C | Text searching method, apparatus, and non-transitory computer-readable storage medium |
| WO2019056661A1 | Search term pushing method and device, and terminal |
| CN107092609B | Information push method and device |
| CN111475714A | Information recommendation method, device, equipment and medium |
| CN109582847B | Information processing method and device, and storage medium |
| CN109241451B | Content combination recommendation method and device and readable storage medium |
| CN110136701B | Voice interaction service processing method, device and equipment |
| CN113010640B | Service execution method and device |
| CN112307199A | Information identification method, data processing method, device and equipment, information interaction method |
| CN111104536A | Picture searching method, device, terminal and storage medium |
| KR20140015653A | Contents recommendation system and contents recommendation method |
| CN116932735A | Text comparison method, device, medium and equipment |
| CN106708871A | Method and device for identifying social service characteristic users |
| CN113627509B | Data classification method, device, computer equipment and computer-readable storage medium |
| US20120271844A1 | Providing relevant information for a term in a user message |
| CN114691990A | Query option recommendation method, device, server, storage medium and product |
| CN116468096B | Model training method, device, equipment and readable storage medium |
| CN116542737A | Big data processing method and system for a cross-border e-commerce platform |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||