US20180090147A1 - Apparatus and methods for dynamically changing a language model based on recognized text - Google Patents
Apparatus and methods for dynamically changing a language model based on recognized text
- Publication number
- US20180090147A1 (U.S. patent application Ser. No. 15/805,456)
- Authority
- US
- United States
- Prior art keywords
- text
- audio
- interim
- language model
- trigger
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/183—Speech classification or search using natural language modelling using context dependencies, e.g. language models
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/183—Speech classification or search using natural language modelling using context dependencies, e.g. language models
- G10L15/19—Grammatical context, e.g. disambiguation of the recognition hypotheses based on word sequence rules
- G10L15/197—Probabilistic grammars, e.g. word n-grams
Definitions
- the technology of the present application relates generally to speech recognition systems, and more particularly, to apparatuses and methods to allow for dynamically changing application resources, such as a language model, while using speech recognition to generate text.
- Speech (or voice) recognition and speech (or voice) to text engines such as are available from Microsoft, Inc., are becoming ubiquitous for the generation of text from user audio or audio from text.
- the text may be used to generate word documents, such as, for example, this patent application, or populate fields in a user interface and/or database, such as an Electronic Health Record or a Customer Relationship Management Database, or the like.
- the speech recognition systems are machine specific.
- the machine includes the language model, speech recognition engine, and user profile for the user (or users) of the machine.
- These conventional speech recognition engines may be considered thick or fat clients where a bulk of the processing is accomplished on the local machines.
- the system is locked to a single user and a single language model.
- the audio file of the user is streamed or batched to a remote processor from a local device.
- the local device may be a workstation, conventional telephone, voice over internet protocol telephone (VoIP), cellular telephone, smartphone, handheld device, or the like.
- the remote processor performs the conversion (speech to text or text to speech) and returns the converted file to the user.
- a user at a desktop computer may produce an audio file that is sent to a speech to text device that returns a Word document to the desktop.
- a user on a mobile device may transmit a text message to a text to speech device that returns an audio file that is played through the speakers on the mobile device.
- the returned file (audio or text) may be stored for later retrieval, similar to a batch system, or sent to a user account, such as, e-mail or the like.
- the method, apparatus, and system provides data from a client workstation regarding a first speech application and a first set of speech resources being used by the first speech application, such as, for example, a user name and account.
- Audio, whether a streamed audio or a batch audio, is received from the client workstation and converted to text by the speech recognition engine using the first set of speech resources, which includes a first language model.
- a text recognizer compares the text to a database of triggers, which triggers may include words, clauses, or phrases.
- the text recognizer, on textually recognizing the trigger, sends a command to the speech recognition engine to dynamically replace the first set of speech resources, which may include a language model, with the second set of speech resources, which may include a second language model, and to convert the audio to text using the second set of speech resources.
- the speech resources relate to dictation resources for a natural language processor.
- the speech resources may include a plurality of language models.
- the speech resources may include shortcuts and inserts for use by the system to make transcriptions.
- the apparatus may pause (or cache) the audio when the text recognizer recognizes a trigger.
- the speech to text engine will begin using a second language model based on the trigger. Once the second language model is loaded, the apparatus will resume feeding the audio to the speech recognition engine.
- the apparatus will both pause the audio and repoint the audio to the first utterance after the trigger, using a tag or index in the audio that corresponds to the text string. This effectively re-winds the audio to the point where the language model should have been switched.
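- To make the control flow above concrete, the following sketch illustrates one way the pause, language-model swap, and repoint/rewind steps could fit together. This is a minimal illustration only, not the patent's implementation: the class and function names, the per-segment `transcribe` call, and the trigger table are hypothetical stand-ins for the speech recognition engine, the text recognizer, and the trigger database described above.

```python
# Minimal sketch (assumed names, not a vendor API) of the dynamic language-model
# switch: convert marked audio segments to interim text, watch for triggers, and
# when a trigger is found, pause, load the linked language model, and resume
# from the first utterance after the trigger.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AudioSegment:
    mark: int        # audio mark: index of this utterance in the audio stream
    samples: bytes   # raw audio for one utterance

class SpeechRecognizer:
    """Stand-in for the natural language / continuous speech recognizer."""

    def __init__(self, language_model: str, user_profile: str):
        self.language_model = language_model
        self.user_profile = user_profile

    def transcribe(self, segment: AudioSegment) -> str:
        # A real engine decodes audio here; this placeholder only shows control flow.
        return f"<text for mark {segment.mark} via {self.language_model}>"

def transcribe_with_dynamic_models(audio: List[AudioSegment],
                                   recognizer: SpeechRecognizer,
                                   trigger_to_model: Dict[str, str]) -> str:
    """Each trigger is linked to one language model; a model may have many triggers."""
    recognized: List[str] = []
    i = 0
    while i < len(audio):
        interim = recognizer.transcribe(audio[i])           # interim text
        trigger = next((t for t in trigger_to_model
                        if t.lower() in interim.lower()), None)
        if trigger and trigger_to_model[trigger] != recognizer.language_model:
            # Pause the audio feed and swap in the linked (second) language model.
            recognizer.language_model = trigger_to_model[trigger]
            # Repoint to the first utterance after the trigger; any interim text
            # produced for the trigger under the old model is simply not kept.
            i += 1
            continue
        recognized.append(interim)                          # recognized text
        i += 1
    return " ".join(recognized)
```

Because the sketch works one marked utterance at a time, the "rewind" is trivial: the segment after the trigger is the next one fed to the recognizer under the new language model.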
- FIG. 1 is a functional block diagram of a distributed speech recognition system consistent with the technology of the present application.
- FIG. 2 is a functional block diagram of a cloud computing network consistent with the distributed speech recognition system of FIG. 1.
- FIG. 3 is a functional block diagram of a computing device consistent with the technology of the present application.
- FIG. 4 is a functional block diagram of an apparatus consistent with the technology of the present application.
- FIG. 5 is a diagram of a graphical user interface usable with the technology of the present application.
- FIG. 6 is a functional block diagram of a workstation of FIG. 1 consistent with the technology of the present application.
- speech recognition systems may be considered isolated applications of a speech system (whether a thick or thin application).
- when a user invokes or launches a speech recognition application, the system loads or accesses the language model and user profile associated with the unique user identification or with that deployment of the speech recognition software, hardware, or combination thereof.
- as speech recognition becomes ubiquitous, however, individuals may have multiple uses for the speech recognition. The uses may be related, but typically they will differ.
- a natural language speech recognition engine may not require a user profile if the language model is sufficiently correlated to the particular audio or speech predicted.
- a language model is tied to a user profile, and the language model cannot be updated as the user moves to different tasks.
- an electronic health record currently provides a user with a single language model for dictation/transcription services.
- certain fields of the electronic health record may require a generic language application, such that the patient can describe symptoms, while other fields may require a specific medical application for specific disorders or the like, such as metabolic or neurologic disorders.
- the speech recognition engine would function more efficiently (e.g., with generally better accuracy) if the language model could be updated for the various specific applications as the doctor or healthcare provider moves through the electronic health record.
- the different tasks or fields associated with the user will generally require a new set of resources.
- the new set of resources will include a change of a language model, but may include other functionality such as, for example, new shortcuts, a new (or at least different) user profile, and the like (generically referred to as resources).
- the user must close out of an existing operation and reopen the speech recognition application using different information, such as a different user profile identification, to allow access to different resources and functionality.
- Continually shutting down and reopening an application is tedious and time consuming. Additionally, the accuracy increase gained by changing language models typically is outweighed by the time lost in the process.
- the technology of the present application, therefore, provides a distributed speech recognition system that allows a user or administrator to manage resources dynamically and seamlessly. Additionally, the technology of the present application provides a mechanism to allow a user to navigate between resources using voice commands. In certain applications, the speech recognition system may identify a resource and load appropriate resources in lieu of being commanded to do so.
- distributed speech recognition system 100 may provide transcription of dictation in real-time or near real-time allowing for delays associated with transmission time, processing, and the like. Of course, delay could be built into the system to allow, for example, a user the ability to select either real-time or batch transcription services.
- distributed speech recognition system 100 includes one or more client stations 102 (dictation clients 1 -n) that are connected to a dictation manager 104 by a first network connection 106 .
- dictation manager 104 may be generically referred to as a resource manager.
- First network connection 106 can be any number of protocols to allow transmission of data or audio information, such as, for example, using a standard internet protocol.
- the first network connection 106 may be associated with a “Cloud” based network.
- a Cloud based network or Cloud computing is generally the delivery of computing, processing, or the like by resources connected by a network.
- the network is an internet based network but could be any public or private network.
- the resources may include, for example, both applications and data.
- a conventional cloud computing system will be further explained herein below with reference to FIG. 2 .
- client station 102 receives audio for transcription from a user via a microphone 108 or the like.
- microphone 108 may be integrated into client station 102 , such as, for example, a cellular phone, tablet computer, or the like. Also, while shown as a monitor with input/output interfaces or a computer station, client station 102 may be a wireless device, such as a WiFi enabled computer, a cellular telephone, a PDA, a smart phone, or the like.
- Dictation manager 104 is connected to one or more dictation services hosted by dictation servers 110 (dictation servers 1 -n) by a second network connection 112 .
- dictation servers 110 are provided in this exemplary distributed speech recognition system 100 , but resource servers may alternatively be provided to provide access to functionality other than speech recognition, which includes both speech to text services and text to speech services in some aspects.
- Second network connection 112 may be the same as first network connection 106 , which may be a cloud computing system also.
- Dictation manager 104 and dictation server(s) 110 may be a single integrated unit connected by a bus, such as a PCI or PCI express protocol.
- Each dictation server 110 incorporates or accesses a natural language or continuous speech recognition engine as is generally understood in the art.
- the dictation manager 104 receives an audio file for transcription from a client station 102 .
- Dictation manager 104 selects an appropriate dictation server 110 , using conventional load balancing or the like, and transmits the audio file to the dictation server 110 .
- the dictation server 110 would have a processor that uses the appropriate algorithms to transcribe the speech using a natural language or continuous speech to text processor.
- the dictation manager 104 uploads a user profile to the dictation server 110 and the processing algorithms include an appropriate language model.
- the user profile modifies the speech to text processor for the user's particular dialect, speech patterns, or the like based on conventional training techniques.
- the language model is tailored for the expected language.
- a data or text file created from the audio is returned to the client station 102 once transcribed by the dictation server 110 .
- the data or text file may be created as the data or text is processed from the audio such that speaking “I am dictating a patent application” will display on a monitor of the speaker's workstation as each word is converted to text.
- the transcription or data file may be saved for retrieval by the user at a convenient time and place.
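- The dictation manager's role just described (receive audio, pick a dictation server, supply the user profile and language model, return the text) can be sketched as follows. The balancing rule, the dictionary-based server records, and the `transcribe` call are assumptions for illustration only; the patent does not prescribe a particular load-balancing algorithm or API.

```python
# Illustrative-only sketch of the dictation manager flow; all names are assumed.

from typing import Dict, List

def select_dictation_server(servers: List[dict]) -> dict:
    """Conventional load balancing: here, simply the least-loaded server."""
    return min(servers, key=lambda s: s["active_jobs"])

def handle_dictation(audio_file: bytes, user_id: str,
                     servers: List[dict],
                     profiles: Dict[str, str],
                     language_models: Dict[str, str]) -> str:
    server = select_dictation_server(servers)
    server["active_jobs"] += 1
    try:
        # Upload the user profile and an appropriate (initial) language model.
        profile = profiles[user_id]
        model = language_models[user_id]
        # In a distributed system this call would cross the second network
        # connection 112; here the server's engine is a local placeholder object.
        text = server["engine"].transcribe(audio_file, profile, model)
    finally:
        server["active_jobs"] -= 1
    return text  # returned to client station 102 or stored for later retrieval
```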
- the dictation server 110 conventionally would be loaded with a single language profile for use with the identified user profile or client account to convert the audio from the user to text.
- a single language model for a speech recognition engine may not be sufficiently robust.
- the technology of the present application provides the speech recognition engine with access to a plurality of language models.
- the plurality of language models may be referred to as a resource or a set of resources.
- Different language models may be distinguished by, for example, indicating a first language model or resource and a second language model or resource.
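- As a concrete illustration of a "set of resources," the sketch below bundles a language model, a user profile, and shortcuts into one object. The field layout and example values are assumptions, not a prescribed format.

```python
# Assumed layout for one "set of resources" as the term is used above.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ResourceSet:
    language_model: str                      # e.g. a specialty-specific model
    user_profile: str                        # speaker-specific adaptation data
    shortcuts: Dict[str, str] = field(default_factory=dict)  # spoken shortcut -> insert text

# A first and a second resource set may differ only in the language model loaded.
first_resources = ResourceSet("general-medicine", "provider_profile")
second_resources = ResourceSet(
    "cardiovascular-surgery", "provider_profile",
    shortcuts={"normal exam": "Heart: regular rate and rhythm."},
)
```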
- cloud computing system 200 is arranged and configured to deliver computing and processing as a service of resources shared over a network.
- Clients access the Cloud using a network browser, such as, for example, Internet Explorer® from Microsoft, Inc. for internet based cloud systems.
- the network browser may be available on a processor, such as a desktop computer 202 , a laptop computer 204 or other mobile processor such as a smart phone 206 , a tablet 208 , or more robust devices such as servers 210 , or the like.
- the cloud may provide a number of different computing or processing services including infrastructure services 212 , platform services 214 , and software services 216 .
- Infrastructure services 212 may include physical or virtual machines, storage devices, and network connections.
- Platform services 214 may include computing platforms, operating systems, application execution environments, databases, and the like.
- Software services 216 may include applications accessible through the cloud such as speech-to-text engines and text-to-speech engines and the like.
- client station 102 (which may be referred to as a dictation station, client dictation station, or the like) is shown in more detail.
- the client station 102 may include a laptop computer, a desktop computer, a server, a mobile computing device, a handheld computer, a PDA, a cellular telephone, a smart phone, a tablet or the like.
- the client station 102 includes a processor 302 , such as a microprocessor, chipsets, field programmable gate array logic, or the like, that controls the major functions of the client station 102 , such as, for example, obtaining a user profile with respect to a user of client station 102 or the like.
- Processor 302 also processes various inputs and/or data that may be required to operate the client station 102 .
- the client station 102 also includes a memory 304 that is interconnected with processor 302 .
- Memory 304 may be remotely located or co-located with processor 302 .
- the memory 304 stores processing instructions to be executed by processor 302 .
- the memory 304 also may store data necessary or convenient for operation of the distributed speech recognition system 100 .
- memory 304 may store the audio file for the client so that the audio file may be processed later.
- a portion of memory 304 may include user profiles 305 associated with user(s) workstation 102 .
- the memory 304 also may include the plurality of language models that may need to be accessed for the user during the conversion of the user audio to text, which language models and user profiles may be associated with a specific user as identified below.
- the user(s) may have multiple language models and user profiles depending on the tasks the user is performing.
- the user profiles 305 and the plurality of language models also may be stored in a memory associated with dictation manager 104 or dictation servers 110 in a distributed system. In this fashion, the user profiles and language models may be uploaded to the processor that requires the plurality of resources for a particular functionality. Also, this would be convenient for systems where the users may change workstations 102 .
- the user profiles 305 and the plurality of language models may be associated with individual users by a pass code, user identification number, biometric information or the like and are usable by dictation servers 110 to facilitate the speech transcription engine in converting the audio to text. Associating users and user profiles using a database or relational memory is not further explained except in the context of the present application as linking fields in a database is generally understood in the art.
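- One simple way to realize that association is a keyed lookup. The key choice (user ID plus pass code) and the stored values below are illustrative assumptions only; a biometric key would work the same way.

```python
# Assumed, minimal association of users with profiles and language models.
from typing import Dict, Tuple

user_resources: Dict[Tuple[str, str], Dict[str, str]] = {
    # (user_id, pass_code) -> resources usable by the dictation servers
    ("provider_01", "1234"): {
        "user_profile": "provider_01_profile",
        "initial_language_model": "general-medicine",
    },
}

def lookup_resources(user_id: str, pass_code: str) -> Dict[str, str]:
    return user_resources[(user_id, pass_code)]
```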
- Memory 304 may be any conventional media and may include either or both volatile or nonvolatile memory.
- the client station 102 generally includes a user interface 306 that is interconnected with processor 302 . Such user interface 306 could include speakers, microphones, visual display screens, physical input devices such as a keyboard, mouse or touch screen, track wheels, cams, optical pens, special input buttons, etc.
- the interface 306 may include a graphical user interface.
- the client stations 102 have a network interface 308 (as would the dictation manager and the dictation server of this exemplary embodiment) to allow transmissions and reception of data (text, audio, or the like).
- Dictation manager 104 and dictation servers 110 may have structure similar to the client station 102 described herein.
- the various components necessary for a speech recognition system may be incorporated into a single client station 102 .
- the dictation manager may be optional, or the functionality of the dictation manager may be incorporated into the processor, as the dictation server and the speech to text/text to speech components are the components associated with the invoked application.
- the dictation server 110 will include a natural language speech recognizer 402 , such as is available from Microsoft, Inc., International Business Machines, Inc., or the like.
- the natural language speech recognizer 402 may be referred to as a continuous speech recognizer, and the terms natural language speech recognizer (or engine) and continuous speech recognizer (or engine) are used interchangeably herein.
- the speech recognizer 402 receives audio 404 as an input.
- the natural language recognizer 402 is loaded with a user profile and an initial language model when a user accesses the speech recognizer 402 to process the audio 404 .
- the initial language model (or any loaded language model) may be referred to as the first language model as will be clear from the below.
- the first language model may be loaded based on the initial logon of a user to the distributed speech recognition system 100 . Even more generically, the language model and user profile may be considered as resources necessary for the speech recognizer 402 to function.
- the speech recognizer 402 uses the user profile and the language model to process the audio 404 and output interim text 406 .
- the audio 404 as processed by the speech recognizer may be indexed with marks 403 and the interim text 406 may be indexed with tags 407.
- the marks 403 and tags 407 are correlated such that words spoken in the audio and the words transcribed in the text may be matched, ideally in a word for word manner although different word intervals or time stamps may be used to name but two alternative correlating methods. For example, pauses between utterances indicative of one clause to the next may be used to mark an audio segment.
- the marks 403 and tags 407 may be associated with endpointing metadata generated by the speech recognizer 402 as it processes the audio 404 and outputs the interim text 406 .
- the audio marks 403 and the text tags 407 are generated by the speech recognizer taking a large audio file and splitting the large audio file into a plurality of small audio files.
- Each of the plurality of small audio files is transcribed by the speech recognizer into a corresponding small text file (which is a one to one correspondence).
- Each of the small audio files and corresponding small text files may be called a text and audio pair.
- the text at this stage is generally true text or verbatim text.
- the plurality of small text files are normalized and concatenated into a final text file in most cases.
- the plurality of small audio files and the plurality of small text files may be stored in a memory such as memory 405 along with the audio marks 403 and the text tags 407 .
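- The text-and-audio pairing described in the last few bullets can be sketched as follows. The splitting into small audio files is assumed to have already happened (for example, on pauses between utterances), and the engine object and field names are placeholders rather than the patent's implementation.

```python
# Sketch of text/audio pairs: each small audio file is transcribed into a
# corresponding small text file, indexed by an audio mark and a text tag.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TextAudioPair:
    audio_mark: int      # mark for the small audio file
    text_tag: int        # tag for the corresponding small text file
    audio: bytes
    verbatim_text: str   # true/verbatim text prior to normalization

def split_and_pair(small_audio_files: List[bytes], engine) -> List[TextAudioPair]:
    pairs = []
    for index, chunk in enumerate(small_audio_files):
        text = engine.transcribe(chunk)        # one-to-one correspondence
        pairs.append(TextAudioPair(audio_mark=index, text_tag=index,
                                   audio=chunk, verbatim_text=text))
    return pairs

def finalize(pairs: List[TextAudioPair], normalize: Callable[[str], str]) -> str:
    # The small text files are normalized and concatenated into the final text.
    return " ".join(normalize(p.verbatim_text) for p in pairs)
```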
- the interim text 406 is received as an input by a text recognizer 408 .
- the text recognizer 408 includes a memory or has access to a memory, such as memory 405 , associated with the dictation server 110 containing keys or triggers, which may be words, phrases, or clauses.
- Each of the one or more triggers is linked to a language model, or more generically a resource for operation of the application. While each trigger should be linked to a single language model, any particular language model may be linked to multiple triggers.
- the text recognizer determines whether any of the interim text 406 is a trigger by using conventional text recognition techniques, which include, for example, pattern matching.
- if the text recognizer 408 determines that the interim text 406 does not include a trigger, the text recognizer outputs the interim text as recognized text 410.
- the recognized text 410 may be stored, used by a subsequent process, or transmitted back to the user. As mentioned above, the recognized text 410 is eventually normalized from true text.
- if the text recognizer 408 determines that the interim text 406 does include a trigger, the text recognizer (or an associated processor) sends a command 412 to the speech recognizer 402.
- the command 412 causes the speech recognizer 402 to pause the recognition of audio 404 .
- the command 412 further causes the speech recognizer 402 (or an associated processor) to fetch the language model to which the trigger is linked and load, invoke, or activate the identified language model.
- the speech recognizer 402 continues transcribing audio 404 to interim text 406 until the text recognizer 408 identifies the next trigger.
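- The trigger database described above might be as simple as a table mapping each trigger to the single language model it is linked to, with plain pattern matching over the interim text. The trigger words and model names below are invented for illustration.

```python
# Assumed trigger table: each trigger links to exactly one language model,
# while one language model may be reachable through several triggers.

import re
from typing import Optional, Tuple

TRIGGER_TO_MODEL = {
    "location":          "anatomy-locations",
    "site":              "anatomy-locations",   # two triggers, one model
    "duration":          "time-expressions",
    "modifying factors": "symptom-modifiers",
}

def find_trigger(interim_text: str) -> Optional[Tuple[str, str]]:
    """Return (trigger, linked language model) if the interim text contains a trigger."""
    for trigger, model in TRIGGER_TO_MODEL.items():
        if re.search(r"\b" + re.escape(trigger) + r"\b", interim_text, re.IGNORECASE):
            return trigger, model
    return None
```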
- the audio 404 may not contain any triggers in which case the loaded resources are used for the remaining or entire transcription.
- the text recognizer 408 recognizes a trigger subsequent to the speech recognizer generating the interim text 406.
- a text tag 407 is identified, which text tag 407 may be associated with endpointing metadata.
- the text tag 407 is the next word or utterance subsequent to the end of the trigger utterance or the end of the trigger itself.
- the beginning of the trigger may be a component of the final text product as well.
- the associated or correlated audio marker 403 is identified and the audio from that point is re-input to the speech recognizer 402 for conversion to interim text 406 using the identified language model, which language model may be referred to as the second language model, the subsequent language model, or the new language model.
- the text recognizer 408 may be inhibited from acting on any particular trigger two times (2×) in succession. For example, if the text recognizer identified “trigger A”, “trigger A”, “trigger B”, the recognized text 410 would be TRIGGER A as the second input of interim text 406 of trigger A would not initiate a language model switch.
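- A sketch of that inhibition rule: the same trigger seen twice in succession is passed through as ordinary text rather than causing a second switch. The helper below is an assumption about how such a guard could be written, not the patent's code.

```python
# Assumed guard implementing the repeat-trigger inhibition described above.
from typing import Optional

def should_switch(trigger: Optional[str], last_trigger: Optional[str]) -> bool:
    """Switch only when a trigger is present and differs from the previous one."""
    return trigger is not None and trigger != last_trigger

# "trigger a", "trigger a", "trigger b": the second "trigger a" is not a switch,
# so it would remain in the recognized text as TRIGGER A.
last = None
for spoken in ["trigger a", "trigger a", "trigger b"]:
    if should_switch(spoken, last):
        last = spoken      # perform the language model switch here
    else:
        pass               # emit the utterance as recognized text instead
```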
- interim text 406 that may have been generated prior to the pausing of the audio is deleted or overwritten.
- interim text 406 subsequent to the text tag 407 is deleted prior to the audio being restarted for processing using the second language model. Deleting may simply mean overwriting or the like.
- FIG. 5 only shows part of an exemplary graphical user interface (GUI) 500 for entry of information at a client station 102 .
- the GUI 500 displayed on a monitor or screen is divided at about the centerline into a structured data entry portion 502 and an unstructured data entry portion 504.
- the GUI 500 may have, for example, a plurality of data entry windows 506 .
- the data entry windows may be labeled for identification.
- data entry window 506 L is for the location of the condition or symptom.
- data entry window 506 Q may be labeled Quality
- data entry window 506 D may be labeled Duration
- data entry window 506 MF may be labeled Modifying Factors.
- the back office speech recognition system may load a language model tailored to the type of language expected to be entered in the various data entry windows 506 .
- the back office speech recognition system may load, invoke or activate a specific language model.
- the healthcare provider would place the cursor in the data entry window 508 .
- the back office speech recognition system would be provided with a general or initial language model (and other resources).
- the initial language model may be, for example, the language model associated with the Location data entry window 506 L as the Location information is expected to be the initial dictation.
- the initial language model may be a more generic language model for the case where the healthcare provider neglects to use the triggers as defined above. As the healthcare provider dictates, the provider would enunciate the trigger for the item to be input.
- the healthcare provider would enunciate the location of the symptom, such as, by for example, stating: “LOCATION chest cavity” and the back office speech recognition system would, as explained above, first convert the audio to interim text.
- the text recognizer would next recognize the trigger word LOCATION and pause the audio to either (1) confirm the LOCATION language model and resources are operating or (2) load the LOCATION language model and resources to the speech recognizer. If (1) confirmed, the process continues with the loaded language model. If (2) loaded, the process continues subsequent to the replacement of the previous language model with the subsequent language model.
- the system would next convert the audio of chest cavity to interim text and recognized text as chest cavity is not a trigger.
- the provider would next say, for example, “DURATION one minute and fifty six seconds.”
- the speech recognizer would generate the interim text that the text recognizer would recognize as a trigger causing the audio to pause while the speech recognizer switched from the LOCATION language model to the DURATION language model. Once switched, the system would generate text of 1 minute and 56 seconds (normalized). Notice the normalization may occur as part of generating the interim text 406 or as part of generating the recognized text 410 .
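- Putting the GUI example together, the short walk-through below shows which language model would be active as each phrase is converted to recognized text. The trigger names and model labels are assumptions, and the number normalization step ("one minute and fifty six seconds" to "1 minute and 56 seconds") is omitted.

```python
# Walk-through of the LOCATION / DURATION dictation example (assumed names).

TRIGGERS = {"location": "LOCATION-model", "duration": "DURATION-model"}

def run_dictation(utterances, initial_model="general-model"):
    active = initial_model
    output = []
    for utterance in utterances:
        words = utterance.split()
        if words and words[0].lower() in TRIGGERS:
            wanted = TRIGGERS[words[0].lower()]
            if wanted != active:
                active = wanted        # case (2): pause, load, then resume
            # case (1): model already loaded, simply continue
            body = " ".join(words[1:])
        else:
            body = utterance           # not a trigger; ordinary dictation
        output.append((body, active))
    return output

print(run_dictation(["LOCATION chest cavity",
                     "DURATION one minute and fifty six seconds"]))
# [('chest cavity', 'LOCATION-model'),
#  ('one minute and fifty six seconds', 'DURATION-model')]
```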
- the trigger may be included in the transcribed text and in others the trigger may not be included.
- the phonetics associated with the user may change as well as other of the resources used for dictation.
- the user may in some instance need to switch languages.
- a lawyer, for example, may have a dictation system for obtaining incoming information about a new client.
- the intake may initially be designed for American English (in the United States of America), but the lawyer may have an opportunity to represent a client who speaks only Spanish.
- the intake may use a trigger such as Espanol or Spanish, which may cause a change in the user profile, the language model, and the phonetics associated with the speech recognition.
- tailoring the resources closely to the speech recognition tends to increase the accuracy of the recognizer and decrease the recognizer's dependency on the user profile.
- One added benefit is the ability to use relatively less expensive speech recognition engines in areas that may otherwise require a very expensive speech recognition engine to process a language model applicable over an entire field. This is particularly relevant in the medical, engineering, accounting, or scientific fields, as a recognition engine able to promptly process a language model designed to cover broad swaths of the language used in these very precise, complex, and technical fields would be cost prohibitive in many applications.
- the technology, while usable with other speech recognition engines, is particularly suitable for trigram recognition engines.
- the technology of the present application relates to changing commands and responses by the processor as well.
- while the above examples relate to dictation/transcription, where an acoustic model maps sounds into phonemes, a lexicon maps the phonemes to words, and a language model turns the words into sentences (with the associated grammar models such as syntax, capitalization, tense, etc.), other resources may be used.
- the system may allow for inserts of “boiler plate” or common phrases. The inserts may require an audio trigger, a keystroke trigger, or a command entry to trigger the boiler plate insertion into the document.
- aspects may provide for a navigation tool where a trigger is associated with a uniform resource locator (URL), which URLs could be associated with a private or public network. Still other aspects may provide for other scripts, macros, application execution, or the like by pairing the commands with trigger audio, keystrokes, or commands similar to the above.
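- Those non-dictation uses of triggers (boilerplate inserts, navigation to a URL, scripts or macros) fit the same pattern. The registry sketch below is illustrative only; the trigger phrases, URL, and actions are invented placeholders rather than anything defined in the patent.

```python
# Assumed trigger-to-action registry for inserts, navigation, and macros.
import webbrowser
from typing import List

ACTION_REGISTRY = {
    "insert normal exam": ("insert", "Heart: regular rate and rhythm. Lungs: clear."),
    "open chart":         ("navigate", "https://ehr.example.internal/chart"),
    "sign note":          ("macro", lambda: print("running sign-note macro")),
}

def dispatch(trigger: str, document: List[str]) -> None:
    kind, payload = ACTION_REGISTRY[trigger]
    if kind == "insert":
        document.append(payload)     # boilerplate insertion into the document
    elif kind == "navigate":
        webbrowser.open(payload)     # URL on a private or public network
    elif kind == "macro":
        payload()                    # run the paired script or macro
```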
- in FIG. 6, a functional block diagram of a typical client station 102, dictation manager 104, or dictation server 110 is shown generically as computer 800.
- the computer 800 is shown as a single, contained unit, such as, for example, a desktop, laptop, handheld, or mobile processor, but computer 800 may comprise portions that are remote and connectable via network connection such as via a LAN, a WAN, a WLAN, a WiFi Network, Internet, or the like.
- computer 800 includes a processor 802 (such as a central processing unit, a field programmable gate array, a chipset, or the like), a system memory 804 , and a system bus 806 .
- System bus 806 couples the various system components and allows data and control signals to be exchanged between the components.
- System bus 806 could operate on any number of conventional bus protocols.
- System memory 804 generally comprises both a random access memory (RAM) 808 and a read only memory (ROM) 810 .
- ROM 810 generally stores a basic operating information system such as a basic input/output system (BIOS) 812 .
- RAM 808 often contains the basic operating system (OS) 814 , application software 816 and 818 , and data 820 .
- System memory 804 contains the code for executing the functions and processing the data as described herein to allow the technology of the present application to function as described.
- Computer 800 generally includes one or more of a hard disk drive 822 (which also includes flash drives, thumb drives, zip sticks, solid state drives, etc., as well as other volatile and non-volatile memory configurations), a magnetic disk drive 824 , or an optical disk drive 826 .
- the drives also may include flash drives and other portable devices with memory capability.
- the drives are connected to the bus 806 via a hard disk drive interface 828 , a magnetic disk drive interface 830 and an optical disk drive interface 832 , etc.
- Application modules and data may be stored on a disk, such as, for example, a hard disk installed in the hard disk drive (not shown).
- the computer 800 has network connection 834 to connect to a local area network (LAN), a wireless network, an Ethernet, the Internet, or the like, as well as one or more serial port interfaces 836 to connect to peripherals, such as a mouse, keyboard, modem, or printer.
- Computer 800 also may have USB ports or wireless components, not shown.
- Computer 800 typically has a display or monitor 838 connected to bus 806 through an appropriate interface, such as a video adapter 840 .
- Monitor 838 may be used as an input mechanism using a touch screen, a light pen, or the like.
- the methods, functions, and logical blocks described herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or other programmable logic device.
- a general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
- a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Machine Translation (AREA)
Abstract
The technology of the present application provides a method and apparatus to manage speech resources. The method includes using a text recognizer to detect a change in a speech application that requires the use of different resources. On detection of the change, the method loads the different resources without the user needing to exit the currently executing speech application.
Description
- The present patent application is a continuation of U.S. Non-Provisional Application Ser. No. 14/638,619, filed on Mar. 4, 2015 (now U.S. Pat. No. 9,812,130), which claims priority to U.S. Provisional Patent Application Ser. No. 61/951,400, filed Mar. 11, 2014, the disclosure of which is incorporated herein by reference as if set out in full.
- The present application is related to U.S. patent application Ser. No. 13/495,406, titled Apparatus and methods for managing resources for a system using voice recognition, filed Jun. 13, 2012, the disclosure of which is incorporated herein by reference as if set out in full.
- The technology of the present application relates generally to speech recognition systems, and more particularly, to apparatuses and methods to allow for dynamically changing application resources, such as a language model, while using speech recognition to generate text.
- Speech (or voice) recognition and speech (or voice) to text engines such as are available from Microsoft, Inc., are becoming ubiquitous for the generation of text from user audio or audio from text. The text may be used to generate word documents, such as, for example, this patent application, or populate fields in a user interface and/or database, such as an Electronic Health Record or a Customer Relationship Management Database, or the like. Conventionally, the speech recognition systems are machine specific. The machine includes the language model, speech recognition engine, and user profile for the user (or users) of the machine. These conventional speech recognition engines may be considered thick or fat clients where a bulk of the processing is accomplished on the local machines. Generally, once actively engaged with a speech recognition system, the system is locked to a single user and a single language model.
- More recently, companies such as nVoq Incorporated located in Boulder, Colorado have developed technology to provide a distributed speech recognition system using the Cloud. In these cases, the audio file of the user is streamed or batched to a remote processor from a local device. The local device may be a workstation, conventional telephone, voice over internet protocol telephone (VoIP), cellular telephone, smartphone, handheld device, or the like. The remote processor performs the conversion (speech to text or text to speech) and returns the converted file to the user. For example, a user at a desktop computer may produce an audio file that is sent to a speech to text device that returns a Word document to the desktop. In another example, a user on a mobile device may transmit a text message to a text to speech device that returns an audio file that is played through the speakers on the mobile device. In some embodiments, the returned file (audio or text) may be stored for later retrieval, similar to a batch system, or sent to a user account, such as, e-mail or the like.
- As speech recognition becomes more commonplace and robust, clients will use speech recognition in multiple settings, such as, for example, job related tasks, personal tasks, or the like. As can be appreciated, the language models used for the various tasks may be different. Even in a job setting, the language model for various tasks may vary drastically. For example, a client may transcribe documents for medical specialties such as cardiovascular surgery and metabolic disorders. The language model, shortcuts, and user profiles for the vastly different, but related, transcriptions require the client to have different language models to effectively use speech recognition. Conventionally, to have access to different language models, a client would need a completely separate account and identification. To change accounts, the client would need to close out of the first account and logon to the second account, which is tedious and time consuming. Moreover, commands to change language models are difficult to convey in conventional computing systems as speech recognition engines have a difficult time distinguishing between dictation audio and command audio.
- Thus, against this background, it is desirable to develop improved apparatuses and methods for dynamically changing application resources, and specifically language models, for speech recognition engines.
- To attain the advantages, and in accordance with the purpose of the technology of the present application, methods and apparatus to allow speech applications to load speech resources specific to the application without the need for a client to terminate an existing logon are provided. In particular, the method, apparatus, and system provides data from a client workstation regarding a first speech application and a first set of speech resources being used by the first speech application, such as, for example, a user name and account. Audio, whether a streamed audio or a batch audio, is received from the client workstation and converted to text by the speech recognition engine using the first set of speech resources, which includes a first language model. A text recognizer compares the text to a database of triggers, which triggers may include words, clauses, or phrases. The text recognizer, on textually recognizing the trigger, sends a command to the speech recognition engine to dynamically replace the first set of speech resources, which may include a language model, with the second set of speech resources, which may include a second language model, and to convert the audio to text using the second set of speech resources.
- In certain aspects, the speech resources relate to dictation resources for a natural language processor. In particular, the speech resources may include a plurality of language models. In other aspects, the speech resources may include shortcuts and inserts for use by the system to make transcriptions.
- In other aspects, the apparatus may pause (or cache) the audio when the text recognizer recognizes a trigger. The speech to text engine will begin using a second language model based on the trigger. Once the second language model is loaded, the apparatus will resume feeding the audio to the speech recognition engine. In other aspects, the apparatus will both pause the audio and repoint the audio to the first utterance after the trigger, using a tag or index in the audio that corresponds to the text string. This effectively re-winds the audio to the point where the language model should have been switched.
- The foregoing and other features, utilities and advantages of the invention will be apparent from the following more particular description of a preferred embodiment of the invention as illustrated in the accompanying drawings.
- Various examples of the technology of the present application will be discussed with reference to the appended drawings. These drawings depict only illustrative examples of the technology and are not to be considered limiting of its scope, which is defined by the claims.
- FIG. 1 is a functional block diagram of a distributed speech recognition system consistent with the technology of the present application;
- FIG. 2 is a functional block diagram of a cloud computing network consistent with the distributed speech recognition system of FIG. 1;
- FIG. 3 is a functional block diagram of a computing device consistent with the technology of the present application;
- FIG. 4 is a functional block diagram of an apparatus consistent with the technology of the present application;
- FIG. 5 is a diagram of a graphical user interface usable with the technology of the present application; and
- FIG. 6 is a functional block diagram of a workstation of FIG. 1 consistent with the technology of the present application.
- The technology of the present application will now be explained with reference to the figures. While the technology of the present application is described with relation to a speech recognition system using natural language or continuous speech recognition, one of ordinary skill in the art will recognize on reading the disclosure that other configurations are possible. Moreover, the technology of the present application will be described with reference to particular discrete processors, modules, or parts, but one of ordinary skill in the art will recognize on reading the disclosure that processors may be integrated into a single processor or server or separated into multiple processors or servers. Moreover, the technology of the present application will be described generically and portions of the present application may be loaded onto a particular user's workstation (fat or thick client) or hosted by a server that is accessed by the workstation (thin client). Additionally, the technology of the present application is described with regard to certain exemplary embodiments. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments. All embodiments described herein should be considered exemplary unless otherwise stated.
- Conventionally, speech recognition systems may be considered isolated applications of a speech system (whether a thick or thin application). In other words, when a user invokes or launches a speech recognition application, the system loads or accesses the language model and user profile associated with the unique user identification or with that deployment of the speech recognition software, hardware, or combination thereof. As speech recognition becomes ubiquitous, however, individuals may have multiple uses for the speech recognition. The uses may be related, but typically they will differ.
- It has been found, however, that the more tailored a language model is to the relevant speech, the more robust the recognition engine. In certain instances, for example, a natural language speech recognition engine may not require a user profile if the language model is sufficiently correlated to the particular audio or speech predicted. Conventionally, a language model is tied to a user profile, and the language model cannot be updated as the user moves to different tasks. Thus, for example, an electronic health record currently provides a user with a single language model for dictation/transcription services. However, certain fields of the electronic health record may require a generic language application, such that the patient can describe symptoms, while other fields may require a specific medical application for specific disorders or the like, such as metabolic or neurologic disorders. The speech recognition engine would function more efficiently (e.g., with generally better accuracy) if the language model could be updated for the various specific applications as the doctor or healthcare provider moves through the electronic health record.
- The different tasks or fields associated with the user will generally require a new set of resources. Most specifically, the new set of resources will include a change of a language model, but may include other functionality such as, for example, new shortcuts, a new (or at least different) user profile, and the like (generically referred to as resources). Under current models, to obtain such new resources and functionality, the user must close out of an existing operation and reopen the speech recognition application using different information, such as a different user profile identification, to allow access to different resources and functionality. Continually shutting down and reopening an application is tedious and time consuming. Additionally, the accuracy increase gained by changing language models typically is outweighed by the time lost in the process.
- The technology of the present application, therefore, provides a distributed speech recognition system that allows a user or administrator to manage resources dynamically and seamlessly. Additionally, the technology of the present application provides a mechanism to allow a user to navigate between resources using voice commands. In certain applications, the speech recognition system may identify a resource and load appropriate resources in lieu of being commanded to do so.
- Now with reference to
FIG. 1 , a distributedspeech recognition system 100 is shown. Distributedspeech recognition system 100 may provide transcription of dictation in real-time or near real-time allowing for delays associated with transmission time, processing, and the like. Of course, delay could be built into the system to allow, for example, a user the ability to select either real-time or batch transcription services. In this exemplary embodiment, distributedspeech recognition system 100 includes one or more client stations 102 (dictation clients 1-n) that are connected to adictation manager 104 by a first network connection 106. For non-speech recognition resources,dictation manager 104 may be generically referred to as a resource manager. First network connection 106 can be any number of protocols to allow transmission of data or audio information, such as, for example, using a standard internet protocol. In certain exemplary embodiments, the first network connection 106 may be associated with a “Cloud” based network. As used herein, a Cloud based network or Cloud computing is generally the delivery of computing, processing, or the like by resources connected by a network. Typically, the network is an internet based network but could be any public or private network. The resources may include, for example, both applications and data. A conventional cloud computing system will be further explained herein below with reference toFIG. 2 . With reference back toFIG. 1 ,client station 102 receives audio for transcription from a user via amicrophone 108 or the like. While shown as a separate part,microphone 108 may be integrated intoclient station 102, such as, for example, a cellular phone, tablet computer, or the like. Also, while shown as a monitor with input/output interfaces or a computer station,client station 102 may be a wireless device, such as a WiFi enabled computer, a cellular telephone, a PDA, a smart phone, or the like. -
Dictation manager 104 is connected to one or more dictation services hosted by dictation servers 110 (dictation servers 1-n) by asecond network connection 112. Similarly to the above,dictation servers 110 are provided in this exemplary distributedspeech recognition system 100, but resource servers may alternatively be provided to provide access to functionality other than speech recognition, which includes both speech to text services and text to speech services in some aspects.Second network connection 112 may be the same as first network connection 106, which may be a cloud computing system also.Dictation manager 104 and dictation server(s) 110 may be a single integrated unit connected by a bus, such as a PCI or PCI express protocol. Eachdictation server 110 incorporates or accesses a natural language or continuous speech recognition engine as is generally understood in the art. In operation, thedictation manager 104 receives an audio file for transcription from aclient station 102.Dictation manager 104 selects anappropriate dictation server 110, using conventional load balancing or the like, and transmits the audio file to thedictation server 110. Thedictation server 110 would have a processor that uses the appropriate algorithms to transcribe the speech using a natural language or continuous speech to text processor. In most instances, thedictation manager 104 uploads a user profile to thedictation server 110 and the processing algorithms include an appropriate language model. The user profile, as explained above, modifies the speech to text processer for the user's particular dialect, speech patterns, or the like based on conventional training techniques. The language model is tailored for the expected language. A data or text file created from the audio is returned to theclient station 102 once transcribed by thedictation server 110. In certain instances, the data or text file may be created as the data or text is processed from the audio such that speaking “I am dictating a patent application” will display on a monitor of the speaker's workstation as each word is converted to text. Alternatively, the transcription or data file may be saved for retrieval by the user at a convenient time and place. - As mentioned above, the
dictation server 110 conventionally would be loaded with a single language profile for use with the identified user profile or client account to convert the audio from the user to text. As recognized by the present application, a single language model for a speech recognition engine may not be sufficiently robust. Thus, the technology of the present application provides the speech recognition engine with access to a plurality of language models. For ease of reference, the plurality of language models may be referred to as a resource or a set of resources. Different language models may be distinguished by, for example, indicating a first language model or resource and a second language model or resource. - Referring now to
FIG. 2 , the basic configuration of acloud computing system 200 will be explained for completeness as the technology of the present application may be used in a cloud computing environment. Cloud computing is generally understood in the art, and the description that follows is for furtherance of the technology of the present application. As provided above,cloud computing system 200 is arranged and configured to deliver computing and processing as a service of resources shared over a network. Clients access the Cloud using a network browser, such as, for example, Internet Explorer® from Microsoft, Inc. for internet based cloud systems. The network browser may be available on a processor, such as adesktop computer 202, alaptop computer 204 or other mobile processor such as asmart phone 206, atablet 208, or more robust devices such asservers 210, or the like. As shown, the cloud may provide a number of different computing or processing services includinginfrastructure services 212,platform services 214, andsoftware services 216.Infrastructure services 212 may include physical or virtual machines, storage devices, and network connections.Platform services 214 may include computing platforms, operating systems, application execution environments, databases, and the like.Software services 216 may include applications accessible through the cloud such as speech-to-text engines and text-to-speech engines and the like. - Referring to
FIG. 3 , client station 102 (which may be referred to as a dictation station, client dictation station, or the like) is shown in more detail. As mentioned above, theclient station 102 may include a laptop computer, a desktop computer, a server, a mobile computing device, a handheld computer, a PDA, a cellular telephone, a smart phone, a tablet or the like. Theclient station 102 includes aprocessor 302, such as a microprocessor, chipsets, field programmable gate array logic, or the like, that controls the major functions of theclient station 102, such as, for example, obtaining a user profile with respect to a user ofclient station 102 or the like.Processor 302 also processes various inputs and/or data that may be required to operate theclient station 102. Theclient station 102 also includes amemory 304 that is interconnected withprocessor 302.Memory 304 may be remotely located or co-located withprocessor 302. Thememory 304 stores processing instructions to be executed byprocessor 302. Thememory 304 also may store data necessary or convenient for operation of the distributedspeech recognition system 100. For example,memory 304 may store the audio file for the client so that the audio file may be processed later. A portion ofmemory 304 may include user profiles 305 associated with user(s)workstation 102. Thememory 304 also may include the plurality of language models that may be need to be accessed for the user during the conversion of the user audio to text, which language models and user profiles may be associated with a specific user as identified below. The user(s) may have multiple language models and user profiles depending on the tasks the user is performing. The user profiles 305 and the plurality of language models also may be stored in a memory associated withdictation manager 104 ordictation servers 110 in a distributed system. In this fashion, the user profiles and language models may be uploaded to the processor that requires the plurality of resources for a particular functionality. Also, this would be convenient for systems where the users may changeworkstations 102. - The user profiles 305 and the plurality of language models may be associated with individual users by a pass code, user identification number, biometric information or the like and is usable by
dictation servers 110 to facilitate the speech transcription engine in converting the audio to text. Associating users and user profiles using a database or relational memory is not further explained except in the context of the present application, as linking fields in a database is generally understood in the art. Memory 304 may be any conventional media and may include either or both volatile or nonvolatile memory. The client station 102 generally includes a user interface 306 that is interconnected with processor 302. Such user interface 306 could include speakers, microphones, visual display screens, physical input devices such as a keyboard, mouse or touch screen, track wheels, cams, optical pens, special input buttons, etc. to allow a user to interact with the client station 102. The interface 306 may include a graphical user interface. The client stations 102 have a network interface 308 (as would the dictation manager and the dictation server of this exemplary embodiment) to allow transmissions and reception of data (text, audio, or the like). Dictation manager 104 and dictation servers 110 may have structure similar to the client station 102 described herein. - Additionally, while the various components are explained above with reference to a cloud, the various components necessary for a speech recognition system may be incorporated into a
single client station 102. When incorporated into a single client station 102, the dictation manager may be optional, or the functionality of the dictation manager may be incorporated into the processor, as the dictation server and the speech-to-text/text-to-speech components are the components associated with the invoked application. - As shown in
FIG. 4, in certain aspects of the present technology, the dictation server 110 will include a natural language speech recognizer 402, such as is available from Microsoft, Inc., International Business Machines, Inc., or the like. The natural language speech recognizer 402 may be referred to as a continuous speech recognizer, and the terms natural language speech recognizer (or engine) and continuous speech recognizer (or engine) are used interchangeably herein. The speech recognizer 402 receives audio 404 as an input. The natural language recognizer 402 is loaded with a user profile and an initial language model when a user accesses the speech recognizer 402 to process the audio 404. The initial language model (or any loaded language model) may be referred to as the first language model as will be clear from the below. The first language model may be loaded based on the initial logon of a user to the distributed speech recognition system 100. Even more generically, the language model and user profile may be considered as resources necessary for the speech recognizer 402 to function. - The
speech recognizer 402 uses the user profile and the language model to process the audio 404 and output interim text 406. The audio 404 as processed by the speech recognizer may be indexed with marks 403 and the interim text 406 may be indexed with tags 407. The marks 403 and tags 407 are correlated such that words spoken in the audio and the words transcribed in the text may be matched, ideally in a word-for-word manner, although different word intervals or time stamps may be used, to name but two alternative correlating methods. For example, pauses between utterances indicative of one clause to the next may be used to mark an audio segment. The marks 403 and tags 407 may be associated with endpointing metadata generated by the speech recognizer 402 as it processes the audio 404 and outputs the interim text 406. - Generally, the audio marks 403 and the text tags 407 are generated by the speech recognizer taking a large audio file and splitting the large audio file into a plurality of small audio files. Each of the plurality of small audio files is transcribed by the speech recognizer into a corresponding small text file (which is a one-to-one correspondence). Each of the small audio files and corresponding small text files may be called a text and audio pair. The text at this stage is generally true text or verbatim text. The plurality of small text files are normalized and concatenated into a final text file in most cases. The plurality of small audio files and the plurality of small text files may be stored in a memory, such as memory 405, along with the audio marks 403 and the text tags 407.
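- For illustration only, the text and audio pairing described above might be represented as in the following sketch; the AudioTextPair structure, its field names, and the timing values are assumptions introduced here, not details taken from the application.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class AudioTextPair:
    """One small audio file and its verbatim (true text) transcription."""
    mark: float   # audio mark: offset of this segment in the large audio file, in seconds
    audio: bytes  # the small audio file
    tag: int      # text tag: index of the first word of this segment in the full text
    text: str     # interim (verbatim) text for this segment


def concatenate(pairs: List[AudioTextPair]) -> str:
    """Concatenate the small interim text files into the final text, in audio order."""
    return " ".join(p.text for p in sorted(pairs, key=lambda p: p.mark))


# Two utterances split at a pause, each kept with its correlated mark and tag.
pairs = [
    AudioTextPair(mark=0.0, audio=b"...", tag=0, text="location chest cavity"),
    AudioTextPair(mark=2.4, audio=b"...", tag=3, text="duration one minute"),
]
print(concatenate(pairs))  # -> "location chest cavity duration one minute"
```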
- The interim text 406 is received as an input by a text recognizer 408. The text recognizer 408 includes a memory or has access to a memory, such as memory 405, associated with the dictation server 110 containing keys or triggers, which may be words, phrases, or clauses. Each of the one or more triggers is linked to a language model, or more generically a resource for operation of the application. While each trigger should be linked to a single language model, any particular language model may be linked to multiple triggers. As the interim text 406 is input to the text recognizer 408, the text recognizer determines whether any of the interim text 406 is a trigger by using conventional text recognition techniques, which include, for example, pattern matching.
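- A minimal sketch of such a trigger table and of a pattern-matching check of the interim text follows; the trigger names, the language model labels, and the find_trigger helper are all assumptions made for illustration.

```python
from typing import Optional

# Illustrative trigger table: each trigger is linked to exactly one language model,
# although a single language model may be linked to several triggers.
TRIGGER_TO_LANGUAGE_MODEL = {
    "location": "location_lm",
    "quality": "quality_lm",
    "duration": "duration_lm",
    "modifying factors": "modifying_factors_lm",
}


def find_trigger(interim_text: str) -> Optional[str]:
    """Return the first trigger found in the interim text, or None.

    Conventional pattern matching: a case-insensitive substring search,
    checking longer triggers first so multi-word triggers win.
    """
    lowered = interim_text.lower()
    for trigger in sorted(TRIGGER_TO_LANGUAGE_MODEL, key=len, reverse=True):
        if trigger in lowered:
            return trigger
    return None


print(find_trigger("duration one minute and fifty six seconds"))  # -> "duration"
print(find_trigger("chest cavity"))                               # -> None
```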
- When the text recognizer 408 determines that the interim text 406 does not include a trigger, the text recognizer outputs the interim text as recognized text 410. The recognized text 410 may be stored, used by a subsequent process, or transmitted back to the user. As mentioned above, the recognized text 410 is eventually normalized from true text. - When the
text recognizer 408 determines that the interim text 406 does include a trigger, the text recognizer (or an associated processor) sends a command 412 to the speech recognizer 402. The command 412 causes the speech recognizer 402 to pause the recognition of audio 404. The command 412 further causes the speech recognizer 402 (or an associated processor) to fetch the language model to which the trigger is linked and load, invoke, or activate the identified language model. Once the identified language model is loaded, invoked, or active, the speech recognizer 402 continues transcribing audio 404 to interim text 406 until the text recognizer 408 identifies the next trigger. Of course, the audio 404 may not contain any triggers, in which case the loaded resources are used for the remaining or entire transcription.
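- The hand-off between the text recognizer and the speech recognizer might look like the following sketch; the SpeechRecognizer interface with pause, load_language_model, and resume methods is an invented stand-in used only to make the pause/load/resume flow concrete.

```python
from typing import Dict, Optional


class SpeechRecognizer:
    """Hypothetical recognizer interface; pause/load_language_model/resume are assumed."""

    def __init__(self, language_model: str) -> None:
        self.language_model = language_model
        self.paused = False

    def pause(self) -> None:
        self.paused = True              # stop consuming the incoming audio

    def load_language_model(self, model: str) -> None:
        self.language_model = model     # fetch and invoke the model linked to the trigger

    def resume(self) -> None:
        self.paused = False             # continue transcribing audio to interim text


def on_interim_text(recognizer: SpeechRecognizer,
                    interim_text: str,
                    trigger_to_lm: Dict[str, str]) -> Optional[str]:
    """Text-recognizer side: emit recognized text, or command a language model switch."""
    lowered = interim_text.lower()
    trigger = next((t for t in trigger_to_lm if t in lowered), None)
    if trigger is None:
        return interim_text             # no trigger: output as recognized text
    recognizer.pause()                  # the "command": pause recognition of the audio
    recognizer.load_language_model(trigger_to_lm[trigger])
    recognizer.resume()                 # resume with the replacement model
    return None                         # nothing emitted; the audio after the trigger is re-processed


recognizer = SpeechRecognizer("general_lm")
print(on_interim_text(recognizer, "LOCATION chest cavity", {"location": "location_lm"}))  # -> None
print(recognizer.language_model)        # -> location_lm
```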
- As can be appreciated, the text recognizer 408 recognizes a trigger subsequent to the speech recognizer generating the interim text 406. Thus, when the trigger is recognized from the interim text, a text tag 407 is identified, which text tag 407 may be associated with endpointing metadata. In other words, the text tag 407 is the next word or utterance subsequent to the end of the trigger utterance or the end of the trigger itself. In certain applications, the beginning of the trigger may be a component of the final text product as well. The associated or correlated audio mark 403 is identified and the audio from that point is re-input to the speech recognizer 402 for conversion to interim text 406 using the identified language model, which language model may be referred to as the second language model, the subsequent language model, or the new language model. If the trigger is to be part of the recognized text 410, the text recognizer 408 may be inhibited from acting on any particular trigger that appears two times (2×) in succession. For example, if the text recognizer identified "trigger A", "trigger A", "trigger B", the recognized text 410 would be TRIGGER A, as the second occurrence of trigger A in the interim text 406 would not initiate a language model switch.
- Similarly, the interim text 406 that may have been generated prior to the pausing of the audio is deleted or overwritten. Thus, the interim text 406 subsequent to the text tag 407 is deleted prior to the audio being restarted for processing using the second language model. Deleting may simply mean overwriting or the like.
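- Putting the rewind together, the sketch below (all names, and the word-level correlation of marks to words, are assumptions) finds the text tag just after the trigger, deletes the interim text after that tag, returns the correlated audio mark from which the audio is re-input under the second language model, and suppresses a switch when the same trigger arrives two times in succession.

```python
from typing import List, Optional, Tuple


def rewind_point(words: List[str], marks: List[float], trigger: str) -> Tuple[int, float]:
    """Return (text tag, audio mark) for the word just after the trigger.

    `words` is the interim text split into words; `marks` holds the correlated
    audio offsets, one per word (a word-level correlation is assumed here).
    """
    t = trigger.lower().split()
    for i in range(len(words) - len(t) + 1):
        if [w.lower() for w in words[i:i + len(t)]] == t:
            tag = i + len(t)                      # first word after the trigger
            return tag, marks[min(tag, len(marks) - 1)]
    raise ValueError("trigger not present in interim text")


def apply_trigger(words: List[str], marks: List[float], trigger: str,
                  last_trigger: Optional[str]) -> Tuple[Optional[float], List[str]]:
    """Delete interim text after the tag and decide whether to rewind and switch."""
    if trigger == last_trigger:
        # Same trigger two times in succession: no switch; keep it as recognized text.
        return None, words
    tag, audio_mark = rewind_point(words, marks, trigger)
    kept = words[:tag]                            # interim text after the tag is deleted
    return audio_mark, kept                       # audio is re-input from audio_mark


words = "location chest cavity".split()
marks = [0.0, 0.6, 1.1]                           # one assumed audio mark per word, in seconds
print(apply_trigger(words, marks, "location", last_trigger=None))
# -> (0.6, ['location']): restart the audio at 0.6 s with the second language model
```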
- With reference to FIG. 5, the above technology will be described with reference to an exemplary electronic health record. FIG. 5 only shows part of an exemplary graphical user interface (GUI) 500 for entry of information at a client station 102. For purposes of the present example, the GUI 500 displayed on a monitor or screen is divided at about the centerline into a structured data entry portion 502 and an unstructured data entry portion 504. In the structured data entry portion 502, the GUI 500 may have, for example, a plurality of data entry windows 506. The data entry windows may be labeled for identification. For example, data entry window 506L is for the location of the condition or symptom. Similarly, data entry window 506Q may be labeled Quality, data entry window 506D may be labeled Duration, and data entry window 506MF may be labeled Modifying Factors. For increased accuracy and efficiency, the back office speech recognition system may load a language model tailored to the type of language expected to be entered in the various data entry windows 506. Thus, as the healthcare provider, or other user, moves a cursor from one active window (e.g., a data entry window) to another, the back office speech recognition system may load, invoke, or activate a specific language model. U.S. patent application Ser. No. 13/495,406, titled Apparatus and Methods for Managing Resources for a System Using Voice Recognition, filed Jun. 13, 2012, and incorporated herein by reference as if set out in full, describes changing language models as the user positions a cursor. - With reference to the
unstructured portion 504 of the GUI 500, however, the healthcare provider would place the cursor in the data entry window 508. The back office speech recognition system would be provided with a general or initial language model (and other resources). The initial language model may be, for example, the language model associated with the Location data entry window 506L as the Location information is expected to be the initial dictation. However, the initial language model may be a more generic language model for the case where the healthcare provider neglects to use the triggers as defined above. As the healthcare provider dictates, the provider would enunciate the trigger for the item to be input. For example, after placing the cursor in window 508, the healthcare provider would enunciate the location of the symptom, such as, for example, by stating: "LOCATION chest cavity," and the back office speech recognition system would, as explained above, first convert the audio to interim text. The text recognizer would next recognize the trigger word LOCATION and pause the audio to either (1) confirm the LOCATION language model and resources are operating or (2) load the LOCATION language model and resources to the speech recognizer. If (1) confirmed, the process continues with the loaded language model. If (2) loaded, the process continues subsequent to the replacement of the previous language model with the subsequent language model. The system would next convert the audio of chest cavity to interim text and recognized text, as chest cavity is not a trigger. The provider would next say, for example, "DURATION one minute and fifty six seconds." The speech recognizer would generate the interim text that the text recognizer would recognize as a trigger, causing the audio to pause while the speech recognizer switched from the LOCATION language model to the DURATION language model. Once switched, the system would generate text of 1 minute and 56 seconds (normalized). Notice the normalization may occur as part of generating the interim text 406 or as part of generating the recognized text 410. In some aspects, the trigger may be included in the transcribed text and in others the trigger may not be included. - While specifically referencing a language model, other portions of the audio system may be dynamically changed by the trigger. For example, in a customer service center, a customer service request may be transferred from the agent to the supervisor. The supervisor, on receipt of the call, may state "Supervisor Smith Product X" such that the user profile for supervisor Smith and the language model associated with Product X is loaded and activated.
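- Returning to the electronic health record example, the following toy trace assumes utterance-level segments; the language model labels are placeholders and normalization is left out of the loop, so this is only an illustration of the trigger-driven switching, not the application's processing.

```python
# Toy trace of the LOCATION/DURATION dictation example; all names are illustrative.
TRIGGERS = {"location": "LOCATION_LM", "duration": "DURATION_LM"}

utterances = [
    "location chest cavity",
    "duration one minute and fifty six seconds",
]

current_lm = "GENERAL_LM"                  # initial (first) language model
for utterance in utterances:
    head, *rest = utterance.split()
    if head in TRIGGERS and TRIGGERS[head] != current_lm:
        current_lm = TRIGGERS[head]        # trigger recognized: pause, switch, resume
    body = " ".join(rest) if head in TRIGGERS else utterance
    print(f"[{current_lm}] {body}")        # normalization (e.g., "1 minute and 56 seconds") would follow

# [LOCATION_LM] chest cavity
# [DURATION_LM] one minute and fifty six seconds
```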
- In still other embodiments, the phonetics associated with the user may change, as well as other of the resources used for dictation. For example, the user may in some instances need to switch languages. With reference to a lawyer, for example, the lawyer may have a dictation system for obtaining incoming information about a new client. The intake may initially be designed for American English (in the United States of America), but the lawyer may have an opportunity to represent a Spanish-only speaking client. The intake may use a trigger such as Espanol or Spanish, which may cause a change in the user profile, the language model, and the phonetics associated with the speech recognition.
- As can be appreciated, tailoring the resources closely to the speech recognition tends to increase the accuracy of the recognizer and decrease the recognizer's dependency on the user profile. One added benefit is the ability to use relatively less expensive speech recognition engines in areas that may otherwise require a very expensive speech recognition engine to process a language model that would be applicable over an entire field. This is particularly relevant in the medical, engineering, accounting, or scientific fields, as a recognition engine able to promptly process a language model designed to cover broad swaths of the language used in these very precise, complex, and technical fields would be prohibitive in many applications. The technology, while usable with other speech recognition engines, is particularly suitable for trigram recognition engines.
- While described with specific reference to a speech recognition system, the technology of the present application relates to changing commands and responses by the processor as well. For example, while the above examples relate to dictation/transcription where an acoustic model maps sounds into phonemes and a lexicon maps the phonemes to words, coupled with a language model that turns the words into sentences with the associated grammar models (such as syntax, capitalization, tense, etc.), other resources may be used. In some aspects of the technology, for example, the system may allow for inserts of "boiler plate" or common phrases. The inserts may require an audio trigger, a keystroke trigger, or a command entry to trigger the boiler plate insertion into the document. Other aspects may provide for a navigation tool where a trigger is associated with a uniform resource locator (URL), which URLs could be associated with a private or public network. Still other aspects may provide for other scripts, macros, application execution, or the like by pairing the commands with trigger audio, keystrokes, or commands similar to the above.
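- As a sketch of that broader pairing, a trigger might be linked to any callable resource rather than only to a language model; the trigger phrases, the boilerplate text, and the URL below are invented placeholders, not items from the application.

```python
import webbrowser
from typing import Callable, Dict

document: list = []

# Illustrative pairing of triggers with resources other than language models:
# boilerplate insertion, navigation to a URL, or a small macro.
TRIGGER_ACTIONS: Dict[str, Callable] = {
    "insert signature block": lambda: document.append("Sincerely,\nDr. A. Provider"),
    "open formulary": lambda: webbrowser.open("https://intranet.example/formulary"),
    "new paragraph": lambda: document.append(""),
}


def dispatch(trigger: str) -> bool:
    """Run the resource paired with the trigger; return False if it is unknown."""
    action = TRIGGER_ACTIONS.get(trigger.lower())
    if action is None:
        return False
    action()
    return True


dispatch("insert signature block")
print(document)   # -> ['Sincerely,\nDr. A. Provider']
```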
- Referring now to
FIG. 6, a functional block diagram of a typical client station 102, dictation manager 104, or dictation server 110 is shown generically as computer 800. The computer 800 is shown as a single, contained unit, such as, for example, a desktop, laptop, handheld, or mobile processor, but computer 800 may comprise portions that are remote and connectable via network connection such as via a LAN, a WAN, a WLAN, a WiFi Network, Internet, or the like. Generally, computer 800 includes a processor 802 (such as a central processing unit, a field programmable gate array, a chipset, or the like), a system memory 804, and a system bus 806. System bus 806 couples the various system components and allows data and control signals to be exchanged between the components. System bus 806 could operate on any number of conventional bus protocols. System memory 804 generally comprises both a random access memory (RAM) 808 and a read only memory (ROM) 810. ROM 810 generally stores basic operating information, such as a basic input/output system (BIOS) 812. RAM 808 often contains the basic operating system (OS) 814, application software 816 and 818, and data 820. System memory 804 contains the code for executing the functions and processing the data as described herein to allow the present technology of the present application to function as described. Computer 800 generally includes one or more of a hard disk drive 822 (which also includes flash drives, thumb drives, zip sticks, solid state drives, etc., as well as other volatile and non-volatile memory configurations), a magnetic disk drive 824, or an optical disk drive 826. The drives also may include flash drives and other portable devices with memory capability. The drives are connected to the bus 806 via a hard disk drive interface 828, a magnetic disk drive interface 830, and an optical disk drive interface 832, etc. Application modules and data may be stored on a disk, such as, for example, a hard disk installed in the hard disk drive (not shown). The computer 800 has a network connection 834 to connect to a local area network (LAN), a wireless network, an Ethernet, the Internet, or the like, as well as one or more serial port interfaces 836 to connect to peripherals, such as a mouse, keyboard, modem, or printer. Computer 800 also may have USB ports or wireless components, not shown. Computer 800 typically has a display or monitor 838 connected to bus 806 through an appropriate interface, such as a video adapter 840. Monitor 838 may be used as an input mechanism using a touch screen, a light pen, or the like. On reading this disclosure, those of skill in the art will recognize that many of the components discussed as separate units may be combined into one unit and an individual unit may be split into several different units. Further, the various functions could be contained in one personal computer or spread over several networked personal computers. The identified components may be upgraded and replaced as associated technology improves and advances are made in computing technology. - Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be non-transitorily implemented as electronic hardware, computer software, or combinations of both.
To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. The above identified components and modules may be superseded by new technologies as advancements to computer technology continue.
- The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (15)
1. A method performed on at least one processor for managing speech resources of a speech recognition engine, the method comprising the steps of:
initiating a speech recognition engine with a first language model;
converting audio received by the speech recognition engine to interim text;
determining whether the interim text matches at least one trigger; and
if it is determined that the interim text does not match the at least one trigger, outputting the interim text as recognized text;
if it is determined that the interim text does match the at least one trigger, replacing the first language model with a second language model.
2. The method of claim 1 wherein the initiating step comprises initiating the speech recognition engine with a first user profile and the replacing step further comprises replacing the first user profile with a second user profile.
3. The method of claim 1 wherein, if it is determined that the interim text does match the at least one trigger, the method comprises the steps of:
pausing the converting step until the first language model is replaced with the second language model and resuming the converting step.
4. The method of claim 3 wherein the step of converting the audio to interim text comprises correlating the audio and the text.
5. The method of claim 4 wherein correlating the audio and the text comprises creating a plurality of small audio files from the audio and converting the plurality of small audio files into a corresponding plurality of interim text files and wherein the outputted recognized text is concatenated from the plurality of interim text files.
6. The method of claim 4 wherein correlating the audio and the text comprises placing a plurality of markers in the audio and placing a corresponding plurality of tags in the text such that the markers and tags provide audio and text pairs.
7. The method of claim 4 wherein if it is determined that the interim text does match the at least one trigger, the method comprises the steps of:
rewinding the audio based on the correlation between the audio and the text; and
deleting the interim text corresponding to a rewound portion of the audio.
8. The method of claim 1 wherein the at least one trigger is linked to the second language model.
9. The method of claim 8 wherein the at least one trigger comprises a plurality of triggers and wherein the second language model comprises a plurality of language models.
10. The method of claim 1 wherein the at least one trigger is selected from a group of triggers consisting of: a word, a clause, a phrase, or a combination thereof.
11. A speech recognition engine comprising:
a speech recognizer, the speech recognizer to receive audio and convert the audio to interim text using at least a language model;
a text recognizer operationally coupled to the speech recognizer, the text recognizer to receive the interim text and recognize whether the interim text contains a trigger;
wherein when the text recognizer recognizes a trigger in the interim text, the speech recognizer replaces a current language model with a replacement language model, and
wherein when the text recognizer does not recognize the trigger in the interim text, the interim text is provided as recognized text.
12. The speech recognition engine of claim 11 further comprising a memory wherein the memory comprises a plurality of triggers and a plurality of language models wherein each of the plurality of triggers is linked to one of the plurality of language models.
13. The speech recognition engine of claim 12 wherein the interim text comprises a plurality of interim text files and the speech recognizer converts the audio into a plurality of audio files corresponding to a plurality of interim text files.
14. The speech recognition engine of claim 13 further comprising an index engine wherein the index engine correlates the plurality of audio files and the corresponding plurality of interim text files.
15. The speech recognition engine of claim 14 wherein the index engine rewinds the audio based on the correlation between the plurality of audio files and the plurality of interim text files when the text recognizer recognizes the trigger.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/805,456 US20180090147A1 (en) | 2014-03-11 | 2017-11-07 | Apparatus and methods for dynamically changing a language model based on recognized text |
| US15/950,553 US10643616B1 (en) | 2014-03-11 | 2018-04-11 | Apparatus and methods for dynamically changing a speech resource based on recognized text |
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201461951400P | 2014-03-11 | 2014-03-11 | |
| US14/638,619 US9812130B1 (en) | 2014-03-11 | 2015-03-04 | Apparatus and methods for dynamically changing a language model based on recognized text |
| US15/805,456 US20180090147A1 (en) | 2014-03-11 | 2017-11-07 | Apparatus and methods for dynamically changing a language model based on recognized text |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/638,619 Continuation US9812130B1 (en) | 2014-03-11 | 2015-03-04 | Apparatus and methods for dynamically changing a language model based on recognized text |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/950,553 Continuation-In-Part US10643616B1 (en) | 2014-03-11 | 2018-04-11 | Apparatus and methods for dynamically changing a speech resource based on recognized text |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20180090147A1 true US20180090147A1 (en) | 2018-03-29 |
Family
ID=60189822
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/638,619 Active US9812130B1 (en) | 2014-03-11 | 2015-03-04 | Apparatus and methods for dynamically changing a language model based on recognized text |
| US15/805,456 Abandoned US20180090147A1 (en) | 2014-03-11 | 2017-11-07 | Apparatus and methods for dynamically changing a language model based on recognized text |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/638,619 Active US9812130B1 (en) | 2014-03-11 | 2015-03-04 | Apparatus and methods for dynamically changing a language model based on recognized text |
Country Status (1)
| Country | Link |
|---|---|
| US (2) | US9812130B1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2020197062A1 (en) | 2019-03-27 | 2020-10-01 | Samsung Electronics Co., Ltd. | Multi-modal interaction with intelligent assistants in voice command devices |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105957516B (en) * | 2016-06-16 | 2019-03-08 | 百度在线网络技术(北京)有限公司 | More voice identification model switching method and device |
| KR102596430B1 (en) * | 2016-08-31 | 2023-10-31 | 삼성전자주식회사 | Method and apparatus for speech recognition based on speaker recognition |
| CN111052229B (en) * | 2018-04-16 | 2023-09-01 | 谷歌有限责任公司 | Automatically determining a language for speech recognition of a spoken utterance received via an automated assistant interface |
| CN110047472B (en) * | 2019-03-15 | 2024-07-02 | 平安科技(深圳)有限公司 | Batch conversion method and device for voice information, computer equipment and storage medium |
| WO2021033889A1 (en) | 2019-08-20 | 2021-02-25 | Samsung Electronics Co., Ltd. | Electronic device and method for controlling the electronic device |
| CN115376490B (en) * | 2022-08-19 | 2024-07-30 | 北京字跳网络技术有限公司 | Voice recognition method and device and electronic equipment |
Family Cites Families (32)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7139709B2 (en) * | 2000-07-20 | 2006-11-21 | Microsoft Corporation | Middleware layer between speech related applications and engines |
| US20020087315A1 (en) * | 2000-12-29 | 2002-07-04 | Lee Victor Wai Leung | Computer-implemented multi-scanning language method and system |
| US7236931B2 (en) * | 2002-05-01 | 2007-06-26 | Usb Ag, Stamford Branch | Systems and methods for automatic acoustic speaker adaptation in computer-assisted transcription systems |
| WO2005122143A1 (en) * | 2004-06-08 | 2005-12-22 | Matsushita Electric Industrial Co., Ltd. | Speech recognition device and speech recognition method |
| KR100755677B1 (en) * | 2005-11-02 | 2007-09-05 | 삼성전자주식회사 | Interactive Speech Recognition Apparatus and Method Using Subject Area Detection |
| US20110077943A1 (en) * | 2006-06-26 | 2011-03-31 | Nec Corporation | System for generating language model, method of generating language model, and program for language model generation |
| WO2008004666A1 (en) * | 2006-07-07 | 2008-01-10 | Nec Corporation | Voice recognition device, voice recognition method and voice recognition program |
| US8433576B2 (en) * | 2007-01-19 | 2013-04-30 | Microsoft Corporation | Automatic reading tutoring with parallel polarized language modeling |
| US10056077B2 (en) * | 2007-03-07 | 2018-08-21 | Nuance Communications, Inc. | Using speech recognition results based on an unstructured language model with a music system |
| US8886545B2 (en) * | 2007-03-07 | 2014-11-11 | Vlingo Corporation | Dealing with switch latency in speech recognition |
| US8170869B2 (en) * | 2007-06-28 | 2012-05-01 | Panasonic Corporation | Method to detect and assist user intentions with real time visual feedback based on interaction language constraints and pattern recognition of sensory features |
| US8209171B2 (en) * | 2007-08-07 | 2012-06-26 | Aurix Limited | Methods and apparatus relating to searching of spoken audio data |
| US9892730B2 (en) * | 2009-07-01 | 2018-02-13 | Comcast Interactive Media, Llc | Generating topic-specific language models |
| KR101622111B1 (en) * | 2009-12-11 | 2016-05-18 | 삼성전자 주식회사 | Dialog system and conversational method thereof |
| US8532994B2 (en) * | 2010-08-27 | 2013-09-10 | Cisco Technology, Inc. | Speech recognition using a personal vocabulary and language model |
| US20120059658A1 (en) * | 2010-09-08 | 2012-03-08 | Nuance Communications, Inc. | Methods and apparatus for performing an internet search |
| US8630860B1 (en) * | 2011-03-03 | 2014-01-14 | Nuance Communications, Inc. | Speaker and call characteristic sensitive open voice search |
| US20130018650A1 (en) * | 2011-07-11 | 2013-01-17 | Microsoft Corporation | Selection of Language Model Training Data |
| JP2013072974A (en) * | 2011-09-27 | 2013-04-22 | Toshiba Corp | Voice recognition device, method and program |
| US9324323B1 (en) * | 2012-01-13 | 2016-04-26 | Google Inc. | Speech recognition using topic-specific language models |
| US9779080B2 (en) * | 2012-07-09 | 2017-10-03 | International Business Machines Corporation | Text auto-correction via N-grams |
| US20140039893A1 (en) * | 2012-07-31 | 2014-02-06 | Sri International | Personalized Voice-Driven User Interfaces for Remote Multi-User Services |
| US9047868B1 (en) * | 2012-07-31 | 2015-06-02 | Amazon Technologies, Inc. | Language model data collection |
| US9035884B2 (en) * | 2012-10-17 | 2015-05-19 | Nuance Communications, Inc. | Subscription updates in multiple device language models |
| US20150278194A1 (en) * | 2012-11-07 | 2015-10-01 | Nec Corporation | Information processing device, information processing method and medium |
| US20140136210A1 (en) * | 2012-11-14 | 2014-05-15 | At&T Intellectual Property I, L.P. | System and method for robust personalization of speech recognition |
| US9190057B2 (en) * | 2012-12-12 | 2015-11-17 | Amazon Technologies, Inc. | Speech model retrieval in distributed speech recognition systems |
| US20140365200A1 (en) * | 2013-06-05 | 2014-12-11 | Lexifone Communication Systems (2010) Ltd. | System and method for automatic speech translation |
| US20140379346A1 (en) * | 2013-06-21 | 2014-12-25 | Google Inc. | Video analysis based language model adaptation |
| WO2015026366A1 (en) * | 2013-08-23 | 2015-02-26 | Nuance Communications, Inc. | Multiple pass automatic speech recognition methods and apparatus |
| US9401146B2 (en) * | 2014-04-01 | 2016-07-26 | Google Inc. | Identification of communication-related voice commands |
| US9842101B2 (en) * | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20060149558A1 (en) * | 2001-07-17 | 2006-07-06 | Jonathan Kahn | Synchronized pattern recognition source data processed by manual or automatic means for creation of shared speaker-dependent speech user profile |
| US20110224981A1 (en) * | 2001-11-27 | 2011-09-15 | Miglietta Joseph H | Dynamic speech recognition and transcription among users having heterogeneous protocols |
| US9552354B1 (en) * | 2003-09-05 | 2017-01-24 | Spoken Traslation Inc. | Method and apparatus for cross-lingual communication |
| US20070294081A1 (en) * | 2006-06-16 | 2007-12-20 | Gang Wang | Speech recognition system with user profiles management component |
| US20080221881A1 (en) * | 2006-11-22 | 2008-09-11 | Eric Carraux | Recognition of Speech in Editable Audio Streams |
| US20110055256A1 (en) * | 2007-03-07 | 2011-03-03 | Phillips Michael S | Multiple web-based content category searching in mobile search application |
| US20090113293A1 (en) * | 2007-08-19 | 2009-04-30 | Multimodal Technologies, Inc. | Document editing using anchors |
| US20130238329A1 (en) * | 2012-03-08 | 2013-09-12 | Nuance Communications, Inc. | Methods and apparatus for generating clinical reports |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2020197062A1 (en) | 2019-03-27 | 2020-10-01 | Samsung Electronics Co., Ltd. | Multi-modal interaction with intelligent assistants in voice command devices |
| CN113614826A (en) * | 2019-03-27 | 2021-11-05 | 三星电子株式会社 | Multimodal interaction with an intelligent assistant in a voice command device |
| EP3906550A4 (en) * | 2019-03-27 | 2022-03-09 | Samsung Electronics Co., Ltd. | MULTIMODAL INTERACTION WITH SMART ASSISTANTS IN VOICE CONTROL DEVICES |
| US11482215B2 (en) | 2019-03-27 | 2022-10-25 | Samsung Electronics Co., Ltd. | Multi-modal interaction with intelligent assistants in voice command devices |
| US11721342B2 (en) | 2019-03-27 | 2023-08-08 | Samsung Electronics Co., Ltd. | Multi-modal interaction with intelligent assistants in voice command devices |
| US12300236B2 (en) | 2019-03-27 | 2025-05-13 | Samsung Electronics Co., Ltd. | Multi-modal interaction with intelligent assistants in voice command devices |
Also Published As
| Publication number | Publication date |
|---|---|
| US9812130B1 (en) | 2017-11-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20180090147A1 (en) | Apparatus and methods for dynamically changing a language model based on recognized text | |
| US10235992B2 (en) | Apparatus and methods using a pattern matching speech recognition engine to train a natural language speech recognition engine | |
| US9489940B2 (en) | Apparatus and methods to update a language model in a speech recognition system | |
| US11450313B2 (en) | Determining phonetic relationships | |
| EP3407349B1 (en) | Multiple recognizer speech recognition | |
| US10079014B2 (en) | Name recognition system | |
| US9129591B2 (en) | Recognizing speech in multiple languages | |
| EP3469489B1 (en) | Follow-up voice query prediction | |
| US10672391B2 (en) | Improving automatic speech recognition of multilingual named entities | |
| US9606767B2 (en) | Apparatus and methods for managing resources for a system using voice recognition | |
| US9275635B1 (en) | Recognizing different versions of a language | |
| US9685154B2 (en) | Apparatus and methods for managing resources for a system using voice recognition | |
| US8725492B2 (en) | Recognizing multiple semantic items from single utterance | |
| CN110494841B (en) | Contextual language translation | |
| US10186257B1 (en) | Language model for speech recognition to account for types of disfluency | |
| US10643616B1 (en) | Apparatus and methods for dynamically changing a speech resource based on recognized text | |
| KR20160062254A (en) | Method for reasoning of semantic robust on speech recognition error | |
| TW202011384A (en) | Speech correction system and speech correction method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: NVOQ INCORPORATED, COLORADO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CORFIELD, CHARLES;REEL/FRAME:044052/0120 Effective date: 20140318 |
|
| STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |