EP1020848A2 - Method for transmitting auxiliary information in a vocoder stream - Google Patents
- Publication number
- EP1020848A2 (application EP00300042A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- speech
- vocoder
- output
- information
- gain
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000000034 method Methods 0.000 title claims abstract description 8
- 230000003044 adaptive effect Effects 0.000 description 21
- 230000005284 excitation Effects 0.000 description 12
- 238000010586 diagram Methods 0.000 description 5
- 239000013598 vector Substances 0.000 description 5
- 238000004891 communication Methods 0.000 description 4
- 230000005540 biological transmission Effects 0.000 description 2
- 230000015572 biosynthetic process Effects 0.000 description 2
- 238000003786 synthesis reaction Methods 0.000 description 2
- 208000032041 Hearing impaired Diseases 0.000 description 1
- 230000002411 adverse Effects 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 238000001228 spectrum Methods 0.000 description 1
- 230000001755 vocal effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/08—Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
- G10L19/12—Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/018—Audio watermarking, i.e. embedding inaudible data in the audio signal
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
Definitions
- A voice encoder/decoder is used to compress voice signals so as to reduce the transmission bandwidth over a communications channel. By reducing the bandwidth per call, it becomes possible to place more calls over the same channel.
- vocoders: There exists a class of vocoders known as code excited linear prediction (CELP) vocoders. In these vocoders, the speech is modeled by a series of filters. The parameters to these filters can be transmitted with far fewer bits than the original speech. It is also necessary to transmit the input (or excitation) to these filters in order to reconstruct the original speech. Because it would require too much bandwidth to transmit the excitation directly, a crude approximation is made by replacing the excitation by a few non-zero pulses.
- CELP: code excited linear prediction
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Transmitters (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
Non-speech information is sent in the bits allocated to one or both of a vocoder's codebook outputs by setting the gain for the corresponding codebook to zero. By setting the gain to zero, the codebook output will not be interpreted by the receiving vocoder. In this way, it is possible to transmit additional information in a way that is totally transparent to the vocoder. Applications for this technique of sending "secret" messages include, but are not limited to, transmitting parameters for generating non-speech signals. As an example, information to generate call waiting tones, DTMF, or TTY/TDD characters can be clandestinely embedded in the compressed bit stream so that these non-speech tones can be regenerated.
Description
- The present invention relates to telecommunications; more particularly, to transmitting data in wireless speech channels.
- A voice encoder/decoder (vocoder) is used to compress voice signals so as to reduce the transmission bandwidth over a communications channel. By reducing the bandwidth per call, it becomes possible to place more calls over the same channel. There exists a class of vocoders known as code excited linear prediction (CELP) vocoders. In these vocoders, the speech is modeled by a series of filters. The parameters to these filters can be transmitted with far fewer bits than the original speech. It is also necessary to transmit the input (or excitation) to these filters in order to reconstruct the original speech. Because it would require too much bandwidth to transmit the excitation directly, a crude approximation is made by replacing the excitation by a few non-zero pulses. The locations of these pulses can be transmitted using very few bits, and this crude approximation to the original excitation is adequate to reproduce high quality speech. The excitation is represented by a fixed codebook contribution and an associated gain. Also, the quasi-periodicity found in speech is represented by an adaptive codebook output and an associated gain. The fixed codebook output and its associated gain, the adaptive codebook output and its associated gain, and filter parameters (also known as linear predictive coder parameters) are transmitted to represent the encoded speech signal.
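The parameter set enumerated above can be pictured as a small per-frame record. Purely as an illustration (the field names are hypothetical and are not taken from the patent or any particular codec), a CELP frame might be represented like this:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CelpFrame:
    """Hypothetical per-frame CELP parameter set (illustrative only)."""
    lpc_coeffs: List[float]   # linear predictive (synthesis filter) coefficients
    pitch_index: int          # adaptive codebook index (pitch delay)
    pitch_gain: float         # adaptive codebook gain
    fixed_index: int          # fixed codebook index (pulse positions/signs)
    fixed_gain: float         # fixed codebook gain
```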
- The vocoders were initially designed to compress speech by modeling its characteristics and transmitting the parameters of that model in far fewer bits than transmitting the speech itself. As wireless phones become more commonplace, people increasingly expect to use them for the same range of non-speech applications as traditional landline phones, such as accessing voice mail and receiving call waiting tones. Recently, the FCC has mandated that text-telephones for the hearing impaired (TTY/TDD) work with digital cellular phones. The problem with non-speech applications is that they do not fit the vocoder's speech model. When non-speech signals are passed through the vocoder, the decoded result is not always acceptable. The problem is further exacerbated by the fact that wireless phones operate in an error prone environment. In order to recover from transmission errors, the vocoder depends on its speech model. Once again, non-speech signals do not match this model, and so the reconstruction is inadequate.
- The present invention sends information in the bits allocated to one or both of the codebooks' output by setting the gain for the corresponding codebook to zero. By setting the gain to zero, the codebook output will not be interpreted by the receiving vocoder. In this way, it is possible to transmit additional information in a way that is totally transparent to the vocoder. Applications for this technique of sending "secret" messages include, but are not limited to, transmitting parameters for generating non-speech signals. As an example, information to generate call waiting tones, DTMF, or TTY/TDD characters can be clandestinely embedded in the compressed bit stream so that these non-speech tones can be regenerated.
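A minimal sketch of this embedding step, reusing the hypothetical CelpFrame record above; a real implementation would pack these fields into the bit positions defined by the codec in use:

```python
def embed_aux_bits(frame: "CelpFrame", aux_bits: int) -> "CelpFrame":
    """Replace the fixed codebook index with auxiliary data and zero its gain.

    Because fixed_gain is (substantially) zero, the receiving vocoder scales
    the fixed codebook contribution to nothing, so the data bits cannot
    disturb the synthesized speech.
    """
    frame.fixed_index = aux_bits   # e.g. a DTMF digit code or TTY/TDD character
    frame.fixed_gain = 0.0         # "substantially zero" gain
    return frame
```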
- FIG. 1 is a block diagram of a typical vocoder;
- FIG. 2 illustrates the major functions of encoder 14 of vocoder 10; and
- FIG. 3 is a functional block diagram of decoder 20 of vocoder 10.
- FIG. 1 illustrates a block diagram of a typical vocoder. Vocoder 10 receives digitized speech on input 12. The digitized speech is an analog speech signal that has been passed through an analog-to-digital converter and broken into frames, where each frame is typically on the order of 20 milliseconds. The signal at input 12 is passed to encoder section 14, which encodes the speech so as to decrease the amount of bandwidth used to transmit it. The encoded speech is made available at output 16 and is received by the decoder section of a similar vocoder at the other end of a communication channel. The decoder at the other end of the communication channel is similar or identical to the decoder portion of vocoder 10. Encoded speech is received by vocoder 10 through input 18 and is passed to decoder section 20. Decoder section 20 uses the encoded signals received from the transmitting vocoder to produce digitized speech at output 22.
- Vocoders are well known in the communications arts. For example, vocoders are described in "Speech and Audio Coding for Wireless and Network Applications," edited by Bishnu S. Atal, Vladimir Cuperman, and Allen Gersho, Kluwer Academic Publishers, 1993. Vocoders are widely available and manufactured by companies such as Qualcomm Incorporated of San Diego, California, and Lucent Technologies Inc. of Murray Hill, New Jersey.
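The 20-millisecond framing mentioned above corresponds, at a typical 8 kHz narrowband sampling rate, to 160 samples per frame. The 8 kHz rate is an assumption for illustration only; the patent does not specify one. A minimal framing sketch:

```python
import numpy as np

def frame_signal(pcm: np.ndarray, sample_rate: int = 8000,
                 frame_ms: float = 20.0) -> np.ndarray:
    """Split a digitized speech signal into non-overlapping frames.

    At 8 kHz and 20 ms per frame this yields 160 samples per frame; any
    trailing partial frame is dropped for simplicity.
    """
    frame_len = int(sample_rate * frame_ms / 1000)        # 160 samples
    n_frames = len(pcm) // frame_len
    return pcm[:n_frames * frame_len].reshape(n_frames, frame_len)
```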
- FIG. 2 illustrates the major functions of encoder 14 of vocoder 10. A digitized speech signal is received at input 12 and is passed to linear predictive coder 40. Linear predictive coder 40 performs a linear predictive analysis of the incoming speech once per frame. Linear predictive analysis is well known in the art and produces a linear predictive synthesis model of the vocal tract based on the input speech signal. The linear predictive parameters or coefficients describing this model are transmitted as part of the encoded speech signal through output 16. Coder 40 uses this model to produce a residual speech signal, which represents the excitation that the model uses to reproduce the input speech signal. The residual speech signal is made available at output 42. The residual speech from output 42 is provided to input 48 of open-loop pitch search unit 50, to an input of adaptive codebook unit 72, and to fixed codebook unit 82.
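The analysis performed by linear predictive coder 40 can be sketched as follows. The autocorrelation method and the tenth-order model used here are common choices but are assumptions for illustration, not details taken from the patent:

```python
import numpy as np
from scipy.signal import lfilter

def lpc_analyze(frame: np.ndarray, order: int = 10):
    """Estimate LPC coefficients by the autocorrelation method and compute
    the residual (prediction error) for one frame."""
    n = len(frame)
    # Autocorrelation at lags 0..order
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]
    # Normal equations R a = r (Toeplitz system), with tiny diagonal loading
    # so that a silent frame does not make the system singular
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    R += 1e-9 * np.eye(order)
    a = np.linalg.solve(R, r[1:order + 1])
    # Residual: e[n] = s[n] - sum_k a[k] * s[n-k]
    residual = lfilter(np.concatenate(([1.0], -a)), [1.0], frame)
    return a, residual
```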
- Impulse response unit 60 receives the linear predictive parameters from coder 40 and generates the impulse response of the model generated in coder 40. This impulse response is used in the adaptive and fixed codebook units.
- Open-loop pitch search unit 50 uses the residual speech signal from coder 40 to model its pitch and provides a pitch, or what is commonly called the pitch period or pitch delay signal, at output 52. The pitch delay signal from output 52 and the impulse response signal from output 64 of impulse response unit 60 are received by input 70 of adaptive codebook unit 72. Adaptive codebook unit 72 produces a pitch gain output and a pitch index output, which become part of encoded speech output 16 of vocoder 10. Output 74 of adaptive codebook 72 also provides the pitch gain and pitch index signals to input 80 of fixed codebook unit 82. Additionally, adaptive codebook 72 provides an excitation signal and an adaptive codebook target signal to input 80.
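A minimal sketch of an open-loop pitch (pitch delay) search over the residual using normalized autocorrelation; the lag range shown (20 to 143 samples, roughly 55 to 400 Hz at 8 kHz) is an assumed illustration, not a value from the patent:

```python
import numpy as np

def open_loop_pitch(residual: np.ndarray, min_lag: int = 20, max_lag: int = 143) -> int:
    """Pick the lag whose normalized autocorrelation with the residual is largest."""
    best_lag, best_score = min_lag, -np.inf
    for lag in range(min_lag, min(max_lag, len(residual) - 1) + 1):
        x, y = residual[lag:], residual[:-lag]
        denom = np.sqrt(np.dot(x, x) * np.dot(y, y)) + 1e-12
        score = np.dot(x, y) / denom
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag  # pitch delay in samples
```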
- The adaptive codebook 72 produces its outputs using the digitized speech signal from input 12 and the residual speech signal produced by linear predictive coder 40. Adaptive codebook 72 uses the digitized speech signal and linear predictive coder 40's residual speech signal to form an adaptive codebook target signal. The adaptive codebook target signal is used as an input to fixed codebook 82 and as an input to a computation that produces the pitch gain, pitch index, and excitation outputs of adaptive codebook unit 72. Additionally, the adaptive codebook target signal, the pitch delay signal from open-loop pitch search unit 50, and the impulse response from impulse response unit 60 are used to produce the pitch index, pitch gain, and excitation signals that are passed to fixed codebook unit 82. The manner in which these signals are computed is well known in the vocoder art.
- Fixed codebook 82 uses the inputs received from input 80 to produce a fixed gain output and a fixed index output, which are used as part of the encoded speech at output 16. The fixed codebook unit attempts to model the stochastic part of linear predictive coder 40's residual speech signal. A target for the fixed codebook search is produced by determining the error between the current adaptive codebook target signal and the residual speech signal. The fixed codebook search produces the fixed gain and fixed index signals for excitation pulses so as to minimize this error. The manner in which the fixed gain and fixed index signals are computed using the outputs from adaptive codebook unit 72 is well known in the vocoder art.
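As a toy sketch of a fixed codebook search: place a handful of unit pulses so that the gain-scaled pulse vector, filtered through the synthesis-filter impulse response, best matches the fixed codebook target. Real CELP coders use structured algebraic codebooks and fast search procedures; this greedy version only conveys the idea:

```python
import numpy as np

def fixed_codebook_search(target: np.ndarray, h: np.ndarray, n_pulses: int = 4):
    """Greedily choose +/-1 pulse positions minimizing the error between the
    filtered, gain-scaled pulse vector and the target."""
    excitation = np.zeros_like(target)
    gain = 0.0
    for _ in range(n_pulses):
        best = None
        for pos in range(len(target)):
            for sign in (+1.0, -1.0):
                trial = excitation.copy()
                trial[pos] += sign
                filt = np.convolve(trial, h)[:len(target)]
                g = np.dot(filt, target) / (np.dot(filt, filt) + 1e-12)  # optimal gain
                err = np.sum((target - g * filt) ** 2)
                if best is None or err < best[0]:
                    best = (err, pos, sign, g)
        _, pos, sign, gain = best
        excitation[pos] += sign
    return excitation, gain  # pulse vector (encodes the fixed index) and fixed gain
```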
- Switches 90 and 92 are used to send data in place of the bits that are used to send the fixed codebook output and the adaptive codebook output, respectively. When the contacts of the switches are in position "A", the associated codebook output is replaced by data or other information and the associated codebook gain is set to zero or substantially zero. As a result, the scaled codebook output or excitation produced at a receiver will be zero or substantially zero and therefore will not have an adverse effect on the filter being used by the receiving vocoder to model the speech that is normally transmitted.
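To get a feel for the auxiliary capacity this creates, consider a hypothetical bit allocation; the figures below are illustrative only and are not taken from the patent or any specific codec. If the fixed index occupies 35 bits and the pitch index 8 bits in each 20 ms frame, zeroing both gains frees those bits for data:

```python
FRAME_MS = 20           # frame length assumed earlier
FIXED_INDEX_BITS = 35   # hypothetical fixed codebook index bits per frame
PITCH_INDEX_BITS = 8    # hypothetical adaptive codebook index bits per frame

frames_per_second = 1000 / FRAME_MS                          # 50 frames/s
aux_bps = (FIXED_INDEX_BITS + PITCH_INDEX_BITS) * frames_per_second
print(aux_bps)  # 2150.0 bit/s of "hidden" capacity under these assumptions
```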
- FIG. 3 illustrates a functional block diagram of decoder 20 of vocoder 10. Encoded speech signals are received at input 18 of decoder 20. The encoded speech signals are received by decoder 100. Decoder 100 produces fixed and adaptive code vectors corresponding to the fixed index and pitch index signals, respectively. These code vectors are passed to the excitation construction portion of unit 110 along with the pitch gain and fixed gain signals. The pitch gain signal is used to scale the adaptive vector, which was produced using the pitch index signal, and the fixed gain signal is used to scale the fixed vector, which was obtained using the fixed index signal. Decoder 100 passes the linear predictive coder parameters to the filter or model synthesis section of unit 110. Unit 110 then uses the scaled vectors to excite the filter that is synthesized using the linear predictive coefficients produced by linear predictive coder 40, and produces an output signal representative of the digitized speech originally received at input 12. Optionally, post filter 120 may be used to shape the spectrum of the digitized speech signal produced at output 22.
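A sketch of the excitation construction and synthesis step performed by unit 110: scale the adaptive and fixed code vectors by their gains, sum them, and drive the all-pole filter defined by the LPC coefficients. The function and parameter names are illustrative, consistent with the lpc_analyze sketch above:

```python
import numpy as np
from scipy.signal import lfilter

def synthesize_frame(adaptive_vec: np.ndarray, pitch_gain: float,
                     fixed_vec: np.ndarray, fixed_gain: float,
                     lpc_coeffs: np.ndarray) -> np.ndarray:
    """Build the excitation from the scaled code vectors and filter it through
    the LPC synthesis filter 1 / A(z) to reconstruct the speech frame."""
    excitation = pitch_gain * adaptive_vec + fixed_gain * fixed_vec
    a = np.concatenate(([1.0], -lpc_coeffs))   # A(z) = 1 - sum_k a_k z^-k
    return lfilter([1.0], a, excitation)
```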
- When data rather than speech information is being transmitted, the pitch index (adaptive codebook output) and/or the fixed index (fixed codebook output) are used to receive the data. The effect of these data bits on the filter synthesized by unit 110 is eliminated because the gain value associated with the pitch or fixed index is zero.
- The functional block diagrams can be implemented in various forms. Each block can be implemented individually using microprocessors or microcomputers, or they can be implemented using a single microprocessor or microcomputer. It is also possible to implement each or all of the functional blocks using programmable digital signal processing devices or specialized devices available from the aforementioned manufacturers or other semiconductor manufacturers.
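At the receiving end, the same zero-gain convention can be used to recover the embedded data before (or instead of) exciting the synthesis filter. A minimal sketch, reusing the hypothetical CelpFrame record from earlier; in practice the receiver would also need a convention for distinguishing a genuinely silent codebook contribution from a data frame:

```python
from typing import Optional

GAIN_EPSILON = 1e-6  # threshold for "substantially zero" (illustrative value)

def extract_aux_bits(frame: "CelpFrame") -> Optional[int]:
    """Return embedded auxiliary data if the fixed codebook gain is
    (substantially) zero; otherwise return None so normal decoding proceeds."""
    if abs(frame.fixed_gain) < GAIN_EPSILON:
        return frame.fixed_index   # e.g. a DTMF digit or TTY/TDD character code
    return None
```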
Claims (5)
- A method for transmitting non-speech information over a speech channel, CHARACTERIZED BY the steps of: transmitting non-speech information in place of pitch index information; and transmitting a pitch gain value having a value of substantially zero.
- The method of claim 1, CHARACTERIZED IN THAT the non-speech information is DTMF information.
- The method of claim 1, CHARACTERIZED IN THAT the non-speech information is TTY/TDD information.
- A method for transmitting non-speech information over a speech channel, CHARACTERIZED BY the steps of: transmitting first non-speech information in place of fixed index information; and transmitting an index gain value having a value of substantially zero.
- The method of claim 4, further CHARACTERIZED BY the steps of: transmitting second non-speech information in place of pitch index information; and transmitting a pitch gain value having a value of substantially zero.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US228102 | 1988-08-04 | | |
| US22810299A | 1999-01-11 | 1999-01-11 | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP1020848A2 (en) | 2000-07-19 |
Family
ID=22855803
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP00300042A Withdrawn EP1020848A2 (en) | 1999-01-11 | 2000-01-06 | Method for transmitting auxiliary information in a vocoder stream |
Country Status (7)
| Country | Link |
|---|---|
| EP (1) | EP1020848A2 (en) |
| JP (1) | JP2000209663A (en) |
| KR (1) | KR20000053407A (en) |
| CN (1) | CN1262577A (en) |
| AU (1) | AU6533799A (en) |
| BR (1) | BR0000002A (en) |
| CA (1) | CA2293165A1 (en) |
-
1999
- 1999-12-17 AU AU65337/99A patent/AU6533799A/en not_active Abandoned
- 1999-12-30 CA CA002293165A patent/CA2293165A1/en not_active Abandoned
-
2000
- 2000-01-03 BR BR0000002-7A patent/BR0000002A/en not_active Application Discontinuation
- 2000-01-06 EP EP00300042A patent/EP1020848A2/en not_active Withdrawn
- 2000-01-07 KR KR1020000000557A patent/KR20000053407A/en not_active Withdrawn
- 2000-01-10 CN CN00101021A patent/CN1262577A/en active Pending
- 2000-01-11 JP JP2766A patent/JP2000209663A/en active Pending
Cited By (27)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2002039762A3 (en) * | 2000-11-07 | 2003-02-13 | Ericsson Inc | Method of and apparatus for detecting tty type calls in cellular systems |
| EP1693832A3 (en) * | 2002-02-04 | 2007-06-20 | Fujitsu Limited | Method, apparatus and system for embedding data in and extracting data from encoded voice code |
| EP1333424B1 (en) * | 2002-02-04 | 2009-12-09 | Fujitsu Limited | Embedding data in encoded voice and extracting data from encoded voice |
| US7310596B2 (en) | 2002-02-04 | 2007-12-18 | Fujitsu Limited | Method and system for embedding and extracting data from encoded voice code |
| US7932851B1 (en) * | 2002-10-15 | 2011-04-26 | Itt Manufacturing Enterprises, Inc. | Ranging signal structure with hidden acquisition code |
| US8315860B2 (en) | 2002-11-13 | 2012-11-20 | Digital Voice Systems, Inc. | Interoperable vocoder |
| US7970606B2 (en) | 2002-11-13 | 2011-06-28 | Digital Voice Systems, Inc. | Interoperable vocoder |
| EP1420390A1 (en) * | 2002-11-13 | 2004-05-19 | Digital Voice Systems, Inc. | Interoperable vocoder |
| US7957963B2 (en) | 2003-01-30 | 2011-06-07 | Digital Voice Systems, Inc. | Voice transcoder |
| US7634399B2 (en) | 2003-01-30 | 2009-12-15 | Digital Voice Systems, Inc. | Voice transcoder |
| EP1455509A3 (en) * | 2003-03-03 | 2005-01-05 | FREQUENTIS GmbH | Method and system for speech recording |
| US8595002B2 (en) | 2003-04-01 | 2013-11-26 | Digital Voice Systems, Inc. | Half-rate vocoder |
| US8359197B2 (en) | 2003-04-01 | 2013-01-22 | Digital Voice Systems, Inc. | Half-rate vocoder |
| US7684980B2 (en) | 2003-09-05 | 2010-03-23 | Eads Secure Networks | Information flow transmission method whereby said flow is inserted into a speech data flow, and parametric codec used to implement same |
| WO2005024786A1 (en) | 2003-09-05 | 2005-03-17 | Eads Telecom | Information flow transmission method whereby said flow is inserted into a speech data flow, and parametric codec used to implement same |
| FR2859566A1 (en) * | 2003-09-05 | 2005-03-11 | Eads Telecom | METHOD FOR TRANSMITTING AN INFORMATION FLOW BY INSERTION WITHIN A FLOW OF SPEECH DATA, AND PARAMETRIC CODEC FOR ITS IMPLEMENTATION |
| US7752039B2 (en) | 2004-11-03 | 2010-07-06 | Nokia Corporation | Method and device for low bit rate speech coding |
| WO2006048733A1 (en) * | 2004-11-03 | 2006-05-11 | Nokia Corporation | Method and device for low bit rate speech coding |
| US20110131047A1 (en) * | 2006-09-15 | 2011-06-02 | Rwth Aachen | Steganography in Digital Signal Encoders |
| US8412519B2 (en) * | 2006-09-15 | 2013-04-02 | Telefonaktiebolaget L M Ericsson (Publ) | Steganography in digital signal encoders |
| US8433562B2 (en) | 2006-12-22 | 2013-04-30 | Digital Voice Systems, Inc. | Speech coder that determines pulsed parameters |
| US8036886B2 (en) | 2006-12-22 | 2011-10-11 | Digital Voice Systems, Inc. | Estimation of pulsed speech model parameters |
| US11270714B2 (en) | 2020-01-08 | 2022-03-08 | Digital Voice Systems, Inc. | Speech coding using time-varying interpolation |
| US12254895B2 (en) | 2021-07-02 | 2025-03-18 | Digital Voice Systems, Inc. | Detecting and compensating for the presence of a speaker mask in a speech signal |
| US11990144B2 (en) | 2021-07-28 | 2024-05-21 | Digital Voice Systems, Inc. | Reducing perceived effects of non-voice data in digital speech |
| US12451151B2 (en) | 2022-04-08 | 2025-10-21 | Digital Voice Systems, Inc. | Tone frame detector for digital speech |
| US12462814B2 (en) | 2023-10-06 | 2025-11-04 | Digital Voice Systems, Inc. | Bit error correction in digital speech |
Also Published As
| Publication number | Publication date |
|---|---|
| KR20000053407A (en) | 2000-08-25 |
| AU6533799A (en) | 2000-07-13 |
| JP2000209663A (en) | 2000-07-28 |
| BR0000002A (en) | 2002-01-02 |
| CA2293165A1 (en) | 2000-07-11 |
| CN1262577A (en) | 2000-08-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP0920693B1 (en) | Method and apparatus for improving the voice quality of tandemed vocoders | |
| JP4927257B2 (en) | Variable rate speech coding | |
| US6615169B1 (en) | High frequency enhancement layer coding in wideband speech codec | |
| KR100594670B1 (en) | Automatic speech recognition system and method, automatic speaker recognition system | |
| KR100487943B1 (en) | Speech coding | |
| KR100574031B1 (en) | Speech Synthesis Method and Apparatus and Voice Band Expansion Method and Apparatus | |
| EP1020848A2 (en) | Method for transmitting auxiliary information in a vocoder stream | |
| US20020077812A1 (en) | Voice code conversion apparatus | |
| EP1535277B1 (en) | Bandwidth-adaptive quantization | |
| US8055499B2 (en) | Transmitter and receiver for speech coding and decoding by using additional bit allocation method | |
| US6728669B1 (en) | Relative pulse position in celp vocoding | |
| JPH11259100A (en) | Method for encoding exciting vector | |
| US20030065507A1 (en) | Network unit and a method for modifying a digital signal in the coded domain | |
| AU6203300A (en) | Coded domain echo control | |
| JPH1097295A (en) | Acoustic signal encoding method and decoding method | |
| EP1132893A2 (en) | Constraining pulse positions in CELP vocoding | |
| US7584096B2 (en) | Method and apparatus for encoding speech | |
| EP1387351B1 (en) | Speech encoding device and method having TFO (Tandem Free Operation) function | |
| JP4230550B2 (en) | Speech encoding method and apparatus, and speech decoding method and apparatus | |
| JP3700310B2 (en) | Vector quantization apparatus and vector quantization method | |
| JP2005534984A (en) | Voice communication unit and method for reducing errors in voice frames | |
| EP0930608A1 (en) | Vocoder with efficient, fault tolerant excitation vector encoding | |
| GB2365297A (en) | Data modem compatible with speech codecs |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
| AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
| AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK;RO;SI |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
|
| 18W | Application withdrawn |
Withdrawal date: 20010622 |
|
| RIC1 | Information provided on ipc code assigned before grant |
Free format text: 7G 10L 19/00 A, 7G 10L 19/14 B, 7G 10L 11/02 B |