EP3249373B1 - Integrated speech and audio decoding method - Google Patents
Integrated speech and audio decoding method
- Publication number
- EP3249373B1 (application EP17173025.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- signal
- input signal
- characteristic
- speech
- audio
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/08—Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
- G10L19/12—Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/18—Vocoders using multiple modes
- G10L19/20—Vocoders using multiple modes using sound class specific coding, hybrid encoders or object based coding
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/18—Vocoders using multiple modes
- G10L19/22—Mode decision, i.e. based on audio signal content versus external parameters
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C2207/00—Indexing scheme relating to arrangements for writing information into, or reading information out from, a digital store
- G11C2207/16—Solid state audio
Definitions
- the present invention relates to a method and apparatus for integrally encoding and decoding a speech signal and an audio signal. More particularly, the present invention relates to a method and apparatus that may include an encoding module and a decoding module operating in different structures for a speech signal and an audio signal, and that may effectively select an internal module according to a characteristic of an input signal to thereby effectively encode the speech signal and the audio signal.
- Speech signals and audio signals have different characteristics. Therefore, speech codecs for speech signals and audio codecs for audio signals have been independently researched using unique characteristics of the speech signals and the audio signals.
- a currently widely used speech codec, for example the Adaptive Multi-Rate Wideband Plus (AMR-WB+) codec, has a Code Excitation Linear Prediction (CELP) structure, and may extract and quantize speech parameters based on a Linear Predictive Coder (LPC) according to a speech model.
- a widely used audio codec, for example the High-Efficiency Advanced Audio Coding version 2 (HE-AAC v2) codec, may optimally quantize frequency coefficients from a psychoacoustic point of view by considering the acoustic characteristics of human hearing in the frequency domain.
- US 6,134,518 A describes an apparatus for digitally encoding an input audio signal for storage or transmission.
- a distinguishing parameter is measured from the input signal. It is determined from the measured distinguishing parameter whether the input signal contains an audio signal of a first type or a second type.
- First and second coders are provided for digitally encoding the input signal using first and second coding methods respectively and a switching arrangement directs, at any particular time, the generation of an output signal by encoding the input signal using either the first or second coders according to whether the input signal contains an audio signal of the first type or the second type at that time.
- the present invention provides a decoding method for integrally decoding a speech signal and an audio signal according to the claim.
- the invention is defined solely by the appended claim.
- an apparatus and method for integrally encoding (not encompassed by the wording of the claims) and decoding a speech signal and an audio signal that may provide excellent sound quality for both speech and audio signals at various bitrates by effectively selecting an internal module according to a characteristic of an input signal.
- an apparatus and method for integrally encoding (not encompassed by the wording of the claims) and decoding a speech signal and an audio signal that may provide excellent sound quality for both speech and audio signals at various bitrates by appropriately combining a speech encoder and an audio encoder.
- FIG 1 is a block diagram illustrating an encoding apparatus 100 for integrally encoding a speech signal and an audio signal. (Not encompassed by the wording of the claims.)
- the encoding apparatus 100 may include an input signal analyzer 110, a first conversion encoder 120, a Linear Predictive Coding (LPC) encoder 130, and a bitstream generator 140.
- the input signal analyzer 110 may analyze a characteristic of an input signal. In this instance, the input signal analyzer 110 may analyze the characteristic of the input signal to classify the input signal as any one of an audio characteristic signal, a speech characteristic signal, and a silence state signal.
- the speech characteristic signal may be classified into any one of a steady-harmonic state, a low steady-harmonic state, and a steady-noise state.
- the audio characteristic signal may be classified into any one of a complex-harmonic state and a complex-noisy state.
- a state of the input signal may be further classified as follows. Initially, a steady-harmonic (SH) state:
- the SH state may correspond to a signal interval where a harmonic state of a signal explicitly and stably appears.
- the signal interval may include a voiced interval.
- a single tone of sinusoidal signals may be classified into the SH state.
- a low steady-harmonic (LSH) state may be similar to the SH state; however, it may have a relatively longer harmonic periodicity and show a strong and steady characteristic in a low frequency band.
- a voiced interval of a male speech may correspond to the LSH state.
- a steady-noise (SN) state: white noise may correspond to the SN state.
- an unvoiced interval may be included in the SN state.
- a complex-harmonic (CH) state: a signal interval where a plurality of single tone components are mixed to construct a complex harmonic structure may correspond to the CH state. Generally, play intervals of music may be included in the CH state.
- a complex-noisy (CN) state: a signal containing unstable noise components may be classified into the CN state.
- for example, ordinary peripheral noise, an attack signal in a music play interval, and the like may correspond to the CN state.
- a silence (Si) state: an interval with low energy may be classified into the Si state.
- An output result of the input signal analyzer 110 may be used to select one of the first conversion encoder 120 and the LPC encoder 130. Also, the output result of the input signal analyzer 110 may be used to select one of a time domain encoder 131 and a second conversion encoder 132, when performing LPC encoding.
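- As an illustration only (not encompassed by the claims and not the patent's own algorithm), the sketch below shows how such an input signal analyzer might label a frame using frame energy, a normalized autocorrelation peak, and spectral flatness as rough proxies for the silence, noise-like, and harmonic states described above; all function names and thresholds are hypothetical.

```python
import numpy as np

def classify_frame(frame, sr=16000):
    """Hypothetical frame classifier returning 'Si', 'SN', 'SH', 'LSH' or 'CH'
    (the CN state is omitted for brevity). Thresholds are illustrative only."""
    energy = np.mean(frame ** 2)
    if energy < 1e-6:                      # very low energy -> silence (Si) state
        return 'Si'

    # normalized autocorrelation peak in a 60-400 Hz pitch range as a crude
    # harmonicity measure (assumes frames of at least ~20 ms at 16 kHz)
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    ac = ac / (ac[0] + 1e-12)
    lag_lo, lag_hi = int(sr / 400), int(sr / 60)
    peak_lag = lag_lo + int(np.argmax(ac[lag_lo:lag_hi]))
    harmonicity = ac[peak_lag]

    # spectral flatness: near 1 for noise-like signals, near 0 for tonal ones
    spec = np.abs(np.fft.rfft(frame)) + 1e-12
    flatness = np.exp(np.mean(np.log(spec))) / np.mean(spec)

    if harmonicity < 0.3 and flatness > 0.5:
        return 'SN'                        # steady-noise (e.g. unvoiced) state
    if harmonicity >= 0.3 and flatness < 0.5:
        # a long pitch period with low-band dominance suggests the LSH state
        return 'LSH' if peak_lag > sr / 120 else 'SH'
    return 'CH'                            # mixed tonal content -> complex-harmonic
```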
- when the input signal is an audio characteristic signal, the first conversion encoder 120 may convert a core band of the input signal to a frequency domain signal and encode the core band of the input signal. Also, when the input signal is a speech characteristic signal, the LPC encoder 130 may perform LPC encoding of the core band of the input signal.
- the LPC encoder 130 may include the time domain encoder 131 and the second conversion encoder 132.
- the time domain encoder 131 may perform time-domain encoding of the input signal.
- the second conversion encoder 132 may perform fast Fourier transform (FFT) encoding of the input signal.
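- A minimal sketch, assuming the hypothetical state labels above, of how the analyzer output could route a frame to the first conversion encoder or to one of the two LPC sub-encoders; the encoder callables are placeholders, not the patent's modules.

```python
from typing import Callable, Tuple

def select_core_encoder(state: str,
                        mdct_encode: Callable,   # first conversion encoder
                        celp_encode: Callable,   # LPC time domain encoder
                        fft_encode: Callable     # LPC second conversion encoder
                        ) -> Tuple[str, Callable]:
    """Return a (mode flag, encoder) pair for the analyzed state label."""
    if state in ('CH', 'CN'):            # audio characteristic signal
        return 'transform', mdct_encode
    if state in ('SH', 'LSH'):           # steady harmonic speech
        return 'lpc_time', celp_encode
    return 'lpc_freq', fft_encode        # SN and other noise-like speech states
```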
- the bitstream generator 140 may generate a bitstream using information of the first conversion encoder 120 and information of the LPC encoder 130.
- the encoding apparatus 100 may further include a stereo encoder (not shown) to down-mix the input signal to a mono signal, and to extract stereo sound image information.
- the stereo encoder may selectively apply at least one parameter according to the characteristic of the input signal.
- FIG 2 is a block diagram illustrating an encoding apparatus 200 for integrally encoding a speech signal and an audio signal; the encoding apparatus 200 may include an input signal analyzer 210, a stereo encoder 250, a frequency band expander 260, a first conversion encoder 220, an LPC encoder 230, and a bitstream generator 240.
- the stereo encoder 250 may down-mix the input signal to a mono signal, and may extract stereo sound image information. For example, when the input signal is a stereo signal, the stereo encoder 250 may down-mix the input signal to the mono signal, and may extract the stereo sound image information. An operation of the stereo encoder 250 will be further described in detail with reference to FIG 3 .
- the stereo encoder 250 may include a basic processor 351, a speech signal processor 352, and an audio signal processor 353.
- the stereo encoder 250 may utilize a different encoding module based on the characteristic of the input signal. For example, information of the input signal analyzed by the input signal analyzer 210 may be utilized in the stereo encoder 250.
- a parameter to be used in the stereo encoder 250 may be adjusted based on the analyzed input signal. For example, when the characteristic of the input signal corresponds to a complex state, the input signal may have a strong audio characteristic and may be processed by the audio signal processor 353.
- when the input signal has a strong speech characteristic, the input signal may be processed by the speech signal processor 352.
- other signals may be processed by the basic processor 351.
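- As one possible, hypothetical form of the stereo processing described above (not the patent's method), the following sketch down-mixes a stereo frame to mono and extracts per-band inter-channel level differences as a stand-in for the stereo sound image information.

```python
import numpy as np

def stereo_to_mono_with_image(left, right, n_bands=8):
    """Hypothetical stereo pre-processing: passive down-mix plus per-band
    inter-channel level differences (in dB) as illustrative stereo image info."""
    mono = 0.5 * (left + right)

    L = np.abs(np.fft.rfft(left)) ** 2
    R = np.abs(np.fft.rfft(right)) ** 2
    edges = np.linspace(0, len(L), n_bands + 1, dtype=int)

    ild = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        el = np.sum(L[lo:hi]) + 1e-12
        er = np.sum(R[lo:hi]) + 1e-12
        ild.append(10.0 * np.log10(el / er))   # level difference per band
    return mono, np.array(ild)
```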
- the frequency band expander 260 may generate information for expanding the input signal to a high frequency band signal.
- the frequency band expander 260 may selectively apply at least one SBR standard according to the characteristic of the input signal.
- the frequency band expander 260 will be further described in detail with reference to FIG. 4 .
- FIG 4 is a block diagram illustrating an example of the frequency band expander 260 of FIG 2 .
- when the input signal is an audio characteristic signal, the audio signal processor 461 may allocate and process a relatively large number of bits.
- when the input signal is a speech characteristic signal, most high frequency band signals may be unvoiced noise signals.
- accordingly, an operation of the frequency band expander 260 may be applied differently from the complex state.
- since a harmonic state of a male speech is clearly different from a harmonic state of a female speech, the male speech may be relatively less sensitive to high frequency information in comparison to the female speech.
- accordingly, the SH processor 462 may weaken white noise encoding with respect to the male speech and may also configure the encoding so that a high frequency domain is not predicted.
- the LSH processor 463 may encode the input signal to be suitable for a characteristic of the female speech.
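- The sketch below illustrates, under stated assumptions, one way the information for expanding to a high frequency band could be represented: a coarse high-band energy envelope expressed relative to the core band that a decoder would replicate. It is not the SBR standard; the split point and band count are arbitrary.

```python
import numpy as np

def high_band_envelope(frame, split_bin=None, n_env_bands=4):
    """Hypothetical band-expansion parameters: a coarse high-band energy
    envelope in dB relative to the core (low) band the decoder replicates."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    if split_bin is None:
        split_bin = len(spec) // 2          # assume the core band is the lower half

    low, high = spec[:split_bin], spec[split_bin:]
    ref = np.mean(low) + 1e-12

    edges = np.linspace(0, len(high), n_env_bands + 1, dtype=int)
    env = [10.0 * np.log10((np.mean(high[lo:hi]) + 1e-12) / ref)
           for lo, hi in zip(edges[:-1], edges[1:])]
    return np.array(env)                    # per-band gains for the decoder
```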
- the first conversion encoder 220 may convert the input signal, excluding the high frequency band signal, to a frequency domain signal and thereby encode it.
- specifically, the first conversion encoder 220 may perform encoding of the core band where a frequency band expansion is not performed.
- the first conversion encoder 220 may use a Modified Discrete Cosine Transform (MDCT) encoding scheme.
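- For reference, a minimal direct-form MDCT with a sine window is sketched below; a real codec would use a lapped, fast implementation with 50% overlap between consecutive blocks so that time-domain aliasing cancels on synthesis.

```python
import numpy as np

def mdct(block):
    """Direct-form MDCT: 2N windowed samples -> N coefficients (O(N^2),
    for illustration only; fast implementations use an FFT)."""
    two_n = len(block)
    n = two_n // 2
    window = np.sin(np.pi / two_n * (np.arange(two_n) + 0.5))   # sine window
    x = block * window
    k = np.arange(n)[:, None]
    m = np.arange(two_n)[None, :]
    basis = np.cos(np.pi / n * (m + 0.5 + n / 2) * (k + 0.5))
    return basis @ x
```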
- the LPC encoder 230 may perform LPC encoding of the input signal excluding the high frequency band signal.
- the LPC encoder 230 may perform LPC encoding of the core band where a frequency band expansion is not performed.
- the LPC encoder 230 may include a time domain encoder 231 and a second conversion encoder 232.
- the time domain encoder 231 may perform time-domain encoding of the input signal. Specifically, depending on whether a harmonic state is steady or low, for example, depending on a steady state result, the time domain encoder 231 may perform time-domain encoding with respect to an LPC processed signal, using a Code Excitation Linear Prediction (CELP) scheme.
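- The CELP scheme mentioned here builds on short-term linear prediction; the sketch below computes LPC coefficients with the Levinson-Durbin recursion and the prediction residual that a CELP stage would then model with adaptive and fixed codebooks. Function names are illustrative, not the patent's.

```python
import numpy as np

def lpc_levinson(frame, order=10):
    """LPC coefficients a[0..order] (a[0] = 1) via the Levinson-Durbin
    recursion on the frame autocorrelation, plus the prediction error."""
    n = len(frame)
    r = np.correlate(frame, frame, mode='full')[n - 1:n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] + 1e-12
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= (1.0 - k * k)
    return a, err

def lpc_residual(frame, a):
    """Short-term prediction residual A(z)*x: the excitation that a CELP
    stage would quantize with adaptive and fixed codebooks."""
    return np.convolve(frame, a)[:len(frame)]
```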
- the second conversion encoder 232 may perform FFT encoding of the input signal. Specifically, the second conversion encoder 232 may perform encoding in a frequency domain according to a harmonic state, using an FFT scheme that transforms the input signal to the frequency domain signal. Here, the second conversion encoder 232 may construct the resolution in various ways based on the characteristic of the input signal.
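- A minimal sketch, assuming the hypothetical state labels used earlier, of how the second conversion encoder's resolution could follow the signal characteristic: a longer analysis window for steady harmonic content, a shorter one for noise-like content.

```python
import numpy as np

def fft_coefficients(frame, state):
    """Hypothetical variable-resolution analysis: steadier (more harmonic)
    states get a longer window for finer frequency resolution, noisier
    states a shorter one for better time resolution."""
    win_len = {'SH': 1024, 'LSH': 1024, 'SN': 256}.get(state, 512)
    x = frame[:win_len] if len(frame) >= win_len \
        else np.pad(frame, (0, win_len - len(frame)))
    return np.fft.rfft(x * np.hanning(win_len))   # coefficients to quantize
```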
- the bitstream generator 240 may generate a bitstream using the stereo sound image information, information for expanding the input signal to the high frequency band signal, information of the first conversion encoder 220, and information of the LPC encoder 230.
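- As a toy illustration only (the actual bitstream syntax is not specified here), the following sketch multiplexes a mode flag, the stereo image parameters, the band-expansion envelope, and the core-band payload into one frame.

```python
import struct
import numpy as np

MODES = {'transform': 0, 'lpc_time': 1, 'lpc_freq': 2}

def pack_frame(mode, stereo_ild, band_env, core_payload):
    """Toy frame multiplexer: 1-byte mode flag, element counts, then the
    stereo parameters, band-expansion envelope and core-band payload."""
    header = struct.pack('<BHHH', MODES[mode],
                         len(stereo_ild), len(band_env), len(core_payload))
    body = (np.asarray(stereo_ild, np.float32).tobytes()
            + np.asarray(band_env, np.float32).tobytes()
            + bytes(core_payload))
    return header + body
```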
- the encoding apparatus 200 may further include a psychological acoustic unit 270 to control the first conversion encoder 220 using an acoustic characteristic of a human being.
- FIG 5 is a block diagram illustrating a decoding apparatus 500 for integrally decoding a speech signal and an audio signal according to a decoding method of an embodiment of the present invention.
- the decoding apparatus 500 may include a bitstream analyzer 510, a first conversion decoder 520, an LPC decoder 530, a frequency band synthesizer 540, and a stereo decoder 550.
- the bitstream analyzer 510 may analyze an input bitstream signal.
- the first conversion decoder 520 may convert the bitstream signal to a frequency domain signal and decode the bitstream signal.
- the LPC decoder 530 may perform LPC decoding of the bitstream signal.
- the LPC decoder may include a time domain decoder 531 to decode the input bitstream in a time domain, and a second conversion decoder 532 to decode the input bitstream in a frequency domain according to a characteristic of the input bitstream.
- the frequency band synthesizer 540 may synthesize a frequency band of the bitstream signal.
- the stereo decoder 550 may decode the bitstream signal to a stereo signal.
- the decoding apparatus 500 may perform an inverse operation of the encoding apparatuses 100 and 200.
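- Mirroring the claimed decoding method, the sketch below routes the core band to a time domain or transform decoding module according to the mode flag written by the encoder, then applies band synthesis and stereo reconstruction; all callables are hypothetical placeholders.

```python
def decode_frame(mode, payload, decoders, synthesize_band, stereo_decode,
                 band_env, stereo_ild):
    """decoders maps the encoder's mode flag ('transform', 'lpc_time',
    'lpc_freq') to the matching core-band decoding module: speech
    characteristic frames take a time domain path and audio characteristic
    frames a transform path, consistent with the claim."""
    core = decoders[mode](payload)                 # core band, no expansion yet
    full_band = synthesize_band(core, band_env)    # frequency band synthesizer
    return stereo_decode(full_band, stereo_ild)    # stereo decoder
```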
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Mathematical Physics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Stereophonic System (AREA)
Claims (1)
- A decoding method for integrally decoding a speech signal and an audio signal, the decoding method comprising: decoding an encoded input signal in a bitstream according to whether a characteristic of the encoded input signal is an audio characteristic or a speech characteristic; synthesizing a frequency band of the bitstream; and decoding the bitstream into a stereo signal, wherein the decoding of the encoded input signal is performed by one of the following: decoding a core band of the encoded input signal using a time domain decoding module, when the input signal has the speech characteristic, and decoding the core band of the encoded input signal using a transform decoding module, when the input signal has the audio characteristic, the core band being the frequency band of the input signal to which no frequency band expansion was applied during the step of encoding the input signal, the encoded input signal being processed by the time domain decoding module or the transform decoding module according to whether the characteristic of the encoded input signal is the audio characteristic or the speech characteristic.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP25187160.4A EP4648047A1 (fr) | 2008-07-14 | 2009-07-14 | Procédé de décodage vocal et audio intégrés |
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR20080068369 | 2008-07-14 | ||
| KR20080071218 | 2008-07-22 | ||
| KR1020090062070A KR101261677B1 (ko) | 2008-07-14 | 2009-07-08 | 음성/음악 통합 신호의 부호화/복호화 장치 |
| PCT/KR2009/003861 WO2010008179A1 (fr) | 2008-07-14 | 2009-07-14 | Appareil et procédé de codage et de décodage vocal et audio intégrés |
| EP09798082.5A EP2302345B1 (fr) | 2008-07-14 | 2009-07-14 | Appareil de codage et de décodage vocal et audio intégrés |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP09798082.5A Division EP2302345B1 (fr) | 2008-07-14 | 2009-07-14 | Appareil de codage et de décodage vocal et audio intégrés |
Related Child Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP25187160.4A Division-Into EP4648047A1 (fr) | 2008-07-14 | 2009-07-14 | Procédé de décodage vocal et audio intégrés |
| EP25187160.4A Division EP4648047A1 (fr) | 2008-07-14 | 2009-07-14 | Procédé de décodage vocal et audio intégrés |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| EP3249373A1 EP3249373A1 (fr) | 2017-11-29 |
| EP3249373B1 true EP3249373B1 (fr) | 2025-09-10 |
Family
ID=41816656
Family Applications (3)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP09798082.5A Active EP2302345B1 (fr) | 2008-07-14 | 2009-07-14 | Appareil de codage et de décodage vocal et audio intégrés |
| EP17173025.2A Active EP3249373B1 (fr) | 2008-07-14 | 2009-07-14 | Procédé de décodage vocal et audio intégrés |
| EP25187160.4A Pending EP4648047A1 (fr) | 2008-07-14 | 2009-07-14 | Procédé de décodage vocal et audio intégrés |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP09798082.5A Active EP2302345B1 (fr) | 2008-07-14 | 2009-07-14 | Appareil de codage et de décodage vocal et audio intégrés |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP25187160.4A Pending EP4648047A1 (fr) | 2008-07-14 | 2009-07-14 | Procédé de décodage vocal et audio intégrés |
Country Status (5)
| Country | Link |
|---|---|
| US (5) | US8990072B2 (fr) |
| EP (3) | EP2302345B1 (fr) |
| KR (2) | KR101261677B1 (fr) |
| CN (2) | CN104299618B (fr) |
| WO (1) | WO2010008179A1 (fr) |
Families Citing this family (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101261677B1 (ko) | 2008-07-14 | 2013-05-06 | 광운대학교 산학협력단 | 음성/음악 통합 신호의 부호화/복호화 장치 |
| US20120095729A1 (en) * | 2010-10-14 | 2012-04-19 | Electronics And Telecommunications Research Institute | Known information compression apparatus and method for separating sound source |
| CN103035248B (zh) * | 2011-10-08 | 2015-01-21 | 华为技术有限公司 | 音频信号编码方法和装置 |
| US9111531B2 (en) * | 2012-01-13 | 2015-08-18 | Qualcomm Incorporated | Multiple coding mode signal classification |
| EP2981956B1 (fr) | 2013-04-05 | 2022-11-30 | Dolby International AB | Système de traitement audio |
| CN103413553B (zh) | 2013-08-20 | 2016-03-09 | 腾讯科技(深圳)有限公司 | 音频编码方法、音频解码方法、编码端、解码端和系统 |
| KR102552293B1 (ko) | 2014-02-24 | 2023-07-06 | 삼성전자주식회사 | 신호 분류 방법 및 장치, 및 이를 이용한 오디오 부호화방법 및 장치 |
| BR112016022466B1 (pt) | 2014-04-17 | 2020-12-08 | Voiceage Evs Llc | método para codificar um sinal sonoro, método para decodificar um sinal sonoro, dispositivo para codificar um sinal sonoro e dispositivo para decodificar um sinal sonoro |
| FR3020732A1 (fr) * | 2014-04-30 | 2015-11-06 | Orange | Correction de perte de trame perfectionnee avec information de voisement |
| US9883308B2 (en) | 2014-07-01 | 2018-01-30 | Electronics And Telecommunications Research Institute | Multichannel audio signal processing method and device |
| FR3024582A1 (fr) | 2014-07-29 | 2016-02-05 | Orange | Gestion de la perte de trame dans un contexte de transition fd/lpd |
| KR102398124B1 (ko) | 2015-08-11 | 2022-05-17 | 삼성전자주식회사 | 음향 데이터의 적응적 처리 |
| KR20220009563A (ko) | 2020-07-16 | 2022-01-25 | 한국전자통신연구원 | 오디오 신호의 부호화 및 복호화 방법과 이를 수행하는 부호화기 및 복호화기 |
| KR102837318B1 (ko) | 2021-05-24 | 2025-07-23 | 한국전자통신연구원 | 오디오 신호의 부호화 및 복호화 방법과 그 방법을 수행하는 부호화기 및 복호화기 |
Family Cites Families (22)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| SE504010C2 (sv) * | 1995-02-08 | 1996-10-14 | Ericsson Telefon Ab L M | Förfarande och anordning för prediktiv kodning av tal- och datasignaler |
| US6134518A (en) | 1997-03-04 | 2000-10-17 | International Business Machines Corporation | Digital audio signal coding using a CELP coder and a transform coder |
| JP3211762B2 (ja) * | 1997-12-12 | 2001-09-25 | 日本電気株式会社 | 音声及び音楽符号化方式 |
| ES2247741T3 (es) * | 1998-01-22 | 2006-03-01 | Deutsche Telekom Ag | Metodo para conmutacion controlada por señales entre esquemas de codificacion de audio. |
| US7266501B2 (en) * | 2000-03-02 | 2007-09-04 | Akiba Electronics Institute Llc | Method and apparatus for accommodating primary content audio and secondary content remaining audio capability in the digital audio production process |
| US6658383B2 (en) | 2001-06-26 | 2003-12-02 | Microsoft Corporation | Method for coding speech and music signals |
| US7555434B2 (en) * | 2002-07-19 | 2009-06-30 | Nec Corporation | Audio decoding device, decoding method, and program |
| JP4445328B2 (ja) * | 2004-05-24 | 2010-04-07 | パナソニック株式会社 | 音声・楽音復号化装置および音声・楽音復号化方法 |
| JP4871501B2 (ja) * | 2004-11-04 | 2012-02-08 | パナソニック株式会社 | ベクトル変換装置及びベクトル変換方法 |
| DE102005032724B4 (de) * | 2005-07-13 | 2009-10-08 | Siemens Ag | Verfahren und Vorrichtung zur künstlichen Erweiterung der Bandbreite von Sprachsignalen |
| KR100647336B1 (ko) * | 2005-11-08 | 2006-11-23 | 삼성전자주식회사 | 적응적 시간/주파수 기반 오디오 부호화/복호화 장치 및방법 |
| TWI333643B (en) * | 2006-01-18 | 2010-11-21 | Lg Electronics Inc | Apparatus and method for encoding and decoding signal |
| KR20070077652A (ko) * | 2006-01-24 | 2007-07-27 | 삼성전자주식회사 | 적응적 시간/주파수 기반 부호화 모드 결정 장치 및 이를위한 부호화 모드 결정 방법 |
| KR101393298B1 (ko) | 2006-07-08 | 2014-05-12 | 삼성전자주식회사 | 적응적 부호화/복호화 방법 및 장치 |
| WO2008035949A1 (fr) | 2006-09-22 | 2008-03-27 | Samsung Electronics Co., Ltd. | Procédé, support et système de codage et/ou de décodage de signaux audio reposant sur l'extension de largeur de bande et le codage stéréo |
| US20080114608A1 (en) * | 2006-11-13 | 2008-05-15 | Rene Bastien | System and method for rating performance |
| KR101434198B1 (ko) * | 2006-11-17 | 2014-08-26 | 삼성전자주식회사 | 신호 복호화 방법 |
| CN101512909B (zh) * | 2006-11-30 | 2012-12-19 | 松下电器产业株式会社 | 信号处理装置 |
| KR100964402B1 (ko) * | 2006-12-14 | 2010-06-17 | 삼성전자주식회사 | 오디오 신호의 부호화 모드 결정 방법 및 장치와 이를 이용한 오디오 신호의 부호화/복호화 방법 및 장치 |
| KR101411901B1 (ko) * | 2007-06-12 | 2014-06-26 | 삼성전자주식회사 | 오디오 신호의 부호화/복호화 방법 및 장치 |
| KR101261677B1 (ko) * | 2008-07-14 | 2013-05-06 | 광운대학교 산학협력단 | 음성/음악 통합 신호의 부호화/복호화 장치 |
| AU2014211479B2 (en) * | 2013-01-29 | 2017-02-23 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for providing an encoded audio information, method for providing a decoded audio information, computer program and encoded representation using a signal-adaptive bandwidth extension |
-
2009
- 2009-07-08 KR KR1020090062070A patent/KR101261677B1/ko active Active
- 2009-07-14 US US13/054,376 patent/US8990072B2/en active Active
- 2009-07-14 WO PCT/KR2009/003861 patent/WO2010008179A1/fr not_active Ceased
- 2009-07-14 EP EP09798082.5A patent/EP2302345B1/fr active Active
- 2009-07-14 EP EP17173025.2A patent/EP3249373B1/fr active Active
- 2009-07-14 CN CN201410479883.9A patent/CN104299618B/zh active Active
- 2009-07-14 EP EP25187160.4A patent/EP4648047A1/fr active Pending
- 2009-07-14 CN CN200980135842.5A patent/CN102150024B/zh active Active
-
2012
- 2012-07-13 KR KR1020120076634A patent/KR101565633B1/ko active Active
-
2015
- 2015-01-26 US US14/605,006 patent/US9711159B2/en active Active
-
2017
- 2017-06-09 US US15/618,689 patent/US10121482B2/en active Active
-
2018
- 2018-11-02 US US16/179,120 patent/US10777212B2/en active Active
-
2020
- 2020-09-11 US US17/018,295 patent/US11456002B2/en active Active
Also Published As
| Publication number | Publication date |
|---|---|
| CN104299618A (zh) | 2015-01-21 |
| CN102150024B (zh) | 2014-10-22 |
| US8990072B2 (en) | 2015-03-24 |
| EP2302345A4 (fr) | 2012-10-24 |
| EP2302345A1 (fr) | 2011-03-30 |
| US9711159B2 (en) | 2017-07-18 |
| US20110112829A1 (en) | 2011-05-12 |
| WO2010008179A1 (fr) | 2010-01-21 |
| KR101261677B1 (ko) | 2013-05-06 |
| KR20120089221A (ko) | 2012-08-09 |
| KR101565633B1 (ko) | 2015-11-13 |
| CN102150024A (zh) | 2011-08-10 |
| EP2302345B1 (fr) | 2017-06-21 |
| CN104299618B (zh) | 2019-07-12 |
| US20170345435A1 (en) | 2017-11-30 |
| EP4648047A1 (fr) | 2025-11-12 |
| EP3249373A1 (fr) | 2017-11-29 |
| KR20100007749A (ko) | 2010-01-22 |
| US11456002B2 (en) | 2022-09-27 |
| US20200411022A1 (en) | 2020-12-31 |
| US10121482B2 (en) | 2018-11-06 |
| US10777212B2 (en) | 2020-09-15 |
| US20150154974A1 (en) | 2015-06-04 |
| US20190074022A1 (en) | 2019-03-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11456002B2 (en) | Apparatus and method for encoding and decoding of integrated speech and audio utilizing a band expander with a spectral band replication (SBR) to output the SBR to either time or transform domain encoding according to the input signal | |
| US12205599B2 (en) | Apparatus for encoding and decoding of integrated speech and audio | |
| KR101785885B1 (ko) | 적응적 대역폭 확장 및 그것을 위한 장치 | |
| KR101792712B1 (ko) | 주파수 도메인 내의 선형 예측 코딩 기반 코딩을 위한 저주파수 강조 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
|
| AC | Divisional application: reference to earlier application |
Ref document number: 2302345 Country of ref document: EP Kind code of ref document: P |
|
| AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
| 17P | Request for examination filed |
Effective date: 20180529 |
|
| RBV | Designated contracting states (corrected) |
Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
| 17Q | First examination report despatched |
Effective date: 20191004 |
|
| GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
| INTG | Intention to grant announced |
Effective date: 20250320 |
|
| GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
| GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
| P01 | Opt-out of the competence of the unified patent court (upc) registered |
Free format text: CASE NUMBER: UPC_APP_0253_3249373/2025 Effective date: 20250714 |
|
| AC | Divisional application: reference to earlier application |
Ref document number: 2302345 Country of ref document: EP Kind code of ref document: P |
|
| AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
|
| REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
| REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
| REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602009065590 Country of ref document: DE |
|
| REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
| REG | Reference to a national code |
Ref country code: NL Ref legal event code: FP |