WO2008037112A1 - Apparatus and method for processing video data - Google Patents
Apparatus and method for processing video data Download PDFInfo
- Publication number
- WO2008037112A1 (PCT/CN2006/002517)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- module
- data
- processing
- decoding
- blocks
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44004—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/438—Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving encoded video stream packets from an IP network
- H04N21/4382—Demodulation or channel decoding, e.g. QPSK demodulation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/804—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Video decoding for any standard mainly includes the steps of entropy decoding (E), inverse transformation/inverse quantization (ITIQ) and motion compensation (MC). These steps can work independently and in parallel after getting the required information. Known video decoders suffer from bottlenecks resulting from centrally organized processing. An apparatus for decoding video data organized in blocks comprises a plurality of processing modules (E, ITIQ, MC) to be passed sequentially by the data, wherein the data output of a first module (101) triggers the start of the further processing of said data in a subsequent second module (301,302), and wherein at least the first and the second module can process simultaneously two or more of said data blocks and include pipeline buffers for storing the two or more data blocks. An advantage of the invention is that the idle time of processing blocks is reduced, which improves decoding efficiency.
Description
Apparatus and method for processing video data
Field of the invention
This invention relates to an apparatus and a method for processing video data. In particular, the processing can be performed in the context of decoding video data.
Background
For today's video standards, e.g. MPEG-2, AVS, VC-1 and H.264, the decoding procedure mainly includes four stages: entropy or bit-stream decoding, inverse transformation and inverse quantization, motion compensation, and de-blocking filtering (except for MPEG-2). For supporting high-resolution HD video, a high-performance decoding process is required. All current video standards use macroblocks (MBs) as the processing unit. The number (or percentage) of processing cycles that is available per MB is limited, so that parallel processing is used. In general, each of the four above-mentioned blocks can work independently after getting the required information. Therefore the four stages can be executed in parallel. E.g. the de-blocking filter requires de-blocking information, such as motion vectors (MV) and skip flags, as well as the computation result after motion compensation.
The MBs are processed one by one, i.e. processing of a new MB begins after the previous MB is finished, and each processing block handles one MB at a time. This is depicted in Fig.1. Entropy decoding E for a MB comprises decoding the non-residual syntax elements 10a and decoding the residual syntax elements 10b. Then, inverse transformation and inverse quantization ITIQ are performed 10c. In the next step, motion compensation MC, the prediction data are computed 10d and the picture data are reconstructed 10e. The single blocks work simultaneously, but all on the same MB. Each block starts working when it has enough input data from the previous block. The duration of the process per MB is the cycle number c10 from decoding the first MB-level syntax to getting the reconstructed data for the last sub-block. The same steps 11a-11e are performed for the next MB, wherein the first step of decoding 11a is executed after the last step of reconstructing the current MB 10e is finished.
Of the four stages of the decoding procedure, the motion compensation stage usually needs the most cycles, since for many bit-streams the motion vectors are irregular, so that some motion vectors need many cycles. The entropy decoding of high-data-rate bit-streams, including non-residual and residual data, also costs many cycles. During such complicated operations, the respective other stages are idle.
Summary of the Invention
Known video processing systems suffer from the bottlenecks that result from centrally organized processing stages. Shared data busses, shared memories and centralized control units reduce the processing performance. The present invention provides a more decentralized processing flow that enables higher-performance processing, wherein fewer processing cycles per MB are required.
It has been recognized that for the current video standards the processing in each of the four above-mentioned blocks includes at least two steps. E.g. the bit-stream decoding procedure in the entropy module includes decoding non-residual syntax elements (such as MB type, reference index, MV difference) and decoding residual data. The motion compensation stage can have the two steps of computing the prediction value using the MVs and computing the reconstructed value.
According to the invention, the function blocks are controlled in a decentralized manner by their respective predecessor, so that the first block in the processing chain need not wait for the last block to finish its data processing before it starts with new data. Instead, when a particular block has finished processing a data block, e.g. a MB, it passes the result data down to the next block, which turns to these data as soon as it has free processing capacity. This pipelining concept requires pipeline buffers located (at least logically) between and within the function blocks. If data processing in the different function blocks takes different amounts of time, the input buffer of a slower function block will soon be full. In one embodiment, such a function block has the possibility to control its preceding function block, e.g. to slow it down or stop it for a while. In another embodiment, where the processing times per function block are known in advance, the preceding function block is designed to wait for a defined number of cycles.
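The decentralized control with pipeline buffers described above can be sketched as a chain of threads connected by bounded queues; a full buffer makes `put()` block, which automatically slows down or stops the preceding stage, exactly the back-pressure behaviour of the first embodiment. This is an illustrative sketch, not the patented implementation; the stage functions and buffer sizes are made-up placeholders.

```python
import queue
import threading

def stage(inbuf, outbuf, work):
    while True:
        item = inbuf.get()
        if item is None:         # sentinel: propagate end-of-stream and exit
            outbuf.put(None)
            return
        outbuf.put(work(item))   # blocks while the next stage lags behind

def run_pipeline(macroblocks):
    # maxsize=2 lets two MBs be "in flight" per buffer, matching the
    # two-MB parallelism described later in the text.
    b0, b1, b2, out = (queue.Queue(maxsize=2) for _ in range(4))
    threads = [
        threading.Thread(target=stage, args=(b0, b1, lambda mb: mb + "-entropy")),
        threading.Thread(target=stage, args=(b1, b2, lambda mb: mb + "-itiq")),
        threading.Thread(target=stage, args=(b2, out, lambda mb: mb + "-mc")),
    ]
    for t in threads:
        t.start()
    for mb in macroblocks:
        b0.put(mb)               # feed the entropy stage's input buffer
    b0.put(None)
    results = []
    while (item := out.get()) is not None:
        results.append(item)
    for t in threads:
        t.join()
    return results

print(run_pipeline(["MB0", "MB1", "MB2"]))
# ['MB0-entropy-itiq-mc', 'MB1-entropy-itiq-mc', 'MB2-entropy-itiq-mc']
```

A bounded queue implements both embodiments at once: the producer is stopped implicitly by the blocking `put()`, which in hardware corresponds to a wait signal from the successor's full buffer.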
An apparatus for decoding video data organized in data blocks comprises a plurality of processing modules to be passed sequentially by the data, wherein the data output of a first module triggers the start of the further processing of said output data in a subsequent second module, and wherein at least the first and the second module can process two or more of said data blocks at a time and include pipeline buffers for storing the two or more data blocks.
A method for decoding video data organized in data blocks, wherein subsequent processing steps are performed in separate modules on the data blocks, comprises the steps of processing a first data block in a first module, indicating from the first module to a second module that processing of the first data block is finished, detecting in the second module that the processing of the first data block in the first module is finished, transferring the first data block from the first to the second module via a pipeline buffer, and processing the transferred first data block in the second module, wherein at least the first and second modules can process two or more data blocks at a time and include pipeline buffers for storing the two or more data blocks.
Advantageous embodiments of the invention are disclosed in the dependent claims, the following description and the figures.
Brief description of the drawings
Exemplary embodiments of the invention are described with reference to the accompanying drawings, which show in
Fig.l a conventional video data processing flow;
Fig.2 a pipelined video data processing flow; and
Fig.3 detailed steps of the pipelined video data processing flow.
Detailed description of the invention
Fig.2 shows a pipelined video data processing flow according to the invention, which processes picture data stored in an SDRAM (not shown). The currently processed pixel data are copied into a pixel buffer for faster access. Input data are processed in an entropy decoding stage E by first decoding the non-residual data 20a and then decoding the residual data 20b, for which the decoded non-residual data are required. As decoded data are output from the residual data decoding procedure 20b, they are successively passed to the next step 20c of inverse transformation and inverse quantization ITIQ. In this example, the entropy decoding stage E waits for a certain time after it has processed its data 20b and before it starts processing new data 21a. This wait time can be a predefined number of wait cycles, or controlled by a signal from the next block ITIQ indicating that its buffers are full, or caused by a delay of input data or similar.
When the inverse transformation and inverse quantization stage ITIQ provides the first output data, these are passed to the motion compensation stage MC. Motion compensation includes two steps: computing the prediction values 20d and computing the reconstructed value 20e.
For the first step, computing the prediction values 20d only needs the MVs, the reference frame index in the pixel buffer, and the frame address in the picture SDRAM. Therefore this step can start as soon as these data are available. The MVs and the reference frame index are decoded 20a in the bit-stream. The frame address is computed according to the reference frame index. For the second step, computing the reconstructed value 20e needs the prediction value, which is computed by the first step 20d, and the residual data, which are decoded in the bit-stream. Therefore, these values are pipelined from the first step 20d and the decoding 20a as soon as they are ready, and the second step 20e can commence.
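The essence of the second step 20e can be sketched in a few lines: each reconstructed sample is the prediction sample plus the decoded residual, clipped to the valid sample range. This sketch assumes 8-bit samples and plain list-of-lists blocks; the function name and data layout are illustrative, not from the patent.

```python
def reconstruct(prediction, residual, bit_depth=8):
    # Reconstructed sample = clip(prediction + residual) to [0, 2^bit_depth - 1].
    max_val = (1 << bit_depth) - 1
    return [[max(0, min(max_val, p + r)) for p, r in zip(p_row, r_row)]
            for p_row, r_row in zip(prediction, residual)]

pred = [[100, 120], [130, 250]]
res  = [[ -5,  10], [  0,  20]]
print(reconstruct(pred, res))  # [[95, 130], [130, 255]] -- last sample clipped
```

Because `reconstruct` only consumes one prediction/residual pair at a time, it can run as soon as the first such pair arrives from steps 20d and 20a, which is exactly what allows the pipelined start described above.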
According to the invention, the processing of the next MB can begin more or less immediately after processing the data of a current MB. There is no need to wait until the complicated motion compensation processing of the current MB is finished. The pipeline diagram in Fig.2 shows that one advantage of the invention is that the idle time of each processing unit is reduced, and the whole processing is more compact and thus more effective. The average cycle number required for decoding each MB is reduced significantly. As shown in Fig.2, when the conventional amount of required cycles is t1, the optimization of the present invention reduces the amount of required cycles to (t1-t2), where t2 corresponds to the time during which the motion compensation was idle in the conventional system (from the end of step 10e to the beginning of step 11d in Fig.1), assuming that the motion compensation step requires more cycles than any other step.
In the processing according to the invention, the number of required cycles per MB depends on the maximum required number of cycles in any single step, which is usually the motion compensation. The above-mentioned first and second steps 20d, 20e of the motion compensation are interleaved, meaning that the second step begins while the first is not yet finished, and together they take an amount of cycles c20 that is less than the number of cycles for the conventional complete procedure of Fig.1, and that defines the cycle length of the complete process. In particular, the second step begins as soon as the first step has generated enough data for the second step to start processing.
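The cycle saving can be illustrated with simple arithmetic: a sequential decoder needs the sum of all stage cycles for every MB, while an ideally interleaved pipeline, once filled, completes one MB per interval of its slowest stage. The per-stage cycle counts below are made-up values for illustration only.

```python
def sequential_cycles(stage_cycles, n_mbs):
    # Conventional flow (Fig.1): every MB passes all stages before the next starts.
    return sum(stage_cycles) * n_mbs

def pipelined_cycles(stage_cycles, n_mbs):
    # Ideal pipeline: after the first MB fills the pipe, one MB completes
    # every max(stage_cycles) cycles (here: the motion-compensation bottleneck).
    return sum(stage_cycles) + (n_mbs - 1) * max(stage_cycles)

cycles = [300, 150, 500]  # E, ITIQ, MC -- illustrative, not measured values
print(sequential_cycles(cycles, 100))  # 95000
print(pipelined_cycles(cycles, 100))   # 50450
```

The ratio approaches max(stage_cycles) / sum(stage_cycles) for large MB counts, which matches the statement that the cycle count per MB is bounded by the most expensive single step rather than by the whole chain.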
Fig.3 shows the interleaving of the single steps of the decoding procedure for each MB.
First, the entropy module performs entropy decoding. The processing in this module uses MBs of 16x16 pixels. The procedure can be divided into three steps: decoding motion vectors 101, decoding residual data 102, and storing the residual data in a pipeline buffer 103. In fact, the entropy module not only processes the MVs and residual data, but also decodes the other syntax elements in the bit-stream, e.g. MB type, skip flag etc. But since the next module will not need this information immediately, this decoding is not relevant for the pipelining and is therefore not shown in Fig.3.
The Inverse Transformation/Inverse Quantization module ITIQ processes 4x4 pixel sub-blocks. Each MB can be divided into 16 such sub-blocks. The procedure for one process unit can be divided into three steps: getting residual data 201 from the entropy module via a pipeline buffer, computing the ITIQ result 202 by inverse transformation and inverse quantization, and storing the ITIQ result into a pipeline buffer 203. This module can start to work immediately after the entropy module has provided the decoded residual data for a 4x4 pixel sub-block.
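The division of a 16x16 MB into the sixteen 4x4 sub-blocks that form the ITIQ process unit can be sketched as follows. A simple raster-scan sub-block order is assumed here for illustration; the actual scan order is standard-dependent.

```python
def split_into_subblocks(mb, sub=4):
    # mb: a 16x16 matrix of samples -> list of 16 sub x sub sub-blocks,
    # enumerated in raster order (left to right, top to bottom).
    size = len(mb)
    return [[row[x:x + sub] for row in mb[y:y + sub]]
            for y in range(0, size, sub)
            for x in range(0, size, sub)]

# Label each sample with its raster position to make the split visible.
mb = [[y * 16 + x for x in range(16)] for y in range(16)]
subs = split_into_subblocks(mb)
print(len(subs))    # 16
print(subs[0][0])   # [0, 1, 2, 3]      -- top-left sub-block, first row
print(subs[5][0])   # [68, 69, 70, 71]  -- second sub-block row, second column
```

Each 4x4 sub-block is independent for the inverse transform, which is why the ITIQ module can start as soon as the entropy module has delivered the residuals of a single sub-block rather than a whole MB.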
The third module performs motion compensation. The processing in this module is also based on 4x4 pixel sub-blocks. As described above, the procedure can be divided into computing the prediction value according to the MVs, and computing the reconstructed data using the ITIQ result. The first part includes three steps: getting the MVs that were decoded by the entropy module 301, computing the prediction value according to the MVs 302, and storing the prediction value into the MC buffer 303. This part can begin to work immediately after decoding the related MVs. The second part also includes three steps: getting the ITIQ result data 401 from the ITIQ module, getting the prediction value from the MC buffer 402, and computing the reconstructed value 403. This part can begin to work after getting the computation result from the ITIQ module.
The architecture according to the invention can hold two or more MBs to be processed in parallel. If only two MBs in parallel are supported, the buffer for storing MVs and residual data in the related modules needs to store the MVs and residual data for the two MBs. Simultaneous processing of three or more MBs can be supported if additional buffer space is available.
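A buffer holding exactly two MBs in flight can be realized as a ping-pong (double) buffer: one slot is written by the producer while the other is read by the consumer. The class below is an illustrative sketch under that assumption; the slot layout and error handling are not taken from the patent.

```python
class PingPongBuffer:
    """Minimal double buffer holding data (e.g. MVs, residuals) for two MBs."""

    def __init__(self):
        self.slots = [None, None]
        self.write_idx = 0
        self.read_idx = 0

    def write(self, mb_data):
        if self.slots[self.write_idx] is not None:
            # Both slots occupied: the producer stage must wait.
            raise BufferError("buffer full: producer must wait")
        self.slots[self.write_idx] = mb_data
        self.write_idx ^= 1          # flip to the other slot

    def read(self):
        data = self.slots[self.read_idx]
        if data is None:
            raise BufferError("buffer empty: consumer must wait")
        self.slots[self.read_idx] = None   # free the slot for the producer
        self.read_idx ^= 1
        return data

buf = PingPongBuffer()
buf.write({"mb": 0, "mvs": [(1, 0)]})
buf.write({"mb": 1, "mvs": [(0, 2)]})   # two MBs in flight simultaneously
print(buf.read()["mb"])  # 0
print(buf.read()["mb"])  # 1
```

Supporting three or more MBs, as the text notes, amounts to replacing the two slots with a larger circular buffer at the cost of additional storage.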
An advantage of the invention is that the idle time of processing blocks is reduced. This leads to an improved efficiency, namely either less power consumption with a similar performance, or increased performance with comparable power consumption.
The invention is advantageously used for video decoding products, particularly for HD-resolution decoders that are implemented in a modular fashion, in hardware or software, such as e.g. multi-standard decoders for H.264, VC-1, MPEG-2, AVS etc.
Claims
1. Device for decoding video data organized in data blocks, the device comprising a plurality of processing modules (E, ITIQ, MC) to be passed sequentially by the data, wherein data output of a first module (101) triggers the start of the further processing of said data in a subsequent second module (301,302), and wherein at least the first and the second module process two or more of said data blocks at a time and include pipeline buffers for storing the two or more data blocks.
2. Device according to claim 1, wherein the first module (101) outputs said data for triggering as soon as enough data are available for the further processing in the second module.
3. Device according to claim 1 or 2, wherein the second module (301,302) has means for detecting that it has free processing capacity, and upon said detecting starts processing the transmitted data.
4. Device according to one of the claims 1-3, wherein external pipeline buffers are located between the first and second modules, and internal pipeline buffers are located within the first and second modules .
5. Device according to claim 4, wherein at least the internal pipeline buffers store motion vectors and residual data for two or more data blocks.
6. Device according to one of the claims 1-5, wherein the second module has means for slowing down or stopping the first module.
7. Device according to one of the claims 1-6, wherein the first module has means for receiving a wait signal from the second module and upon receiving said wait signal waits before processing a next data block.
8. Device according to one of the claims 1-7, wherein the plurality of processing modules (E, ITIQ, MC) include a module for entropy decoding, a module for inverse transformation and inverse quantization, and a module for motion compensation.
9. Device according to one of the claims 1-8, wherein the first module and the second module process data blocks of different size.
10. Method for decoding video data organized in data blocks, wherein subsequent processing steps
(20a,...,20e) are performed in separate modules (E, ITIQ, MC) on the data blocks, the method comprising the steps of processing (101) a first data block in a first module (E); indicating from the first module (E) to a second module (MC) that processing of the first data block is finished; detecting (301) in the second module (MC) that the processing of the first data block in the first module is finished; transferring (301) the first data block from the first to the second module via a pipeline buffer; and
- processing (302) the transferred first data block in the second module, wherein at least the first and second modules process two or more data blocks at a time and include pipeline buffers for storing the two or more data blocks.
11. Method according to the previous claim, further comprising the step of processing a second data block in the first module, wherein the first step (21a) of processing the second data block in the first module is executed before the last step (20e) of the processing of the first data block is finished.
12. Method according to claim 10 or 11, wherein the second module performs motion compensation including a first step of computing the prediction value (20d) and a second step of computing the reconstructed value (20e), wherein the second step begins before the first is finished for a macroblock.
13. Method according to one of the claims 10-12, wherein the first module performs entropy decoding and obtains (20a) motion vectors and reference frame index and wherein a frame address is computed according to the reference frame index, and the second module performs computing prediction values (20d) for motion compensation, wherein computing the prediction values (20d) starts as soon as motion vectors, the reference frame index and the frame address are transferred to the second module.
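The pipelined transfer described in claims 7, 10 and 11 can be sketched with a minimal, hypothetical model. The stage functions, queue capacities and block contents below are illustrative assumptions, not taken from the patent: bounded queues stand in for the pipeline buffers of claim 10 (each storing two data blocks), and a full queue blocking the upstream stage models the wait signal of claim 7.

```python
# Illustrative model of the claimed decoding pipeline: three stages stand
# in for entropy decoding (E), inverse transform/quantization (ITIQ) and
# motion compensation (MC), connected by bounded pipeline buffers.
import queue
import threading

SENTINEL = None  # end-of-stream marker

def stage(fn, inbox, outbox):
    """Process one data block at a time from the upstream pipeline buffer."""
    while True:
        block = inbox.get()
        if block is SENTINEL:
            outbox.put(SENTINEL)
            return
        # outbox.put() blocks while the downstream buffer is full:
        # the stage waits before processing the next data block (claim 7).
        outbox.put(fn(block))

def run_pipeline(blocks):
    # Pipeline buffers between the stages, each holding two data blocks.
    q_in, q_e, q_itiq, q_out = (queue.Queue(maxsize=2) for _ in range(4))
    fns = [
        lambda b: b + ["E"],     # stand-in for entropy decoding
        lambda b: b + ["ITIQ"],  # stand-in for inverse transform/quantization
        lambda b: b + ["MC"],    # stand-in for motion compensation
    ]
    threads = [threading.Thread(target=stage, args=a)
               for a in zip(fns, [q_in, q_e, q_itiq], [q_e, q_itiq, q_out])]
    for t in threads:
        t.start()
    for b in blocks:        # feeding blocks while earlier ones are still
        q_in.put(b)         # in later stages gives the overlap of claim 11
    q_in.put(SENTINEL)
    decoded = []
    while (b := q_out.get()) is not SENTINEL:
        decoded.append(b)
    for t in threads:
        t.join()
    return decoded

decoded = run_pipeline([[f"blk{i}"] for i in range(4)])
print(decoded[0])  # ['blk0', 'E', 'ITIQ', 'MC']
```

Because each stage runs in its own thread and only synchronizes through the bounded buffers, a second block enters stage E while the first is still in ITIQ or MC, which is the overlap claimed in claim 11.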
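The early start of motion compensation in claim 13 can likewise be sketched. In this hypothetical model (the frame-address map, queue names and payloads are illustrative assumptions), the entropy module forwards the motion vector, reference frame index and the frame address computed from that index as soon as they are decoded, so the motion-compensation module starts computing the prediction value (20d) before the residual coefficients arrive:

```python
# Illustrative model of claim 13: prediction parameters are transferred
# ahead of the residual data for the same block.
import queue
import threading

params_q = queue.Queue()    # mv / ref_idx / frame address, transferred first
residual_q = queue.Queue()  # residual coefficients, transferred later

FRAME_ADDR = {0: 0x1000, 1: 0x2000}  # hypothetical ref-index -> address map
prediction_started = threading.Event()

def mc_module(trace):
    params = params_q.get()             # available before the residuals
    trace.append("prediction started")  # step 20d can begin here
    prediction_started.set()
    residual = residual_q.get()
    trace.append("reconstruction")      # step 20e adds the residual

trace = []
worker = threading.Thread(target=mc_module, args=(trace,))
worker.start()
# Entropy module: motion vector and reference frame index are decoded
# first (step 20a); the frame address is computed from the reference
# frame index and forwarded immediately.
params_q.put({"mv": (1, -2), "ref_idx": 0, "addr": FRAME_ADDR[0]})
prediction_started.wait()  # residuals are only sent after MC has started
trace.append("residuals sent")
residual_q.put([3, 0, -1])
worker.join()
print(trace)  # ['prediction started', 'residuals sent', 'reconstruction']
```

The event trace shows prediction beginning strictly before the residuals for the same block are transferred, which is the point of claim 13.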
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2006/002517 | 2006-09-25 | 2006-09-25 | Apparatus and method for processing video data |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2006/002517 | 2006-09-25 | 2006-09-25 | Apparatus and method for processing video data |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2008037112A1 (en) | 2008-04-03 |
Family
ID=39229694
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2006/002517 | Apparatus and method for processing video data | 2006-09-25 | 2006-09-25 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2008037112A1 (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2003179923A (en) * | 2001-12-12 | 2003-06-27 | Nec Corp | Decoding system for dynamic image compression coded signal and method for decoding, and program for decoding |
| EP1351512A2 (en) * | 2002-04-01 | 2003-10-08 | Broadcom Corporation | Video decoding system supporting multiple standards |
| EP1475972A2 (en) * | 2003-05-08 | 2004-11-10 | Matsushita Electric Industrial Co., Ltd. | Apparatus and method for moving picture decoding device with parallel processing |
2006
- 2006-09-25 WO PCT/CN2006/002517 patent/WO2008037112A1/en active Application Filing
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US7953284B2 (en) | | Selective information handling for video processing |
| CA2682449C (en) | | Intra-macroblock video processing |
| CA2682436C (en) | | Parallel or pipelined macroblock processing |
| US8005147B2 (en) | | Method of operating a video decoding system |
| CN100562114C | | Video decoding method and decoding device |
| US7912302B2 (en) | | Multiprocessor decoder system and method |
| KR101279507B1 (en) | | Pipelined decoding apparatus and method based on parallel processing |
| EP1673942A1 (en) | | Method and apparatus for processing image data |
| CN1794814A | | Pipelined deblocking filter |
| US8577165B2 (en) | | Method and apparatus for bandwidth-reduced image encoding and decoding |
| EP1689187A1 (en) | | Method and system for video compression and decompression (CODEC) in a microprocessor |
| WO2008037113A1 (en) | | Apparatus and method for processing video data |
| JP2007259323A (en) | | Image decoding device |
| WO2007117722A2 (en) | | Memory organizational scheme and controller architecture for image and video processing |
| US10244248B2 (en) | | Residual processing circuit using single-path pipeline or multi-path pipeline and associated residual processing method |
| CN106686380B (en) | | Enhanced data processing apparatus and method of operation using multi-block-based pipeline |
| WO2008037112A1 (en) | | Apparatus and method for processing video data |
| Wang et al. | | High definition IEEE AVS decoder on ARM NEON platform |
| CN102238385A | | Encoder and/or vertical and/or horizontal cache device of decoder and method |
| EP1351513A2 (en) | | Method of operating a video decoding system |
| US20090201989A1 | | Systems and Methods to Optimize Entropy Decoding |
| US8284836B2 (en) | | Motion compensation method and apparatus to perform parallel processing on macroblocks in a video decoding system |
| Li et al. | | A decoder architecture for advanced video coding standard |
| Ling et al. | | A real-time HDTV video decoder |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 06791106; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 06791106; Country of ref document: EP; Kind code of ref document: A1 |