
WO2012153538A1 - Methods for encoding and decoding video using an adaptive filtering process - Google Patents

Methods for encoding and decoding video using an adaptive filtering process

Info

Publication number
WO2012153538A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
filter parameters
unit
video
filtering process
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2012/003091
Other languages
French (fr)
Inventor
Chong Soon Lim
Viktor Wahadaniah
Sue Mon Thet Naing
Jin Li
Hai Wei Sun
Takahiro Nishi
Youji Shibahara
Hisao Sasai
Toshiyasu Sugio
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Corp
Original Assignee
Panasonic Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Corp filed Critical Panasonic Corp
Publication of WO2012153538A1
Current legal status: Ceased

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
                    • H04N 19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
                    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
                        • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
                            • H04N 19/136 Incoming video signal characteristics or properties
                                • H04N 19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
                            • H04N 19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
                                • H04N 19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
                        • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
                            • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
                                • H04N 19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
                    • H04N 19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
                    • H04N 19/46 Embedding additional information in the video signal during the compression process
                    • H04N 19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation

Definitions

  • This invention can be used in any multimedia data coding and, more particularly, in image and video coding supporting either block edge or pixel de-noising filtering on reconstructed samples.
  • a quantization process is performed in the frequency domain to reduce the coded bits of the video.
  • the quantization process causes visual artifacts such as blocky noise and ringing noise in the reconstructed images; spatial filtering processes can be used to reduce this noise and improve visual quality.
  • quantization matrixes are supported where different frequency components may have different quantizers.
  • different block transform sizes may support different quantization matrixes.
  • the latest video coding schemes, such as the upcoming HEVC (High-Efficiency Video Coding), support two filtering processes: one to reduce blocky artifacts at block edges and one to reduce the noise in reconstructed image samples. Both filtering processes are customizable by signaling in a coded stream, and the strength of the filtering can be adaptively changed between block units within the same picture.
  • block edge filtering process is adapted within a picture by the quantization parameter and the spatial activity on both sides of a block edge.
  • the pixel filtering process is adapted on a block basis using the spatial activities of the pixels in the same block.
  • the filtering processes in the prior art do not adapt within the same picture based on the quantization matrixes used. Nor do they adapt to the variable block transform sizes or to the block prediction mode used for the reconstruction of the image. Because of this lack of adaptability, the filtering process is not optimized to minimize the noise created by a quantization process that uses different quantization matrixes.
  • the new methods allow the filtering processes (including both the de-blocking and the pixel de-noising filtering) to be adapted on a block-by-block basis based on the block transform size, the block prediction mode, or the quantization matrix selection.
  • the filtering process can be adapted by selecting one set of filter parameters from a plurality of sets of filter parameters using the block transform size selection, the block prediction mode selection, or the quantization matrix selection.
  • the effect of the current invention is an improvement in picture quality obtained by allowing more adaptability in the filtering processes; a minimal selection sketch is given below.
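  • The following is a minimal, non-normative sketch (in Python; the parameter names and values are illustrative assumptions, not taken from this publication) of the pre-defined association described above: each transform unit size, block prediction mode, or quantization matrix index is mapped to one of the sets of filter parameters signaled in the header, and one set is selected per block.

```python
from dataclasses import dataclass

@dataclass
class FilterParams:
    beta_offset: int   # offset applied to the edge-filter decision threshold
    tc_offset: int     # offset applied to the maximum clipping value
    coeffs: tuple      # coefficients for the pixel (de-noising) filter

# Two sets written once into the image header: a strong and a weak configuration.
PARAM_SETS = {
    "strong": FilterParams(beta_offset=6, tc_offset=6, coeffs=(1, 2, 10, 2, 1)),
    "weak":   FilterParams(beta_offset=-2, tc_offset=-2, coeffs=(0, 1, 14, 1, 0)),
}

# Pre-defined associations; any one of the three criteria may drive the choice.
BY_TRANSFORM_SIZE = {4: "strong", 8: "strong", 16: "weak", 32: "weak"}
BY_PREDICTION_MODE = {"intra": "strong", "inter": "weak"}
BY_QMATRIX_INDEX = {0: "weak", 1: "strong"}   # e.g. index 1 = steep sloped matrix

def select_filter_params(tu_size=None, pred_mode=None, qmatrix_idx=None):
    """Select one set of filter parameters per block from the signaled plurality of sets."""
    if tu_size is not None:
        return PARAM_SETS[BY_TRANSFORM_SIZE[tu_size]]
    if pred_mode is not None:
        return PARAM_SETS[BY_PREDICTION_MODE[pred_mode]]
    return PARAM_SETS[BY_QMATRIX_INDEX[qmatrix_idx]]

print(select_filter_params(tu_size=4))          # strong filtering for a 4x4 TU
print(select_filter_params(pred_mode="inter"))  # weak filtering for an inter block
```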
  • FIG. 1 is a flowchart showing an encoding process in the first aspect of the current invention.
  • FIG. 2 is a flowchart showing a decoding process in the first aspect of the current invention.
  • FIG. 3 is a flowchart showing an encoding process in the second aspect of the current invention.
  • FIG. 4 is a flowchart showing a decoding process in the second aspect of the current invention.
  • FIG. 5 is a flowchart showing an encoding process in the third aspect of the current invention.
  • FIG. 6 is a flowchart showing a decoding process in the third aspect of the current invention.
  • FIG. 7 is a block diagram illustrating an example apparatus for a video encoder of current invention.
  • FIG. 8 is a block diagram illustrating an example apparatus for a video decoder of current invention.
  • FIG. 9A is a diagram showing the locations of a plurality of sets of deblocking filter control parameters in a header of an image.
  • FIG. 9B is a diagram showing the locations of a plurality of sets of deblocking filter control parameters in a header of an image.
  • FIG. 9C is a diagram showing the locations of a plurality of sets of deblocking filter control parameters in a header of an image.
  • FIG. 10A is a diagram showing the locations of a plurality of sets of filter coefficients in a header of an image.
  • FIG. 10B is a diagram showing the locations of a plurality of sets of filter coefficients in a header of an image.
  • FIG. 10C is a diagram showing the locations of a plurality of sets of filter coefficients in a header of an image.
  • FIG. 11 is a flowchart showing an example encoding process in the first aspect of the current invention.
  • FIG. 12 is a flowchart showing an example encoding process in the second aspect of the current invention.
  • FIG. 13 is a flowchart showing an example encoding process in the third aspect of the current invention.
  • FIG. 14 shows an overall configuration of a content providing system for implementing content distribution services.
  • FIG. 15 shows an overall configuration of a digital broadcasting system.
  • FIG. 16 shows a block diagram illustrating an example of a configuration of a television.
  • FIG. 17 shows a block diagram illustrating an example of a configuration of an information reproducing/recording unit that reads and writes information from and on a recording medium that is an optical disk.
  • FIG. 18 shows an example of a configuration of a recording medium that is an optical disk.
  • FIG. 19A shows an example of a cellular phone.
  • FIG. 19B is a block diagram showing an example of a configuration of a cellular phone.
  • FIG. 20 illustrates a structure of multiplexed data.
  • FIG. 21 schematically shows how each stream is multiplexed in multiplexed data.
  • FIG. 22 shows how a video stream is stored in a stream of PES packets in more detail.
  • FIG. 23 shows a structure of TS packets and source packets in the multiplexed data.
  • FIG. 24 shows a data structure of a PMT.
  • FIG. 25 shows an internal structure of multiplexed data information.
  • FIG. 26 shows an internal structure of stream attribute information.
  • FIG. 27 shows steps for identifying video data.
  • FIG. 28 shows an example of a configuration of an integrated circuit for implementing the moving picture coding method and the moving picture decoding method according to each of embodiments.
  • FIG. 29 shows a configuration for switching between driving frequencies.
  • FIG. 30 shows steps for identifying video data and switching between driving frequencies.
  • FIG. 31 shows an example of a look-up table in which video data standards are associated with driving frequencies.
  • FIG. 32A is a diagram showing an example of a configuration for sharing a module of a signal processing unit.
  • FIG. 32B is a diagram showing another example of a configuration for sharing a module of the signal processing unit.
  • FIG. 1 is a flowchart showing an encoding process in the first aspect of the current invention.
  • a transform unit size is selected from a plurality of pre-determined transform unit sizes. Examples of transform unit sizes include 4x4, 8x8, 16x16 and 32x32.
  • a transform process is performed on a block of residuals to produce a block of transform coefficients based on the selected transform size.
  • the block of residuals is computed by subtracting a block of prediction values from a block of image samples.
  • the block of transform coefficients is coded into a compressed video stream.
  • the process to code the block of transform coefficients includes a quantization process where the output of that process is a block of quantized transform coefficients.
  • in module 106, a de-quantization process is performed on the block of quantized transform coefficients to produce a block of reconstructed transform coefficients.
  • the block of reconstructed transform coefficients is then inverse-transformed into a block of reconstructed residuals using the selected transform unit size in module 108.
  • in module 110, a block of image samples is reconstructed by summing the block of reconstructed residuals with a block of prediction samples.
  • a plurality of sets of filter parameters are written into a header of the compressed image.
  • the sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering.
  • one set of filter parameters is selected from the plurality of sets of filter parameters based on the selected transform unit size.
  • Each transform unit size will be associated with at least one set of filter parameters based on a pre-defined association, where one of the transform unit sizes will be associated with the set of filter parameters that defines a strong filter and one of the transform unit sizes will be associated with the set of filter parameters that defines a weak filter.
  • an alternate process to select one set of filter parameters from the plurality of sets of filter parameters is based on a combination of the selected transform unit size and the block prediction mode.
  • a filtering process is applied on the block of reconstructed image samples using the selected set of filter parameters.
  • a set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process.
  • a set of filter parameters can include filter coefficient values to control the pixel filtering process.
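  • As a hedged illustration of such edge-filter control (a simplified sketch in Python, not the deblocking filter of any standard; the names "threshold" and "clip" and all values are assumptions), the following code filters one row of samples straddling a block edge using a decision threshold and a maximum clipping value taken from the set selected by the transform unit size:

```python
def edge_filter(row, params):
    """row = [p3, p2, p1, p0, q0, q1, q2, q3], samples straddling a block edge."""
    p1, p0, q0, q1 = row[2], row[3], row[4], row[5]
    # Decision threshold: only filter edges whose step looks like a blocking artifact.
    if abs(p0 - q0) >= params["threshold"]:
        return row
    delta = ((q0 - p0) * 4 + (p1 - q1) + 4) >> 3
    delta = max(-params["clip"], min(params["clip"], delta))  # maximum clipping value
    out = list(row)
    out[3] = min(255, max(0, p0 + delta))
    out[4] = min(255, max(0, q0 - delta))
    return out

STRONG = {"threshold": 32, "clip": 6}
WEAK = {"threshold": 8, "clip": 2}

def params_for_tu(tu_size):
    # Pre-defined association: smaller transform unit sizes use the strong filter.
    return STRONG if tu_size <= 8 else WEAK

row = [100, 100, 100, 100, 120, 120, 120, 120]   # a visible step at the block edge
print(edge_filter(row, params_for_tu(4)))    # smoothed with the strong set
print(edge_filter(row, params_for_tu(32)))   # left unchanged by the weak set
```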
  • FIG. 11 is a flowchart showing an example encoding process in the first aspect of the current invention.
  • a smaller transform unit size is selected from a plurality of pre-determined transform unit sizes for a block of image samples containing an edge.
  • Examples of transform unit sizes include 4x4, 8x8, 16x16 and 32x32, where a smaller transform unit size may be 4x4.
  • a transform process is performed on a block of residuals to produce a block of transform coefficients based on the selected smaller transform size.
  • the block of residuals is computed by subtracting a block of prediction values from a block of image samples.
  • the block of transform coefficients is coded into a compressed video stream.
  • the process to code the block of transform coefficients includes a quantization process where the output of that process is a block of quantized transform coefficients.
  • in module 1106, a de-quantization process is performed on the block of quantized transform coefficients to produce a block of reconstructed transform coefficients.
  • the block of reconstructed transform coefficients is then inverse-transformed into a block of reconstructed residuals using the selected smaller transform unit size in module 1108.
  • in module 1110, a block of image samples is reconstructed by summing the block of reconstructed residuals with a block of prediction samples.
  • a plurality of sets of filter parameters are written into a header of the compressed image.
  • the sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering.
  • the set of filter parameters that defines a strong filter is selected from the plurality of sets of filter parameters based on the selected smaller transform unit size.
  • the smaller transform unit size will be associated with the strong filter.
  • an alternate process to select one set of filter parameters from the plurality of sets of filter parameters is based on a combination of the selected transform unit size and the block prediction mode.
  • a filtering process is applied on the block of reconstructed image samples using the selected set of filter parameters.
  • a set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process.
  • a set of filter parameters can include filter coefficient values to control the pixel filtering process.
  • FIG. 2 is a flowchart showing a decoding process in the first aspect of the current invention.
  • in module 200, parameters are parsed from a coded header of a block unit to determine the size of a transform. Examples of these parameters include flags to sub-divide or split from a larger size.
  • in module 202, a transform unit size is selected from a plurality of pre-determined transform unit sizes based on said parsed transform size parameters.
  • in module 204, a block of quantized transform coefficients is decoded from a compressed video stream. An inverse quantization process is then performed on the decoded block of quantized transform coefficients in module 206 to produce a block of reconstructed transform coefficients.
  • the block of reconstructed transform coefficients is then inverse-transformed into a block of reconstructed residuals using the selected transform unit size in module 208. And in module 210, a block of image samples is reconstructed by summing the block of reconstructed residuals with a block of prediction samples.
  • a plurality of sets of filter parameters are parsed from a header of the compressed image.
  • the sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering.
  • one set of filter parameters is selected from the plurality of sets of filter parameters based on the selected transform unit size.
  • Each transform unit size will be associated with at least one set of filter parameters based on a pre-defined association, where one of the transform unit sizes will be associated with the set of filter parameters that defines a strong filter and one of the transform unit sizes will be associated with the set of filter parameters that defines a weak filter.
  • an alternate process to select one set of filter parameters from the plurality of sets of filter parameters is based on a combination of the selected transform unit size and the block prediction mode.
  • a filtering process is applied on the block of reconstructed image samples using the selected set of filter parameters.
  • a set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process.
  • a set of filter parameters can include filter coefficient values to control the pixel filtering process.
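  • A hedged sketch of this decoder-side selection follows (Python; the split-flag convention and all parameter values are illustrative assumptions): the transform unit size is recovered from parsed split flags and then used to pick one of the parameter sets parsed from the image header.

```python
def parse_tu_size(split_flags, max_size=32, min_size=4):
    """Start from the largest pre-determined size and halve once per asserted split flag."""
    size = max_size
    for flag in split_flags:
        if flag == 0 or size == min_size:
            break
        size //= 2
    return size

def select_params(parsed_sets, tu_size):
    # parsed_sets holds the plurality of sets parsed from the compressed image header.
    return parsed_sets["strong"] if tu_size <= 8 else parsed_sets["weak"]

header_sets = {"strong": {"threshold": 32, "clip": 6},
               "weak":   {"threshold": 8, "clip": 2}}
tu_size = parse_tu_size([1, 1, 1])            # 32 -> 16 -> 8 -> 4
print(tu_size, select_params(header_sets, tu_size))
```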
  • FIG. 3 shows a flowchart describing an encoding process in the second aspect of the current invention.
  • a prediction mode is selected in module 300.
  • the prediction mode can be an intra prediction mode or an inter prediction mode.
  • a prediction process is performed to produce a block of prediction samples based on the selected prediction mode.
  • a block of residual samples is created in module 304 by subtracting the block of prediction samples from a block of image samples.
  • the block of residual samples is then encoded into a compressed image utilizing a transform process and a quantization process in module 306.
  • the block of residual samples is reconstructed utilizing an inverse transform process and an inverse quantization process, and a block of image samples is reconstructed by summing the reconstructed block of residuals and the block of prediction samples in module 310.
  • a plurality of sets of filter parameters are written into a header of the compressed image.
  • the sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering.
  • one set of filter parameters is selected from the plurality of sets of filter parameters based on the selected prediction mode.
  • Each prediction mode (Intra or Inter) will be associated with at least one set of filter parameters based on a pre-defined association, where one of the prediction modes will be associated with the set of filter parameters that defines a strong filter and one of the prediction modes will be associated with the set of filter parameters that defines a weak filter.
  • a filtering process is applied on the block of reconstructed image samples using the selected set of filter parameters.
  • a set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process.
  • a set of filter parameters can include filter coefficient values to control the pixel filtering process.
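  • The following is a simplified sketch (Python; the 3-tap kernel and coefficient values are assumptions used only for illustration) of a pixel de-noising filter whose coefficients come from the set selected by the block prediction mode, with the intra mode mapped to the stronger kernel:

```python
def pixel_filter(samples, coeffs):
    """Apply a normalized 3-tap smoothing filter to the interior samples of a row."""
    total = sum(coeffs)
    out = list(samples)
    for i in range(1, len(samples) - 1):
        acc = (coeffs[0] * samples[i - 1] +
               coeffs[1] * samples[i] +
               coeffs[2] * samples[i + 1])
        out[i] = (acc + total // 2) // total   # rounded normalization
    return out

# Pre-defined association: intra-predicted blocks get the stronger smoothing kernel,
# inter-predicted blocks get a kernel that leaves the samples essentially unchanged.
COEFF_SETS = {"intra": (1, 2, 1), "inter": (0, 4, 0)}

block_row = [90, 130, 88, 132, 90, 130]
print(pixel_filter(block_row, COEFF_SETS["intra"]))  # smoothed
print(pixel_filter(block_row, COEFF_SETS["inter"]))  # essentially unchanged
```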
  • FIG. 12 shows a flowchart describing an example encoding process in the second aspect of the current invention.
  • an intra prediction mode is selected from a plurality of prediction modes in module 1200.
  • the intra prediction mode is selected based on a lower cost measurement relative to other prediction modes.
  • an intra prediction process is performed to produce a block of prediction samples based on the selected intra prediction mode.
  • a block of residual samples is created in module 1204 by subtracting the block of intra prediction samples from a block of image samples.
  • the block of residual samples is then encoded into a compressed image utilizing a transform process and a quantization process in module 1206.
  • the block of residual samples is reconstructed utilizing an inverse transform process and an inverse quantization process and a block of image samples is reconstructed by summing the reconstructed block of residuals and the block of intra prediction samples in module 1210.
  • a plurality of sets of filter parameters are written into a header of the compressed image.
  • the sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering.
  • the set of filter parameters that defines a strong filter is selected from the plurality of sets of filter parameters based on the selected intra prediction mode.
  • the intra prediction mode is associated with the set of filter parameters that defines a strong filter and the inter prediction mode is associated with the set of filter parameters that defines a weak filter.
  • a filtering process is applied on the block of reconstructed image samples using the selected set of strong filter parameters.
  • a set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process.
  • a set of filter parameters can include filter coefficient values to control the pixel filtering process.
  • FIG. 4 is a flowchart showing a decoding process in the second aspect of the current invention.
  • a parameter is parsed from a coded header of a block unit to determine the block prediction mode. The parsed parameter indicates whether the block unit is intra predicted or inter predicted.
  • a prediction process is performed to produce a block of prediction samples based on said parsed block prediction mode parameter.
  • a block of residuals is then decoded from a compressed image utilizing an inverse transform process and an inverse quantization process.
  • a block of image samples is reconstructed by summing the block of decoded residuals and the block of prediction samples in module 406.
  • a plurality of sets of filter parameters are parsed from a header of the compressed image.
  • the sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering.
  • one set of filter parameters is selected from the plurality of sets of filter parameters based on the parsed prediction mode.
  • Each prediction mode will be associated with at least one set of filter parameters based on a pre-defined association where one of the prediction modes will be associated with the set of filter parameters that defines a strong filter and one of the prediction modes will be associated with the set of filter parameters that defines a weak filter.
  • a filtering process is applied on the block of reconstructed image samples using the selected set of filter parameters.
  • a set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process.
  • a set of filter parameters can include filter coefficient values to control the pixel filtering process.
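  • A hedged decoder-side sketch follows (Python; the names and the combined rule shown are illustrative assumptions): the parsed prediction-mode parameter, optionally combined with the transform unit size as in the alternate selection process described for the first aspect, indexes into the parameter sets parsed from the image header.

```python
def select_set(parsed_sets, pred_mode, tu_size=None):
    """pred_mode is 'intra' or 'inter'; tu_size optionally refines the choice."""
    if tu_size is None:
        key = "strong" if pred_mode == "intra" else "weak"
    else:
        # Combined criterion: only small intra-predicted blocks take the strong filter.
        key = "strong" if (pred_mode == "intra" and tu_size <= 8) else "weak"
    return parsed_sets[key]

parsed_sets = {"strong": {"threshold": 32, "clip": 6},
               "weak":   {"threshold": 8, "clip": 2}}
print(select_set(parsed_sets, "intra"))              # strong set
print(select_set(parsed_sets, "intra", tu_size=32))  # weak set under the combined rule
```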
  • FIG. 5 shows a flowchart describing an encoding process in the third aspect of the current invention.
  • a quantization matrix is selected from a plurality of quantization matrixes. For each transform unit size, more than one quantization matrix can be utilized to quantize coefficients of a transform unit within the same picture.
  • a block of image samples is encoded into a compressed image utilizing a quantization process using the selected quantization matrix.
  • a parameter is written into a header of the coded block of image samples to indicate which quantization matrix was selected to be used for the quantization process.
  • the block of image samples is then decoded utilizing an inverse quantization process using the selected quantization matrix.
  • a plurality of sets of filter parameters are written into a header of the compressed image.
  • the sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering.
  • one set of filter parameters is selected from the plurality of sets of filter parameters based on the selection parameter for the quantization matrix.
  • Each quantization matrix will be associated with at least one set of filter parameters based on a pre-defined association where one of the quantization matrixes will be associated with the set of filter parameters that defines a strong filter and one of the quantization matrixes will be associated with the set of filter parameters that defines a weak filter.
  • a filtering process is applied on the block of reconstructed image samples using the selected set of filter parameters.
  • a set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process.
  • a set of filter parameters can include filter coefficient values to control the pixel filtering process.
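  • The following is a simplified sketch (Python; the matrix values, the flat scalar rounding and the association table are assumptions) of the third aspect: the encoder quantizes a block with a selected quantization matrix, writes the matrix index into the block header, and the same index later selects the filter parameter set.

```python
GENTLE = [[16, 16, 16, 16]] * 4                  # matrix index 0: gentle sloped
STEEP = [[16, 24, 32, 48],                       # matrix index 1: steep sloped
         [24, 32, 48, 64],
         [32, 48, 64, 80],
         [48, 64, 80, 96]]
MATRICES = [GENTLE, STEEP]
FILTER_FOR_MATRIX = ["weak", "strong"]           # pre-defined association

def quantize(coeffs, matrix):
    return [[(c + m // 2) // m for c, m in zip(row_c, row_m)]
            for row_c, row_m in zip(coeffs, matrix)]

def dequantize(levels, matrix):
    return [[l * m for l, m in zip(row_l, row_m)]
            for row_l, row_m in zip(levels, matrix)]

coeffs = [[200, 90, 40, 10]] * 4                 # a 4x4 block of transform coefficients
matrix_idx = 1                                   # encoder selects the steep sloped matrix
levels = quantize(coeffs, MATRICES[matrix_idx])  # coded into the compressed image
recon = dequantize(levels, MATRICES[matrix_idx])
# The same index is written into the block header and later drives the filter choice.
print(FILTER_FOR_MATRIX[matrix_idx], recon[0])
```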
  • FIG. 13 shows a flowchart describing an example encoding process in the third aspect of the current invention.
  • a steep sloped quantization matrix is selected from a plurality of quantization matrixes inclusive of a gentle sloped quantization matrix. For each transform unit size, more than one quantization matrix can be utilized to quantize coefficients of a transform unit within the same picture.
  • a block of image samples is encoded into a compressed image utilizing a quantization process using the selected steep sloped quantization matrix.
  • a parameter is written into a header of the coded block of image samples to indicate the steep sloped quantization matrix was selected to be used for the quantization process.
  • the block of image samples is then decoded utilizing an inverse quantization process using the selected steep sloped quantization matrix.
  • a plurality of sets of filter parameters are written into a header of the compressed image.
  • the sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering.
  • the set of filter parameters that defines the strong filter is selected from the plurality of sets of filter parameters based on the selection parameter for the steep sloped quantization matrix.
  • the steep sloped quantization matrix is associated with the set of filter parameters that defines a strong filter and the gentle sloped quantization matrix is associated with the set of filter parameters that defines a weak filter.
  • a filtering process is applied on the block of reconstructed image samples using the selected set of strong filter parameters.
  • a set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process.
  • a set of filter parameters can include filter coefficient values to control the pixel filtering process.
  • FIG. 6 is a flowchart showing a decoding process in the third aspect of the current invention.
  • a parameter is parsed from a coded header of a block unit to determine a selection parameter for quantization matrix.
  • a quantization matrix is selected from a plurality of quantization matrixes based on said parsed selection parameter.
  • the plurality of quantization matrixes can be signaled in a header of a coded image or pre-defined.
  • a block of image samples is then decoded utilizing an inverse quantization process using the selected quantization matrix.
  • a plurality of sets of filter parameters are parsed from a header of the compressed image.
  • the sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering.
  • one set of filter parameters is selected from the plurality of sets of filter parameters based on the selected quantization matrix.
  • Each quantization matrix will be associated with at least one set of filter parameters based on a pre-defined association where one of the quantization matrixes will be associated with the set of filter parameters that defines a strong filter and one of the quantization matrixes will be associated with the set of filter parameters that defines a weak filter.
  • a filtering process is applied on the block of reconstructed image samples using the selected set of filter parameters.
  • a set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process.
  • a set of filter parameters can include filter coefficient values to control the pixel filtering process.
  • FIG. 7 shows an example apparatus for a video encoder of the current invention. As shown in FIG. 7, it comprises a subtraction unit 700, a transform unit 702, a quantization unit 704, an entropy coding unit 706, an inverse quantization unit 708, an inverse transform unit 710, an adder unit 712, a filtering unit 714, two memory units 716 and 728, a motion estimation unit 718, a motion compensation unit 720, a selector unit 722, an intra prediction unit 724 and an intra prediction mode selection unit 726.
  • the intra prediction mode selection unit 726 reads a block of original samples D701 and reconstructed samples D725 from a memory unit 728 to output an intra prediction mode D731.
  • the intra prediction unit 724 reads the intra prediction mode D731 and the reconstructed samples D727 and outputs a block of intra prediction samples D729.
  • the motion estimation unit 718 reads the block of original samples D701 and a reconstructed frame D719 stored in the memory unit 716 and outputs motion prediction parameters D721.
  • the motion compensation unit 720 reads the motion prediction parameters D721 and the reconstructed frame D719 and outputs a block of motion predicted samples D723.
  • the selector unit 722 selects either the block of intra predicted samples D729 or the block of motion predicted samples D723 and outputs a block of prediction samples D731 to the subtraction unit 700.
  • the subtraction unit 700 reads the block of original samples D701 and the block of prediction samples D731 and outputs a block of residual samples D703.
  • the transform unit 702 reads the block of residual samples D703 and outputs a block of transform coefficients D705.
  • the quantization unit 704 reads the block of transform coefficients D705 and outputs the quantized coefficients D707 to the entropy coding unit 706 which outputs the compressed video.
  • the inverse quantization unit 708 reads the quantized transform coefficients D707 and outputs the inverse quantized transform coefficients D711.
  • the inverse transform unit 710 reads the inverse quantized transform coefficients D711 and outputs a block of reconstructed residuals D713.
  • the adder unit 712 reads the block of reconstructed residuals D713 and the block of prediction samples D731 and outputs a block of reconstructed samples D715. Some of the reconstructed samples are stored in the memory unit 728.
  • the filtering unit 714 finally reads the block of reconstructed samples and outputs the block of filtered samples to the memory unit 716.
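  • To make the data flow of FIG. 7 concrete, here is a deliberately simplified sketch (Python; the transform, entropy coding and prediction units are omitted, and a flat scalar quantizer with an assumed step size stands in for units 702 through 710):

```python
QP_STEP = 8   # assumed scalar quantization step standing in for units 704/708

def encode_block(original, prediction):
    residual = [o - p for o, p in zip(original, prediction)]        # signal D703
    levels = [round(r / QP_STEP) for r in residual]                 # signal D707
    recon_residual = [l * QP_STEP for l in levels]                  # signal D713
    reconstructed = [min(255, max(0, p + r))                        # signal D715
                     for p, r in zip(prediction, recon_residual)]
    return levels, reconstructed

def filter_block(samples, strength):
    # Stand-in for the filtering unit 714: light horizontal smoothing.
    out = list(samples)
    for i in range(1, len(samples) - 1):
        avg = (samples[i - 1] + 2 * samples[i] + samples[i + 1] + 2) // 4
        out[i] = samples[i] + (avg - samples[i]) * strength // 4
    return out

original = [100, 104, 180, 176, 102, 98, 170, 172]   # one row of original samples D701
prediction = [100] * 8                               # prediction samples from selector 722
levels, recon = encode_block(original, prediction)
print(levels)                    # what the entropy coding unit 706 would receive
print(filter_block(recon, 4))    # filtered samples stored in memory unit 716
```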
  • FIG. 8 shows an example apparatus for a video decoder of the current invention. It consists of an entropy decoding unit 800, an inverse quantization unit 802, an inverse transform unit 804, an adder unit 806, a selector unit 810, an intra prediction unit 818, memory units 812 and 814, a motion compensation unit 816 and a filtering unit 808.
  • the entropy decoding unit 800 reads a compressed video and outputs a block of quantized coefficients D803.
  • the inverse quantization unit 802 reads the quantized transform coefficients D803 and outputs the inverse quantized transform coefficients D805.
  • the inverse transform unit 804 reads the inverse quantized transform coefficients D805 and outputs a block of reconstructed residuals D807.
  • the adder unit 806 reads the block of reconstructed residuals D807 and the block of prediction samples D821 and outputs a block of reconstructed samples D809. Some of the reconstructed samples are stored in the memory unit 812.
  • the filtering unit 808 reads the block of reconstructed samples D809 and outputs the block of filtered samples D811 to the memory unit 814.
  • the block of filtered samples D811 is also outputted as the reconstructed video.
  • the motion compensation unit 816 reads reconstructed samples D813 from the memory unit 814 and outputs a block of motion predicted samples D815 to the selector unit 810.
  • the intra prediction unit 818 reads reconstructed samples from the memory unit 812 and outputs a block of intra predicted samples D819 to the selector unit 810.
  • the selector unit 810 selects either the block of intra prediction samples D819 or the block of motion predicted samples D815 and outputs a block of prediction samples D821 to the adder unit 806.
  • FIG. 9A to FIG. 9C show the locations of a plurality of sets of deblocking filter control parameters in a header of an image.
  • multiple sets of deblocking filter control parameters, where each set is inclusive of a disable_deblocking_filter_idc parameter, an alpha_c0_offset_div2 parameter and a beta_offset_div2 parameter, are located in a picture parameter set.
  • multiple sets of deblocking filter control parameters, where each set is inclusive of a disable_deblocking_filter_idc parameter, an alpha_c0_offset_div2 parameter and a beta_offset_div2 parameter, are located in a slice header.
  • multiple sets of deblocking filter control parameters, where each set is inclusive of a disable_deblocking_filter_idc parameter, an alpha_c0_offset_div2 parameter and a beta_offset_div2 parameter, are located in a slice parameter set.
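  • As a hedged illustration (Python; the container layout is an assumption, and only the three syntax element names come from the figures above), a plurality of deblocking control sets could be carried either once per picture in the picture parameter set or repeatedly in each slice header:

```python
def make_set(idc, alpha_div2, beta_div2):
    return {"disable_deblocking_filter_idc": idc,
            "alpha_c0_offset_div2": alpha_div2,
            "beta_offset_div2": beta_div2}

deblocking_sets = [make_set(0, 2, 2),     # set 0: offsets giving stronger filtering
                   make_set(0, -2, -2)]   # set 1: offsets giving weaker filtering

# FIG. 9A-style placement: signaled once per picture, in the picture parameter set.
picture_parameter_set = {"pic_parameter_set_id": 0,
                         "deblocking_sets": deblocking_sets}

# FIG. 9B-style placement: repeated in every slice header instead.
slice_header = {"slice_type": "P",
                "deblocking_sets": deblocking_sets}

def set_for_block(header, tu_size):
    sets = header["deblocking_sets"]
    return sets[0] if tu_size <= 8 else sets[1]   # pre-defined association

print(set_for_block(picture_parameter_set, 4))
print(set_for_block(slice_header, 32))
```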
  • FIG. 10A to FIG. 10C show the locations of a plurality of sets of filter coefficient parameters in a header of an image.
  • multiple sets of filter parameters, where each set is inclusive of filter coefficients, are located in a picture parameter set.
  • multiple sets of filter parameters, where each set is inclusive of filter coefficients, are located in a slice header.
  • multiple sets of filter parameters, where each set is inclusive of filter coefficients, are located in a slice parameter set.
  • Method 1 A method of encoding video using an adaptive filtering process comprising: selecting a transform unit size from a plurality of pre-determined transform unit sizes (100); performing a transform process on a block of residuals based on said selected transform size (102); encoding said block of transformed coefficients into a compressed video stream involving a quantization process (104); performing an inverse quantization process on said block of quantized transformed coefficients (106); performing an inverse transform process on said block of inverse quantized coefficients based on said selected transform size (108); reconstructing a block of image samples based on summing said block of inverse transformed residuals and a block of prediction samples (110); writing a plurality of sets of filter parameters into a header of a compressed image (112); selecting one set of filter parameters from said plurality of sets of filter parameters using said selected transform unit size as one of the selection criteria (114); and applying a filtering process on said block of reconstructed image samples using said selected set of filter parameters (116).
  • Method 2 A method of encoding video using an adaptive filtering process comprising: selecting a small transform unit size from a plurality of pre-determined transform unit sizes for a block of image samples containing an edge (1100); performing a transform process on a block of residuals based on said selected small transform size (1102); encoding said block of transformed coefficients into a compressed video stream involving a quantization process (1104); performing an inverse quantization process on said block of quantized transformed coefficients (1106); performing an inverse transform process on said block of inverse quantized coefficients based on said selected small transform size (1108); reconstructing a block of image samples based on summing said block of inverse transformed residuals and a block of prediction samples (1110); writing a plurality of sets of filter parameters into a header of a compressed image (1112); selecting one set of filter parameters that defines a strong filter from said plurality of sets of filter parameters using said selected small transform unit size as one of the selection criteria (1114); and applying a strong filtering process on said block of reconstructed image samples using said selected set of filter parameters.
  • Method 3 A method of decoding video using an adaptive filtering process comprising: parsing parameters from a coded header of a block unit to determine the size of a transform (200); selecting a transform unit size from a plurality of pre-determined transform unit sizes based on said parsed transform size parameters (202); decoding a block of quantized coefficients from a compressed video stream (204); performing an inverse quantization process on said decoded block of quantized transformed coefficients (206); performing an inverse transform process on said block of inverse quantized coefficients based on said selected transform size (208); reconstructing a block of image samples based on summing said block of inverse transformed residuals and a block of prediction samples (210); parsing a plurality of sets of filter parameters from a header of a compressed image (212); selecting one set of filter parameters from said parsed plurality of sets of filter parameters using said selected transform unit size as one of the selection criteria (214); and applying a filtering process on said block of reconstructed image samples using said selected set of filter parameters (216).
  • Method 4 The method of encoding or decoding video using an adaptive filtering process according to Method 1, 2 or 3, wherein said process to select one set of filter parameters using said selected transform unit size can additionally use the block prediction mode as a selection criterion.
  • Method 5 A method of encoding video using an adaptive filtering process comprising: selecting a spatial or temporal prediction mode (300); performing a prediction process to produce a block of prediction samples based on said selected prediction mode (302); subtracting said block of prediction samples from a block of image samples to produce a block of residual samples (304); encoding said block of residual samples into a compressed image utilizing a transform process and a quantization process (306); reconstructing said block of residual samples utilizing an inverse transform process and an inverse quantization process (308); reconstructing a block of image samples based on summing said block of residuals and said block of prediction samples (310); writing a plurality of sets of filter parameters into a header of said compressed image (312); selecting one set of filter parameters from said plurality of sets of filter parameters using said selected prediction mode as one of the selection criteria (314); and applying a filtering process on said block of reconstructed image samples using said selected set of filter parameters (316).
  • Method 6 A method of encoding video using an adaptive filtering process comprising: selecting an intra prediction mode from a plurality of prediction modes inclusive of an inter prediction mode (1200); performing a prediction process to produce a block of prediction samples based on said selected intra prediction mode (1202); subtracting said block of prediction samples from a block of image samples to produce a block of residual samples (1204); encoding said block of residual samples into a compressed image utilizing a transform process and a quantization process (1206); reconstructing said block of residual samples utilizing an inverse transform process and an inverse quantization process (1208); reconstructing a block of image samples based on summing said block of residuals and said block of prediction samples (1210); writing a plurality of sets of filter parameters into a header of said compressed image (1212); selecting one set of filter parameters that represents a strong filter from said plurality of sets of filter parameters using said selected intra prediction mode as one of the selection criteria (1214); and applying a strong filtering process on said intra predicted block of reconstructed image samples using said selected set of filter parameters.
  • Method 7 A method of decoding video using an adaptive filtering process comprising: parsing a parameter from a coded header of a block unit to determine the block prediction mode (400); performing a prediction process to produce a block of prediction samples based on said parsed prediction mode parameter (402); decoding a block of residual samples from a compressed image utilizing an inverse transform process and an inverse quantization process (404); reconstructing a block of image samples based on summing said block of residuals and said block of prediction samples (406); parsing a plurality of sets of filter parameters from a header of said compressed image (408); selecting one set of filter parameters from said plurality of sets of filter parameters using said parsed block prediction mode as one of the selection criteria (410); and applying a filtering process on said block of reconstructed image samples using said selected set of filter parameters (412).
  • Method 8 A method of encoding video using an adaptive filtering process comprising: selecting a quantization matrix from a plurality of quantization matrixes (500); encoding a block of image samples into a compressed image utilizing a quantization process with said selected quantization matrix (502); writing a parameter into a header of said coded block to indicate said selection for quantization matrix (504); decoding said block of image samples utilizing an inverse quantization process with said selected quantization matrix (506); writing a plurality of sets of filter parameters into a header of said compressed image (508); selecting one set of filter parameters from said plurality of sets of filter parameters using said selection parameter for quantization matrix as one of the selection criteria (510); and applying a filtering process on said block of reconstructed image samples using said selected set of filter parameters (512).
  • Method 9 A method of encoding video using an adaptive filtering process comprising: selecting a steep sloped quantization matrix from a plurality of quantization matrixes inclusive of a gentle sloped quantization matrix (1300); encoding a block of image samples into a compressed image utilizing a quantization process with said selected steep sloped quantization matrix (1302); writing a parameter into a header of said coded block to indicate said selection for quantization matrix (1304); decoding said block of image samples utilizing an inverse quantization process with said selected steep sloped quantization matrix (1306); writing a plurality of sets of filter parameters into a header of said compressed image (1308); selecting one set of filter parameters corresponding to a strong filter from said plurality of sets of filter parameters using said selection parameter for the steep sloped quantization matrix as one of the selection criteria (1310); and applying a strong filtering process on said block of reconstructed image samples using said selected set of filter parameters (1312).
  • Method 10 A method of decoding video using an adaptive filtering process comprising: parsing a parameter from a coded header of a block unit to indicate a selection parameter for quantization matrix (600); selecting a quantization matrix from a plurality of quantization matrixes based on said parsed selection parameter (602); decoding a block of image samples utilizing an inverse quantization process with said selected quantization matrix (604); parsing a plurality of sets of filter parameters from a header of said compressed image (606); selecting one set of filter parameters from said plurality of sets of filter parameters using said selected quantization matrix as one of the selection criteria (608); and applying a filtering process on said block of reconstructed image samples using said selected set of filter parameters (610).
  • Method 11 The method of encoding or decoding video using an adaptive filtering process according to Method 1, 2, 3, 5, 6, 7, 8, 9 or 10, wherein said filtering process includes a block edge filtering process.
  • Method 12 The method of encoding or decoding video using an adaptive filtering process according to Method 1, 2, 3, 5, 6, 7, 8, 9 or 10, wherein said filtering process includes a pixel filtering process.
  • Apparatus 13 An apparatus for encoding video using an adaptive filtering process comprising: selection unit operable to select a transform unit size from a plurality of pre-determined transform unit sizes (100); transform unit operable to perform a transform process on a block of residuals based on said selected transform size (102); encoding unit operable to encode said block of transformed coefficients into a compressed video stream involving a quantization process (104); inverse quantization unit operable to perform an inverse quantization process on said block of quantized transformed coefficients (106); inverse transform unit operable to perform an inverse transform process on said block of inverse quantized coefficients based on said selected transform size (108); reconstruction unit operable to reconstruct a block of image samples based on summing said block of inverse transformed residuals and a block of prediction samples (110); writing unit operable to write a plurality of sets of filter parameters into a header of a compressed image (112); selection unit operable to select one set of filter parameters from said plurality of sets of filter parameters using said selected transform unit size as one of the selection criteria (114); and filtering unit operable to apply a filtering process on said block of reconstructed image samples using said selected set of filter parameters (116).
  • Apparatus 14 An apparatus for encoding video using an adaptive filtering process comprising: selection unit operable to select a small transform unit size from a plurality of pre-determined transform unit sizes for a block of image samples containing an edge (1100); transform unit operable to perform a transform process on a block of residuals based on said selected small transform size (1102); encoding unit operable to encode said block of transformed coefficients into a compressed video stream involving a quantization process (1104); inverse quantization unit operable to perform an inverse quantization process on said block of quantized transformed coefficients (1106); inverse transform unit operable to perform an inverse transform process on said block of inverse quantized coefficients based on said selected small transform size (1108); reconstruction unit operable to reconstruct a block of image samples based on summing said block of inverse transformed residuals and a block of prediction samples (1110); writing unit operable to write a plurality of sets of filter parameters into a header of a compressed image (1112); selection unit operable to select one set of filter parameters that defines a strong filter from said plurality of sets of filter parameters using said selected small transform unit size as one of the selection criteria (1114); and filtering unit operable to apply a strong filtering process on said block of reconstructed image samples using said selected set of filter parameters.
  • Apparatus 15 An apparatus for decoding video using an adaptive filtering process comprising: parsing unit operable to parse parameters from a coded header of a block unit to determine the size of a transform (200); selection unit operable to select a transform unit size from a plurality of pre-determined transform unit sizes based on said parsed transform size parameters (202); decoding unit operable to decode a block of quantized coefficients from a compressed video stream (204); inverse quantization unit operable to perform an inverse quantization process on said decoded block of quantized transformed coefficients (206); inverse transform unit operable to perform an inverse transform process on said block of inverse quantized coefficients based on said selected transform size (208); reconstruction unit operable to reconstruct a block of image samples based on summing said block of inverse transformed residuals and a block of prediction samples (210); parsing unit operable to parse a plurality of sets of filter parameters from a header of a compressed image (212); selection unit operable to select one set of filter parameters from said parsed plurality of sets of filter parameters using said selected transform unit size as one of the selection criteria (214); and filtering unit operable to apply a filtering process on said block of reconstructed image samples using said selected set of filter parameters (216).
  • Apparatus 16 The apparatus for encoding or decoding video using an adaptive filtering process according to Apparatus 13, 14 or 15, wherein said selection unit to select one set of filter parameters using said selected transform unit size can additionally use the block prediction mode as a selection criterion.
  • Apparatus 17 An apparatus for encoding video using an adaptive filtering process comprising: selection unit operable to select a spatial or temporal prediction mode (300); prediction unit operable to perform a prediction process to produce a block of prediction samples based on said selected prediction mode (302); subtraction unit operable to subtract said block of prediction samples from a block of image samples to produce a block of residual samples (304); encoding unit operable to encode said block of residual samples into a compressed image utilizing a transform process and a quantization process (306); reconstructing unit operable to reconstruct said block of residual samples utilizing an inverse transform process and an inverse quantization process (308); reconstructing unit operable to reconstruct a block of image samples based on summing said block of residuals and said block of prediction samples (310); writing unit operable to write a plurality of sets of filter parameters into a header of said compressed image (312); selection unit operable to select one set of filter parameters from said plurality of sets of filter parameters using said selected prediction mode as one of the selection criteria (314); and filtering unit operable to apply a filtering process on said block of reconstructed image samples using said selected set of filter parameters (316).
  • Apparatus 18 An apparatus for encoding video using an adaptive filtering process comprising: selection unit operable to select an intra prediction mode from a plurality of prediction modes inclusive of an inter prediction mode (1200); prediction unit operable to perform a prediction process to produce a block of prediction samples based on said selected intra prediction mode (1202); subtraction unit operable to subtract said block of prediction samples from a block of image samples to produce a block of residual samples (1204); encoding unit operable to encode said block of residual samples into a compressed image utilizing a transform process and a quantization process (1206); reconstructing unit operable to reconstruct said block of residual samples utilizing an inverse transform process and an inverse quantization process (1208); reconstructing unit operable to reconstruct a block of image samples based on summing said block of residuals and said block of prediction samples (1210); writing unit operable to write a plurality of sets of filter parameters into a header of said compressed image (1212); selection unit operable to select one set of filter parameters that represents a strong filter from said plurality of sets of filter parameters using said selected intra prediction mode as one of the selection criteria (1214); and filtering unit operable to apply a strong filtering process on said intra predicted block of reconstructed image samples using said selected set of filter parameters.
  • Apparatus 19 An apparatus for decoding video using an adaptive filtering process comprising: parsing unit operable to parse a parameter from a coded header of a block unit to determine the block prediction mode (400); prediction unit operable to perform a prediction process to produce a block of prediction samples based on said parsed prediction mode parameter (402); decoding unit operable to decode a block of residual samples from a compressed image utilizing an inverse transform process and an inverse quantization process (404); reconstructing unit operable to reconstruct a block of image samples based on summing said block of residuals and said block of prediction samples (406); parsing unit operable to parse a plurality of sets of filter parameters from a header of said compressed image (408); selection unit operable to select one set of filter parameters from said plurality of sets of filter parameters using said parsed block prediction mode as one of the selection criteria (410); and filtering unit operable to apply a filtering process on said block of reconstructed image samples using said selected set of filter parameters (412).
  • Apparatus 20 An apparatus for encoding video using an adaptive filtering process comprising: selection unit operable to select a quantization matrix from a plurality of quantization matrixes (500); encoding unit operable to encode a block of image samples into a compressed image utilizing a quantization process with said selected quantization matrix (502); writing unit operable to write a parameter into a header of said coded block to indicate said selection for quantization matrix (504); decoding unit operable to decode said block of image samples utilizing an inverse quantization process with said selected quantization matrix (506); writing unit operable to write a plurality of sets of filter parameters into a header of said compressed image (508); selection unit operable to select one set of filter parameters from said plurality of sets of filter parameters using said selection parameter for quantization matrix as one of the selection criteria (510); and filtering unit operable to apply a filtering process on said block of reconstructed image samples using said selected set of filter parameters (512).
  • An apparatus for encoding video using an adaptive filtering process comprising: selection unit operable to select a steep sloped quantization matrix from a plurality of quantization matrixes inclusive of a gentle sloped quantization matrix (1300); encoding unit operable to encode a block of image samples into a compressed image utilizing a quantization process with said selected steep sloped quantization matrix (1302); writing unit operable to write a parameter into a header of said coded block to indicate said selection for quantization matrix (1304); decoding unit operable to decode said block of image samples utilizing an inverse quantization process with said selected steep sloped quantization matrix (1306); writing unit operable to write plurality sets of filter parameters into a header of said compressed image (1308); selection unit operable to select one set of filter parameters corresponding to a strong filter from said plurality sets of filter parameters using said selection parameter for the steep sloped quantization matrix as one of the selection criteria (1310); and filtering unit operable to apply a strong filtering process on said block of reconstructed image samples using said selected set of filter parameters.
  • An apparatus for decoding video using an adaptive filtering process comprising: parsing unit operable to parse a parameter from a coded header of a block unit to indicate a selection parameter for quantization matrix (600); selection unit operable to select a quantization matrix from a plurality of quantization matrixes based on said parsed selection parameter (602); decoding unit operable to decode a block of image samples utilizing an inverse quantization process with said selected quantization matrix (604); parsing unit operable to parse plurality sets of filter parameters from a header of said compressed image (606); selection unit operable to select one set of filter parameters from said plurality sets of filter parameters using said parsed selection parameter for quantization matrix as one of the selection criteria (608); and filtering unit operable to apply a filtering process on said block of reconstructed image samples using said selected set of filter parameters (610).
  • Apparatus 23: The apparatus for encoding or decoding video using an adaptive filtering process according to Apparatus 13, 14, 15, 17, 18, 19, 20, 21 or 22, wherein said filtering unit includes a block edge filtering unit.
  • Apparatus 24: The apparatus for encoding or decoding video using an adaptive filtering process according to Apparatus 13, 14, 15, 17, 18, 19, 20, 21 or 22, wherein said filtering unit includes a pixel filtering unit.
  • the processing described in each of embodiments can be simply implemented in an independent computer system, by recording, in a recording medium, a program for implementing the configurations of the moving picture coding method (image coding method) and the moving picture decoding method (image decoding method) described in each of embodiments.
  • the recording media may be any recording media as long as the program can be recorded, such as a magnetic disk, an optical disk, a magnetic optical disk, an IC card, and a semiconductor memory.
  • the system has a feature of having an image coding and decoding apparatus that includes an image coding apparatus using the image coding method and an image decoding apparatus using the image decoding method.
  • Other configurations in the system can be changed as appropriate depending on the cases.
  • FIG. 14 illustrates an overall configuration of a content providing system ex100 for implementing content distribution services.
  • the area for providing communication services is divided into cells of desired size, and base stations ex106, ex107, ex108, ex109, and ex110 which are fixed wireless stations are placed in each of the cells.
  • the content providing system ex100 is connected to devices, such as a computer ex111, a personal digital assistant (PDA) ex112, a camera ex113, a cellular phone ex114 and a game machine ex115, via the Internet ex101, an Internet service provider ex102, a telephone network ex104, as well as the base stations ex106 to ex110, respectively.
  • each device may be directly connected to the telephone network ex104, rather than via the base stations ex106 to ex110 which are the fixed wireless stations.
  • the devices may be interconnected to each other via a short distance wireless communication and others.
  • the camera ex113 such as a digital video camera
  • a camera ex116 such as a digital video camera
  • the cellular phone ex114 may be the one that meets any of the standards such as Global System for Mobile Communications (GSM) (registered trademark), Code Division Multiple Access (CDMA), Wideband-Code Division Multiple Access (W-CDMA), Long Term Evolution (LTE), and High Speed Packet Access (HSPA).
  • the cellular phone ex114 may be a Personal Handyphone System (PHS).
  • a streaming server ex103 is connected to the camera ex113 and others via the telephone network ex104 and the base station ex109, which enables distribution of images of a live show and others.
  • In the content providing system ex100, content (for example, video of a music live show) captured by the user using the camera ex113 is coded as described above in each of embodiments (i.e., the camera functions as the image coding apparatus according to an aspect of the present invention), and the coded content is transmitted to the streaming server ex103.
  • the streaming server ex103 carries out stream distribution of the transmitted content data to the clients upon their requests.
  • the clients include the computer ex111, the PDA ex112, the camera ex113, the cellular phone ex114, and the game machine ex115 that are capable of decoding the above-mentioned coded data.
  • Each of the devices that have received the distributed data decodes and reproduces the coded data (i.e., functions as the image decoding apparatus according to an aspect of the present invention).
  • the captured data may be coded by the camera ex113 or the streaming server ex103 that transmits the data, or the coding processes may be shared between the camera ex113 and the streaming server ex103.
  • the distributed data may be decoded by the clients or the streaming server ex103, or the decoding processes may be shared between the clients and the streaming server ex103.
  • the data of the still images and video captured by not only the camera ex113 but also the camera ex116 may be transmitted to the streaming server ex103 through the computer ex111.
  • the coding processes may be performed by the camera ex116, the computer ex111, or the streaming server ex103, or shared among them.
  • the coding and decoding processes may be performed by an LSI ex500 generally included in each of the computer ex111 and the devices.
  • the LSI ex500 may be configured of a single chip or a plurality of chips.
  • Software for coding and decoding video may be integrated into some type of a recording medium (such as a CD-ROM, a flexible disk, and a hard disk) that is readable by the computer ex111 and others, and the coding and decoding processes may be performed using the software.
  • the image data obtained by the camera may be transmitted.
  • the video data is data coded by the LSI ex500 included in the cellular phone ex114.
  • the streaming server ex103 may be composed of servers and computers, and may decentralize data and process the decentralized data, record, or distribute data.
  • the clients may receive and reproduce the coded data in the content providing system ex100.
  • the clients can receive and decode information transmitted by the user, and reproduce the decoded data in real time in the content providing system ex100, so that the user who does not have any particular right and equipment can implement personal broadcasting.
  • a broadcast station ex201 communicates or transmits, via radio waves to a broadcast satellite ex202, multiplexed data obtained by multiplexing audio data and others onto video data.
  • the video data is data coded by the moving picture coding method described in each of embodiments (i.e., data coded by the image coding apparatus according to an aspect of the present invention).
  • Upon receipt of the multiplexed data, the broadcast satellite ex202 transmits radio waves for broadcasting.
  • a home-use antenna ex204 with a satellite broadcast reception function receives the radio waves.
  • a device such as a television (receiver) ex300 and a set top box (STB) ex217 decodes the received multiplexed data, and reproduces the decoded data (i.e., functions as the image decoding apparatus according to an aspect of the present invention).
  • a reader/recorder ex218 (i) reads and decodes the multiplexed data recorded on a recording medium ex215, such as a DVD and a BD, or (ii) codes video signals in the recording medium ex215, and in some cases, writes data obtained by multiplexing an audio signal on the coded data.
  • the reader/recorder ex218 can include the moving picture decoding apparatus or the moving picture coding apparatus as shown in each of embodiments. In this case, the reproduced video signals are displayed on the monitor ex219, and can be reproduced by another device or system using the recording medium ex215 on which the multiplexed data is recorded.
  • the moving picture decoding apparatus may be implemented in the set top box ex217 connected to the cable ex203 for a cable television or to the antenna ex204 for satellite and/or terrestrial broadcasting, so as to display the video signals on the monitor ex219 of the television ex300.
  • the moving picture decoding apparatus may be implemented not in the set top box but in the television ex300.
  • FIG. 16 illustrates the television (receiver) ex300 that uses the moving picture coding method and the moving picture decoding method described in each of embodiments.
  • the television ex300 includes: a tuner ex301 that obtains or provides multiplexed data obtained by multiplexing audio data onto video data, through the antenna ex204 or the cable ex203, etc. that receives a broadcast; a modulation/demodulation unit ex302 that demodulates the received multiplexed data or modulates data into multiplexed data to be supplied outside; and a multiplexing/demultiplexing unit ex303 that demultiplexes the modulated multiplexed data into video data and audio data, or multiplexes video data and audio data coded by a signal processing unit ex306 into data.
  • the television ex300 further includes: a signal processing unit ex306 including an audio signal processing unit ex304 and a video signal processing unit ex305 that decode audio data and video data and code audio data and video data, respectively (which function as the image coding apparatus and the image decoding apparatus according to the aspects of the present invention); and an output unit ex309 including a speaker ex307 that provides the decoded audio signal, and a display unit ex308 that displays the decoded video signal, such as a display. Furthermore, the television ex300 includes an interface unit ex317 including an operation input unit ex312 that receives an input of a user operation.
  • the television ex300 includes a control unit ex310 that controls overall each constituent element of the television ex300, and a power supply circuit unit ex311 that supplies power to each of the elements.
  • the interface unit ex317 may include: a bridge ex313 that is connected to an external device, such as the reader/recorder ex218; a slot unit ex314 for enabling attachment of the recording medium ex216, such as an SD card; a driver ex315 to be connected to an external recording medium, such as a hard disk; and a modem ex316 to be connected to a telephone network.
  • the recording medium ex216 can electrically record information using a non-volatile/volatile semiconductor memory element for storage.
  • the constituent elements of the television ex300 are connected to each other through a synchronous bus.
  • the television ex300 decodes multiplexed data obtained from outside through the antenna ex204 and others and reproduces the decoded data
  • the multiplexing/demultiplexing unit ex303 demultiplexes the multiplexed data demodulated by the modulation/demodulation unit ex302, under control of the control unit ex310 including a CPU.
  • the audio signal processing unit ex304 decodes the demultiplexed audio data
  • the video signal processing unit ex305 decodes the demultiplexed video data, using the decoding method described in each of embodiments, in the television ex300.
  • the output unit ex309 provides the decoded video signal and audio signal outside, respectively.
  • the signals may be temporarily stored in buffers ex318 and ex319, and others so that the signals are reproduced in synchronization with each other.
  • the television ex300 may read multiplexed data not through a broadcast and others but from the recording media ex215 and ex216, such as a magnetic disk, an optical disk, and an SD card.
  • the audio signal processing unit ex304 codes an audio signal
  • the video signal processing unit ex305 codes a video signal, under control of the control unit ex310 using the coding method described in each of embodiments.
  • the multiplexing/demultiplexing unit ex303 multiplexes the coded video signal and audio signal, and provides the resulting signal outside.
  • the signals may be temporarily stored in the buffers ex320 and ex321, and others so that the signals are reproduced in synchronization with each other.
  • the buffers ex318, ex319, ex320, and ex321 may be plural as illustrated, or at least one buffer may be shared in the television ex300. Furthermore, data may be stored in a buffer so that the system overflow and underflow may be avoided between the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303, for example.
  • the television ex300 may include a configuration for receiving an AV input from a microphone or a camera other than the configuration for obtaining audio and video data from a broadcast or a recording medium, and may code the obtained data.
  • Although the television ex300 can code, multiplex, and provide outside data in the description, it may be capable of only receiving, decoding, and providing outside data but not the coding, multiplexing, and providing outside data.
  • When the reader/recorder ex218 reads or writes multiplexed data from or on a recording medium, one of the television ex300 and the reader/recorder ex218 may decode or code the multiplexed data, and the television ex300 and the reader/recorder ex218 may share the decoding or coding.
  • FIG. 17 illustrates a configuration of an information reproducing/recording unit ex400 when data is read or written from or on an optical disk.
  • the information reproducing/recording unit ex400 includes constituent elements ex401, ex402, ex403, ex404, ex405, ex406, and ex407 to be described hereinafter.
  • the optical head ex401 irradiates a laser spot in a recording surface of the recording medium ex215 that is an optical disk to write information, and detects reflected light from the recording surface of the recording medium ex215 to read the information.
  • the modulation recording unit ex402 electrically drives a semiconductor laser included in the optical head ex401, and modulates the laser light according to recorded data.
  • the reproduction demodulating unit ex403 amplifies a reproduction signal obtained by electrically detecting the reflected light from the recording surface using a photo detector included in the optical head ex401, and demodulates the reproduction signal by separating a signal component recorded on the recording medium ex215 to reproduce the necessary information.
  • the buffer ex404 temporarily holds the information to be recorded on the recording medium ex215 and the information reproduced from the recording medium ex215.
  • the disk motor ex405 rotates the recording medium ex215.
  • the servo control unit ex406 moves the optical head ex401 to a predetermined information track while controlling the rotation drive of the disk motor ex405 so as to follow the laser spot.
  • the system control unit ex407 controls overall the information reproducing/recording unit ex400.
  • the reading and writing processes can be implemented by the system control unit ex407 using various information stored in the buffer ex404 and generating and adding new information as necessary, and by the modulation recording unit ex402, the reproduction demodulating unit ex403, and the servo control unit ex406 that record and reproduce information through the optical head ex401 while being operated in a coordinated manner.
  • the system control unit ex407 includes, for example, a microprocessor, and executes processing by causing a computer to execute a program for read and write.
  • the optical head ex401 may perform high-density recording using near field light.
  • FIG. 18 illustrates the recording medium ex215 that is the optical disk.
  • an information track ex230 records, in advance, address information indicating an absolute position on the disk according to change in a shape of the guide grooves.
  • the address information includes information for determining positions of recording blocks ex231 that are a unit for recording data. Reproducing the information track ex230 and reading the address information in an apparatus that records and reproduces data can lead to determination of the positions of the recording blocks.
  • the recording medium ex215 includes a data recording area ex233, an inner circumference area ex232, and an outer circumference area ex234.
  • the data recording area ex233 is an area for use in recording the user data.
  • the inner circumference area ex232 and the outer circumference area ex234 that are inside and outside of the data recording area ex233, respectively are for specific use except for recording the user data.
  • the information reproducing/recording unit ex400 reads and writes coded audio data, coded video data, or multiplexed data obtained by multiplexing the coded audio and video data, from and on the data recording area ex233 of the recording medium ex215.
  • Although an optical disk having a layer, such as a DVD and a BD, is described as an example, the optical disk is not limited to such, and may be an optical disk having a multilayer structure and capable of being recorded on a part other than the surface.
  • the optical disk may have a structure for multidimensional recording/reproduction, such as recording of information using light of colors with different wavelengths in the same portion of the optical disk and for recording information having different layers from various angles.
  • a car ex210 having an antenna ex205 can receive data from the satellite ex202 and others, and reproduce video on a display device such as a car navigation system ex211 set in the car ex210, in the digital broadcasting system ex200.
  • a configuration of the car navigation system ex211 will be, for example, the configuration illustrated in FIG. 16 with a GPS receiving unit added. The same will be true for the configuration of the computer ex111, the cellular phone ex114, and others.
  • FIG. 19A illustrates the cellular phone ex114 that uses the moving picture coding method and the moving picture decoding method described in embodiments.
  • the cellular phone ex114 includes: an antenna ex350 for transmitting and receiving radio waves through the base station ex110; a camera unit ex365 capable of capturing moving and still images; and a display unit ex358 such as a liquid crystal display for displaying the data such as decoded video captured by the camera unit ex365 or received by the antenna ex350.
  • the cellular phone ex114 further includes: a main body unit including an operation key unit ex366; an audio output unit ex357 such as a speaker for output of audio; an audio input unit ex356 such as a microphone for input of audio; a memory unit ex367 for storing captured video or still pictures, recorded audio, coded or decoded data of the received video, the still pictures, e-mails, or others; and a slot unit ex364 that is an interface unit for a recording medium that stores data in the same manner as the memory unit ex367.
  • a main control unit ex360 designed to control overall each unit of the main body including the display unit ex358 as well as the operation key unit ex366 is connected mutually, via a synchronous bus ex370, to a power supply circuit unit ex361, an operation input control unit ex362, a video signal processing unit ex355, a camera interface unit ex363, a liquid crystal display (LCD) control unit ex359, a modulation/demodulation unit ex352, a multiplexing/demultiplexing unit ex353, an audio signal processing unit ex354, the slot unit ex364, and the memory unit ex367.
  • the power supply circuit unit ex361 supplies the respective units with power from a battery pack so as to activate the cell phone ex114.
  • the audio signal processing unit ex354 converts the audio signals collected by the audio input unit ex356 in voice conversation mode into digital audio signals under the control of the main control unit ex360 including a CPU, ROM, and RAM. Then, the modulation/demodulation unit ex352 performs spread spectrum processing on the digital audio signals, and the transmitting and receiving unit ex351 performs digital-to-analog conversion and frequency conversion on the data, so as to transmit the resulting data via the antenna ex350. Also, in the cellular phone ex114, the transmitting and receiving unit ex351 amplifies the data received by the antenna ex350 in voice conversation mode and performs frequency conversion and the analog-to-digital conversion on the data. Then, the modulation/demodulation unit ex352 performs inverse spread spectrum processing on the data, and the audio signal processing unit ex354 converts it into analog audio signals, so as to output them via the audio output unit ex357.
  • the video signal processing unit ex355 compresses and codes video signals supplied from the camera unit ex365 using the moving picture coding method shown in each of embodiments (i.e., functions as the image coding apparatus according to the aspect of the present invention), and transmits the coded video data to the multiplexing/demultiplexing unit ex353.
  • the audio signal processing unit ex354 codes audio signals collected by the audio input unit ex356, and transmits the coded audio data to the multiplexing/demultiplexing unit ex353.
  • the multiplexing/demultiplexing unit ex353 multiplexes the coded video data supplied from the video signal processing unit ex355 and the coded audio data supplied from the audio signal processing unit ex354, using a predetermined method. Then, the modulation/demodulation unit (modulation/demodulation circuit unit) ex352 performs spread spectrum processing on the multiplexed data, and the transmitting and receiving unit ex351 performs digital-to-analog conversion and frequency conversion on the data so as to transmit the resulting data via the antenna ex350.
  • the multiplexing/demultiplexing unit ex353 demultiplexes the multiplexed data into a video data bit stream and an audio data bit stream, and supplies the video signal processing unit ex355 with the coded video data and the audio signal processing unit ex354 with the coded audio data, through the synchronous bus ex370.
  • the video signal processing unit ex355 decodes the video signal using a moving picture decoding method corresponding to the moving picture coding method shown in each of embodiments (i.e., functions as the image decoding apparatus according to the aspect of the present invention), and then the display unit ex358 displays, for instance, the video and still images included in the video file linked to the Web page via the LCD control unit ex359. Furthermore, the audio signal processing unit ex354 decodes the audio signal, and the audio output unit ex357 provides the audio.
  • a terminal such as the cellular phone ex114 probably has three types of implementation configurations including not only (i) a transmitting and receiving terminal including both a coding apparatus and a decoding apparatus, but also (ii) a transmitting terminal including only a coding apparatus and (iii) a receiving terminal including only a decoding apparatus.
  • Although the digital broadcasting system ex200 receives and transmits the multiplexed data obtained by multiplexing audio data onto video data in the description, the multiplexed data may be data obtained by multiplexing not audio data but character data related to video onto video data, and may be not multiplexed data but video data itself.
  • the moving picture coding method and the moving picture decoding method in each of embodiments can be used in any of the devices and systems described.
  • the advantages described in each of embodiments can be obtained.
  • Video data can be generated by switching, as necessary, between (i) the moving picture coding method or the moving picture coding apparatus shown in each of embodiments and (ii) a moving picture coding method or a moving picture coding apparatus in conformity with a different standard, such as MPEG-2, MPEG-4 AVC, and VC-1.
  • multiplexed data obtained by multiplexing audio data and others onto video data has a structure including identification information indicating to which standard the video data conforms.
  • the specific structure of the multiplexed data including the video data generated in the moving picture coding method and by the moving picture coding apparatus shown in each of embodiments will be hereinafter described.
  • the multiplexed data is a digital stream in the MPEG-2 Transport Stream format.
  • FIG. 20 illustrates a structure of the multiplexed data.
  • the multiplexed data can be obtained by multiplexing at least one of a video stream, an audio stream, a presentation graphics stream (PG), and an interactive graphics stream.
  • the video stream represents primary video and secondary video of a movie
  • the audio stream represents a primary audio part and a secondary audio part to be mixed with the primary audio part
  • the presentation graphics stream represents subtitles of the movie.
  • the primary video is normal video to be displayed on a screen
  • the secondary video is video to be displayed on a smaller window in the primary video.
  • the interactive graphics stream represents an interactive screen to be generated by arranging the GUI components on a screen.
  • the video stream is coded in the moving picture coding method or by the moving picture coding apparatus shown in each of embodiments, or in a moving picture coding method or by a moving picture coding apparatus in conformity with a conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1.
  • the audio stream is coded in accordance with a standard, such as Dolby-AC-3, Dolby Digital Plus, MLP, DTS, DTS-HD, and linear PCM.
  • Each stream included in the multiplexed data is identified by PID. For example, 0x1011 is allocated to the video stream to be used for video of a movie, 0x1100 to 0x111F are allocated to the audio streams, 0x1200 to 0x121F are allocated to the presentation graphics streams, 0x1400 to 0x141F are allocated to the interactive graphics streams, 0x1B00 to 0x1B1F are allocated to the video streams to be used for secondary video of the movie, and 0x1A00 to 0x1A1F are allocated to the audio streams to be used for the secondary video to be mixed with the primary audio.
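  • As a concrete illustration of the PID ranges listed above, the following sketch classifies a PID into a stream category. The enum and function names are illustrative only and are not part of any standard API; only the numeric ranges come from the description above.

```c
#include <stdint.h>

/* Stream categories derived from the PID ranges listed above.
 * Names are illustrative; only the ranges come from the description. */
typedef enum {
    STREAM_PRIMARY_VIDEO,
    STREAM_AUDIO,
    STREAM_PRESENTATION_GRAPHICS,
    STREAM_INTERACTIVE_GRAPHICS,
    STREAM_SECONDARY_VIDEO,
    STREAM_SECONDARY_AUDIO,
    STREAM_UNKNOWN
} stream_category;

static stream_category classify_pid(uint16_t pid)
{
    if (pid == 0x1011)                  return STREAM_PRIMARY_VIDEO;
    if (pid >= 0x1100 && pid <= 0x111F) return STREAM_AUDIO;
    if (pid >= 0x1200 && pid <= 0x121F) return STREAM_PRESENTATION_GRAPHICS;
    if (pid >= 0x1400 && pid <= 0x141F) return STREAM_INTERACTIVE_GRAPHICS;
    if (pid >= 0x1B00 && pid <= 0x1B1F) return STREAM_SECONDARY_VIDEO;
    if (pid >= 0x1A00 && pid <= 0x1A1F) return STREAM_SECONDARY_AUDIO;
    return STREAM_UNKNOWN;
}
```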
  • FIG. 21 schematically illustrates how data is multiplexed.
  • a video stream ex235 composed of video frames and an audio stream ex238 composed of audio frames are transformed into a stream of PES packets ex236 and a stream of PES packets ex239, and further into TS packets ex237 and TS packets ex240, respectively.
  • data of a presentation graphics stream ex241 and data of an interactive graphics stream ex244 are transformed into a stream of PES packets ex242 and a stream of PES packets ex245, and further into TS packets ex243 and TS packets ex246, respectively.
  • These TS packets are multiplexed into a stream to obtain multiplexed data ex247.
  • FIG. 22 illustrates how a video stream is stored in a stream of PES packets in more detail.
  • the first bar in FIG. 22 shows a video frame stream in a video stream.
  • the second bar shows the stream of PES packets.
  • the video stream is divided into pictures as I pictures, B pictures, and P pictures each of which is a video presentation unit, and the pictures are stored in a payload of each of the PES packets.
  • Each of the PES packets has a PES header, and the PES header stores a Presentation Time-Stamp (PTS) indicating a display time of the picture, and a Decoding Time-Stamp (DTS) indicating a decoding time of the picture.
  • FIG. 23 illustrates a format of TS packets to be finally written on the multiplexed data.
  • Each of the TS packets is a 188-byte fixed length packet including a 4-byte TS header having information, such as a PID for identifying a stream and a 184-byte TS payload for storing data.
  • the PES packets are divided, and stored in the TS payloads, respectively.
  • each of the TS packets is given a 4-byte TP_Extra_Header, thus resulting in 192-byte source packets.
  • the source packets are written on the multiplexed data.
  • the TP_Extra_Header stores information such as an Arrival_Time_Stamp (ATS).
  • the ATS shows a transfer start time at which each of the TS packets is to be transferred to a PID filter.
  • the source packets are arranged in the multiplexed data as shown at the bottom of FIG. 23.
  • the numbers incrementing from the head of the multiplexed data are called source packet numbers (SPNs).
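  • The byte layout described above (a 188-byte TS packet made of a 4-byte TS header and a 184-byte payload, preceded by a 4-byte TP_Extra_Header to form a 192-byte source packet) can be sketched as follows. The struct and function names are illustrative, and the sketch assumes that source packets are packed back to back from the head of the multiplexed data, which is how the SPN numbering above is described.

```c
#include <stdint.h>

#define TS_HEADER_SIZE        4
#define TS_PAYLOAD_SIZE     184
#define TS_PACKET_SIZE      (TS_HEADER_SIZE + TS_PAYLOAD_SIZE)      /* 188 bytes */
#define TP_EXTRA_HEADER_SIZE  4
#define SOURCE_PACKET_SIZE  (TP_EXTRA_HEADER_SIZE + TS_PACKET_SIZE) /* 192 bytes */

typedef struct {
    uint8_t tp_extra_header[TP_EXTRA_HEADER_SIZE]; /* carries the ATS              */
    uint8_t ts_header[TS_HEADER_SIZE];             /* carries the PID and flags    */
    uint8_t ts_payload[TS_PAYLOAD_SIZE];           /* carries the divided PES data */
} source_packet;

/* Source packet number (SPN) of the source packet that starts at the given
 * byte offset from the head of the multiplexed data. */
static uint32_t spn_from_offset(uint64_t byte_offset)
{
    return (uint32_t)(byte_offset / SOURCE_PACKET_SIZE);
}
```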
  • Each of the TS packets included in the multiplexed data includes not only streams of audio, video, subtitles and others, but also a Program Association Table (PAT), a Program Map Table (PMT), and a Program Clock Reference (PCR).
  • the PAT shows what a PID in a PMT used in the multiplexed data indicates, and a PID of the PAT itself is registered as zero.
  • the PMT stores PIDs of the streams of video, audio, subtitles and others included in the multiplexed data, and attribute information of the streams corresponding to the PIDs.
  • the PMT also has various descriptors relating to the multiplexed data. The descriptors have information such as copy control information showing whether copying of the multiplexed data is permitted or not.
  • the PCR stores STC time information corresponding to an ATS showing when the PCR packet is transferred to a decoder, in order to achieve synchronization between an Arrival Time Clock (ATC) that is a time axis of ATSs, and a System Time Clock (STC) that is a time axis of PTSs and DTSs.
  • FIG. 24 illustrates the data structure of the PMT in detail.
  • a PMT header is disposed at the top of the PMT.
  • the PMT header describes the length of data included in the PMT and others.
  • a plurality of descriptors relating to the multiplexed data is disposed after the PMT header. Information such as the copy control information is described in the descriptors.
  • a plurality of pieces of stream information relating to the streams included in the multiplexed data is disposed.
  • Each piece of stream information includes stream descriptors each describing information, such as a stream type for identifying a compression codec of a stream, a stream PID, and stream attribute information (such as a frame rate or an aspect ratio).
  • the stream descriptors are equal in number to the number of streams in the multiplexed data.
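  • An in-memory view of the PMT layout described above (a PMT header, descriptors that apply to the whole multiplexed data, then one stream information entry per stream) might look like the following sketch; all struct and field names are illustrative assumptions.

```c
#include <stdint.h>

/* One stream information entry: the stream type (identifying the compression
 * codec), the stream PID, and stream descriptors such as frame rate or aspect ratio. */
typedef struct {
    uint8_t   stream_type;
    uint16_t  stream_pid;
    char    **stream_descriptors;      /* frame rate, aspect ratio, ...        */
    unsigned  num_stream_descriptors;
} pmt_stream_info;

typedef struct {
    uint16_t         data_length;      /* length of data included in the PMT   */
    char           **descriptors;      /* e.g., copy control information       */
    unsigned         num_descriptors;
    pmt_stream_info *streams;          /* equal in number to the streams       */
    unsigned         num_streams;
} program_map_table;
```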
  • the multiplexed data When the multiplexed data is recorded on a recording medium and others, it is recorded together with multiplexed data information files.
  • Each of the multiplexed data information files is management information of the multiplexed data as shown in FIG. 25.
  • the multiplexed data information files are in one to one correspondence with the multiplexed data, and each of the files includes multiplexed data information, stream attribute information, and an entry map.
  • the multiplexed data information includes a system rate, a reproduction start time, and a reproduction end time.
  • the system rate indicates the maximum transfer rate at which a system target decoder to be described later transfers the multiplexed data to a PID filter.
  • the intervals of the ATSs included in the multiplexed data are set to not higher than a system rate.
  • the reproduction start time indicates a PTS in a video frame at the head of the multiplexed data. An interval of one frame is added to a PTS in a video frame at the end of the multiplexed data, and the PTS is set to the reproduction end time.
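  • The reproduction end time described above is therefore the PTS of the video frame at the end of the multiplexed data plus one frame interval. A minimal sketch, assuming PTS values on the usual 90 kHz system clock (the clock rate is an assumption, not stated in the text):

```c
#include <stdint.h>

/* Reproduction end time = PTS of the last video frame + one frame interval,
 * with PTS expressed in 90 kHz clock ticks (assumed). */
static uint64_t reproduction_end_time(uint64_t last_frame_pts, double frame_rate)
{
    uint64_t frame_interval = (uint64_t)(90000.0 / frame_rate + 0.5);
    return last_frame_pts + frame_interval;
}
```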
  • a piece of attribute information is registered in the stream attribute information, for each PID of each stream included in the multiplexed data.
  • Each piece of attribute information has different information depending on whether the corresponding stream is a video stream, an audio stream, a presentation graphics stream, or an interactive graphics stream.
  • Each piece of video stream attribute information carries information including what kind of compression codec is used for compressing the video stream, and the resolution, aspect ratio and frame rate of the pieces of picture data that is included in the video stream.
  • Each piece of audio stream attribute information carries information including what kind of compression codec is used for compressing the audio stream, how many channels are included in the audio stream, which language the audio stream supports, and how high the sampling frequency is.
  • the video stream attribute information and the audio stream attribute information are used for initialization of a decoder before the player plays back the information.
  • the multiplexed data to be used is of a stream type included in the PMT. Furthermore, when the multiplexed data is recorded on a recording medium, the video stream attribute information included in the multiplexed data information is used. More specifically, the moving picture coding method or the moving picture coding apparatus described in each of embodiments includes a step or a unit for allocating unique information indicating video data generated by the moving picture coding method or the moving picture coding apparatus in each of embodiments, to the stream type included in the PMT or the video stream attribute information. With the configuration, the video data generated by the moving picture coding method or the moving picture coding apparatus described in each of embodiments can be distinguished from video data that conforms to another standard.
  • FIG. 27 illustrates steps of the moving picture decoding method according to the present embodiment.
  • Step exS100 the stream type included in the PMT or the video stream attribute information is obtained from the multiplexed data.
  • Step exS101 it is determined whether or not the stream type or the video stream attribute information indicates that the multiplexed data is generated by the moving picture coding method or the moving picture coding apparatus in each of embodiments.
  • Step exS102 decoding is performed by the moving picture decoding method in each of embodiments.
  • the stream type or the video stream attribute information indicates conformance to the conventional standards, such as MPEG-2, MPEG-4 AVC, and VC-1
  • Step exS103 decoding is performed by a moving picture decoding method in conformity with the conventional standards.
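  • The branch in steps exS100 to exS103 above can be sketched as follows. The stream-type constant and all function names are hypothetical placeholders standing in for the unique identification value, the decoding method of the embodiments, and a conventional MPEG-2/MPEG-4 AVC/VC-1 decoder.

```c
#include <stdint.h>

typedef struct multiplexed_data multiplexed_data;   /* opaque handle, illustrative */

/* Hypothetical hooks, not functions defined by any real library. */
uint8_t get_video_stream_type(const multiplexed_data *mux);
int     decode_with_embodiment_method(const multiplexed_data *mux);
int     decode_with_conventional_method(const multiplexed_data *mux);

#define NEW_CODING_METHOD_ID 0xA1   /* placeholder for the unique stream type value */

int decode_multiplexed_data(const multiplexed_data *mux)
{
    uint8_t stream_type = get_video_stream_type(mux);   /* exS100 */
    if (stream_type == NEW_CODING_METHOD_ID)            /* exS101 */
        return decode_with_embodiment_method(mux);      /* exS102 */
    return decode_with_conventional_method(mux);        /* exS103 */
}
```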
  • allocating a new unique value to the stream type or the video stream attribute information enables determination whether or not the moving picture decoding method or the moving picture decoding apparatus that is described in each of embodiments can perform decoding. Even when multiplexed data that conforms to a different standard is input, an appropriate decoding method or apparatus can be selected. Thus, it becomes possible to decode information without any error. Furthermore, the moving picture coding method or apparatus, or the moving picture decoding method or apparatus in the present embodiment can be used in the devices and systems described above.
  • FIG. 28 illustrates a configuration of the LSI ex500 that is made into one chip.
  • the LSI ex500 includes elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509 to be described below, and the elements are connected to each other through a bus ex510.
  • the power supply circuit unit ex505 is activated by supplying each of the elements with power when the power supply circuit unit ex505 is turned on.
  • the LSI ex500 receives an AV signal from a microphone ex117, a camera ex113, and others through an AV IO ex509 under control of a control unit ex501 including a CPU ex502, a memory controller ex503, a stream controller ex504, and a driving frequency control unit ex512.
  • the received AV signal is temporarily stored in an external memory ex511, such as an SDRAM.
  • the stored data is segmented into data portions according to the processing amount and speed to be transmitted to a signal processing unit ex507.
  • the signal processing unit ex507 codes an audio signal and/or a video signal.
  • the coding of the video signal is the coding described in each of embodiments.
  • the signal processing unit ex507 sometimes multiplexes the coded audio data and the coded video data, and a stream IO ex506 provides the multiplexed data outside.
  • the provided multiplexed data is transmitted to the base station ex107, or written on the recording media ex215.
  • the data should be temporarily stored in the buffer ex508 so that the data sets are synchronized with each other.
  • the memory ex511 is an element outside the LSI ex500, it may be included in the LSI ex500.
  • the buffer ex508 is not limited to one buffer, but may be composed of buffers. Furthermore, the LSI ex500 may be made into one chip or a plurality of chips.
  • control unit ex501 includes the CPU ex502, the memory controller ex503, the stream controller ex504, the driving frequency control unit ex512
  • the configuration of the control unit ex501 is not limited to such.
  • the signal processing unit ex507 may further include a CPU. Inclusion of another CPU in the signal processing unit ex507 can improve the processing speed.
  • the CPU ex502 may serve as or be a part of the signal processing unit ex507, and, for example, may include an audio signal processing unit.
  • the control unit ex501 includes the signal processing unit ex507 or the CPU ex502 including a part of the signal processing unit ex507.
  • The name used here is LSI, but it may also be called IC, system LSI, super LSI, or ultra LSI depending on the degree of integration.
  • ways to achieve integration are not limited to the LSI, and a special circuit or a general purpose processor and so forth can also achieve the integration.
  • Field Programmable Gate Array (FPGA) that can be programmed after manufacturing LSIs or a reconfigurable processor that allows re-configuration of the connection or configuration of an LSI can be used for the same purpose.
  • the processing amount probably increases.
  • the LSI ex500 needs to be set to a driving frequency higher than that of the CPU ex502 to be used when video data in conformity with the conventional standard is decoded.
  • However, when the driving frequency is set higher, there is a problem that the power consumption increases.
  • the moving picture decoding apparatus such as the television ex300 and the LSI ex500 is configured to determine to which standard the video data conforms, and switch between the driving frequencies according to the determined standard.
  • FIG. 29 illustrates a configuration ex800 in the present embodiment.
  • a driving frequency switching unit ex803 sets a driving frequency to a higher driving frequency when video data is generated by the moving picture coding method or the moving picture coding apparatus described in each of embodiments. Then, the driving frequency switching unit ex803 instructs a decoding processing unit ex801 that executes the moving picture decoding method described in each of embodiments to decode the video data.
  • the driving frequency switching unit ex803 sets a driving frequency to a lower driving frequency than that of the video data generated by the moving picture coding method or the moving picture coding apparatus described in each of embodiments. Then, the driving frequency switching unit ex803 instructs the decoding processing unit ex802 that conforms to the conventional standard to decode the video data.
  • the driving frequency switching unit ex803 includes the CPU ex502 and the driving frequency control unit ex512 in FIG. 28.
  • each of the decoding processing unit ex801 that executes the moving picture decoding method described in each of embodiments and the decoding processing unit ex802 that conforms to the conventional standard corresponds to the signal processing unit ex507 in FIG. 28.
  • the CPU ex502 determines to which standard the video data conforms.
  • the driving frequency control unit ex512 determines a driving frequency based on a signal from the CPU ex502.
  • the signal processing unit ex507 decodes the video data based on the signal from the CPU ex502. For example, the identification information described in Embodiment 3 is probably used for identifying the video data.
  • the identification information is not limited to the one described in Embodiment 3 but may be any information as long as the information indicates to which standard the video data conforms. For example, when which standard video data conforms to can be determined based on an external signal for determining that the video data is used for a television or a disk, etc., the determination may be made based on such an external signal.
  • the CPU ex502 selects a driving frequency based on, for example, a look-up table in which the standards of the video data are associated with the driving frequencies as shown in FIG. 31.
  • the driving frequency can be selected by storing the look-up table in the buffer ex508 and in an internal memory of an LSI, and with reference to the look-up table by the CPU ex502.
  • FIG. 30 illustrates steps for executing a method in the present embodiment.
  • the signal processing unit ex507 obtains identification information from the multiplexed data.
  • the CPU ex502 determines whether or not the video data is generated by the coding method and the coding apparatus described in each of embodiments, based on the identification information.
  • When the identification information indicates that the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments, in Step exS202, the CPU ex502 transmits a signal for setting the driving frequency to a higher driving frequency to the driving frequency control unit ex512. Then, the driving frequency control unit ex512 sets the driving frequency to the higher driving frequency.
  • On the other hand, when the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1, in Step exS203, the CPU ex502 transmits a signal for setting the driving frequency to a lower driving frequency to the driving frequency control unit ex512. Then, the driving frequency control unit ex512 sets the driving frequency to the lower driving frequency than that in the case where the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments.
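  • Steps exS200 to exS203 above amount to a simple branch on the identification information; a minimal sketch follows, in which the two frequency values and all function names are illustrative assumptions rather than values from the text.

```c
#include <stdbool.h>

#define DRIVING_FREQ_HIGH_KHZ 500000u   /* assumed value for the higher frequency */
#define DRIVING_FREQ_LOW_KHZ  350000u   /* assumed value for the lower frequency  */

/* Hypothetical hooks for the CPU ex502 and the driving frequency control unit ex512. */
bool video_is_from_embodiment_codec(const void *multiplexed_data);
void set_driving_frequency_khz(unsigned khz);

void select_driving_frequency(const void *multiplexed_data)
{
    /* exS200/exS201: obtain the identification information and check it */
    if (video_is_from_embodiment_codec(multiplexed_data))
        set_driving_frequency_khz(DRIVING_FREQ_HIGH_KHZ);   /* exS202 */
    else
        set_driving_frequency_khz(DRIVING_FREQ_LOW_KHZ);    /* exS203 */
}
```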
  • the power conservation effect can be improved by changing the voltage to be applied to the LSI ex500 or an apparatus including the LSI ex500.
  • the voltage to be applied to the LSI ex500 or the apparatus including the LSI ex500 is probably set to a voltage lower than that in the case where the driving frequency is set higher.
  • when the processing amount for decoding is larger, the driving frequency may be set higher, and when the processing amount for decoding is smaller, the driving frequency may be set lower as the method for setting the driving frequency.
  • the setting method is not limited to the ones described above.
  • the driving frequency is probably set in reverse order to the setting described above.
  • the method for setting the driving frequency is not limited to the method for setting the driving frequency lower.
  • For example, when the identification information indicates that the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments, the voltage to be applied to the LSI ex500 or the apparatus including the LSI ex500 is probably set higher.
  • When the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1, the voltage to be applied to the LSI ex500 or the apparatus including the LSI ex500 is probably set lower.
  • As another example, when the identification information indicates that the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments, the driving of the CPU ex502 probably does not have to be suspended.
  • When the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1, the driving of the CPU ex502 is probably suspended at a given time because the CPU ex502 has extra processing capacity.
  • Even when the identification information indicates that the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments, in the case where the CPU ex502 has extra processing capacity, the driving of the CPU ex502 is probably suspended at a given time; in such a case, the suspending time is probably set shorter than that in the case where the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1.
  • the power conservation effect can be improved by switching between the driving frequencies in accordance with the standard to which the video data conforms. Furthermore, when the LSI ex500 or the apparatus including the LSI ex500 is driven using a battery, the battery life can be extended with the power conservation effect.
  • the decoding processing unit for implementing the moving picture decoding method described in each of embodiments and the decoding processing unit that conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1 are partly shared.
  • Ex900 in FIG. 32A shows an example of the configuration.
  • the moving picture decoding method described in each of embodiments and the moving picture decoding method that conforms to MPEG-4 AVC have, partly in common, the details of processing, such as entropy decoding, inverse quantization, deblocking filtering, and motion compensated prediction.
  • the details of processing to be shared probably include use of a decoding processing unit ex902 that conforms to MPEG-4 AVC.
  • a dedicated decoding processing unit ex901 is probably used for other processing unique to an aspect of the present invention. Since the aspect of the present invention is characterized by the adaptive filtering process in particular, for example, the dedicated decoding processing unit ex901 is used for the adaptive filtering process. Otherwise, the decoding processing unit is probably shared for one of the entropy decoding, inverse quantization, deblocking filtering, and motion compensation, or all of the processing.
  • the decoding processing unit for implementing the moving picture decoding method described in each of embodiments may be shared for the processing to be shared, and a dedicated decoding processing unit may be used for processing unique to that of MPEG-4 AVC.
  • ex1000 in FIG. 32B shows another example in that processing is partly shared.
  • This example uses a configuration including a dedicated decoding processing unit ex1001 that supports the processing unique to an aspect of the present invention, a dedicated decoding processing unit ex1002 that supports the processing unique to another conventional standard, and a decoding processing unit ex1003 that supports processing to be shared between the moving picture decoding method according to the aspect of the present invention and the conventional moving picture decoding method.
  • the dedicated decoding processing units ex1001 and ex1002 are not necessarily specialized for the processing according to the aspect of the present invention and the processing of the conventional standard, respectively, and may be the ones capable of implementing general processing.
  • the configuration of the present embodiment can be implemented by the LSI ex500.
  • Methods for encoding and decoding video according to the present invention have advantages of improving the quality of video.
  • the methods are applicable to video cameras, mobile phones, and personal computers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Recent video coding schemes support a post-image filtering process on the reconstructed image samples in addition to a block edge filtering process. The filtering processes can be adapted on a block-by-block basis based on the spatial activity measurement of the block of image samples. The current invention provides methods and apparatuses to improve the adaptability of these filtering processes by adapting them based on the prediction mode, transform sizes and quantization matrixes used for the reconstruction of a block of image samples. The benefits of the current invention are in the form of improved picture quality.

Description

METHODS FOR ENCODING AND DECODING VIDEO USING AN ADAPTIVE FILTERING PROCESS
This invention can be used in any multimedia data coding and, more particularly, in image and video coding supporting either block edge or pixel de-noising filtering on reconstructed samples.
In conventional video compression schemes, a quantization process is performed in the frequency domain to reduce the coded bits of the video. Typically, the quantization process causes visual artifacts such as blocky noise and ringing noise in the reconstructed images, and spatial filtering processes can be used to reduce this noise and improve visual quality. In some video compression schemes, quantization matrixes are supported, where different frequency components may have different quantizers. Typically, in the combined case when both quantization matrixes and variable block transform sizes are supported in the same compression scheme, different block transform sizes may support different quantization matrixes.
The latest video coding schemes, such as the upcoming HEVC (High-Efficiency Video Coding), support two filtering processes: one to reduce blocky artifacts at block edges and one to reduce the noise in reconstructed image samples. Both filtering processes are customizable by signaling in a coded stream, and the strength of the filtering can be adaptively changed between block units within the same picture.
In the prior art, the block edge filtering process is adapted within a picture by the quantization parameter and the spatial activity on both sides of a block edge. The pixel filtering process, on the other hand, is adapted on a block basis using the spatial activities of the pixels in the same block. However, when quantization matrixes are used, the filtering processes in the prior art do not adapt within the same picture based on the quantization matrixes used. Neither do they adapt to the variable block transform sizes, nor do they adapt to the block prediction mode used for the reconstruction of the image. Because of this lack of adaptability, the filtering process is not optimized to minimize the noise created by the quantization process when different quantization matrixes are used.
To solve these problems, new methods for an adaptive filtering process are introduced. The new methods allow the filtering processes (inclusive of both the de-blocking and the pixel de-noising filtering) to be adapted on a block-by-block basis based on either the block transform size, the block prediction mode or the quantization matrix selection.
What is novel about this invention is that the filtering process can be adapted by selecting one set of filtering parameters from a plurality sets of filtering parameters using the block transform size selection, the block prediction mode selection or the quantization matrix selection.
The effects of the current invention are in the form of improvement in picture quality by allowing more adaptability to the filtering processes.
FIG. 1 is a flowchart showing an encoding process in the first aspect of the current invention.
FIG. 2 is a flowchart showing a decoding process in the first aspect of the current invention.
FIG. 3 is a flowchart showing an encoding process in the second aspect of the current invention.
FIG. 4 is a flowchart showing a decoding process in the second aspect of the current invention.
FIG. 5 is a flowchart showing an encoding process in the third aspect of the current invention.
FIG. 6 is a flowchart showing a decoding process in the third aspect of the current invention.
FIG. 7 is a block diagram illustrating an example apparatus for a video encoder of the current invention.
FIG. 8 is a block diagram illustrating an example apparatus for a video decoder of the current invention.
FIG. 9A is a diagram showing the locations of a plurality of sets of deblocking filter control parameters in a header of an image.
FIG. 9B is a diagram showing the locations of a plurality of sets of deblocking filter control parameters in a header of an image.
FIG. 9C is a diagram showing the locations of a plurality of sets of deblocking filter control parameters in a header of an image.
FIG. 10A is a diagram showing the locations of a plurality of sets of filter coefficients in a header of an image.
FIG. 10B is a diagram showing the locations of a plurality of sets of filter coefficients in a header of an image.
FIG. 10C is a diagram showing the locations of a plurality of sets of filter coefficients in a header of an image.
FIG. 11 is a flowchart showing an example encoding process in the first aspect of the current invention.
FIG. 12 is a flowchart showing an example encoding process in the second aspect of the current invention.
FIG. 13 is a flowchart showing an example encoding process in the third aspect of the current invention.
FIG. 14 shows an overall configuration of a content providing system for implementing content distribution services.
FIG. 15 shows an overall configuration of a digital broadcasting system.
FIG. 16 shows a block diagram illustrating an example of a configuration of a television.
FIG. 17 shows a block diagram illustrating an example of a configuration of an information reproducing/recording unit that reads and writes information from and on a recording medium that is an optical disk.
FIG. 18 shows an example of a configuration of a recording medium that is an optical disk.
FIG. 19A shows an example of a cellular phone.
FIG. 19B is a block diagram showing an example of a configuration of a cellular phone.
FIG. 20 illustrates a structure of multiplexed data.
FIG. 21 schematically shows how each stream is multiplexed in multiplexed data.
FIG. 22 shows how a video stream is stored in a stream of PES packets in more detail.
FIG. 23 shows a structure of TS packets and source packets in the multiplexed data.
FIG. 24 shows a data structure of a PMT.
FIG. 25 shows an internal structure of multiplexed data information.
FIG. 26 shows an internal structure of stream attribute information.
FIG. 27 shows steps for identifying video data.
FIG. 28 shows an example of a configuration of an integrated circuit for implementing the moving picture coding method and the moving picture decoding method according to each of embodiments.
FIG. 29 shows a configuration for switching between driving frequencies.
FIG. 30 shows steps for identifying video data and switching between driving frequencies.
FIG. 31 shows an example of a look-up table in which video data standards are associated with driving frequencies.
FIG. 32A is a diagram showing an example of a configuration for sharing a module of a signal processing unit.
FIG. 32B is a diagram showing another example of a configuration for sharing a module of the signal processing unit.
Embodiment 1
FIG. 1 is a flowchart showing an encoding process in the first aspect of the current invention. Firstly, in module 100, a transform unit size is selected from a plurality of pre-determined transform unit sizes. Examples of transform unit sizes include the 4x4, 8x8, 16x16 and 32x32 transform unit sizes. In module 102, a transform process is performed on a block of residuals to produce a block of transform coefficients based on the selected transform unit size. The block of residuals is computed by subtracting a block of prediction values from a block of image samples. And in module 104, the block of transform coefficients is coded into a compressed video stream. The process to code the block of transform coefficients includes a quantization process whose output is a block of quantized transform coefficients.
Next in module 106, a de-quantization process is performed on the block of quantized transform coefficients to produce a block of reconstructed transform coefficients. The block of reconstructed transform coefficients is then inverse-transformed into a block of reconstructed residuals using the selected transform unit size in module 108. And in module 110, a block of image samples is reconstructed based on summing the block of reconstructed residuals with a block of prediction samples.
In module 112, a plurality of sets of filter parameters is written into a header of the compressed image. The sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering. And in module 114, one set of filter parameters is selected from the plurality of sets of filter parameters based on the selected transform unit size. Each transform unit size is associated with at least one set of filter parameters based on a pre-defined association, where one of the transform unit sizes is associated with the set of filter parameters that defines a strong filter and another of the transform unit sizes is associated with the set of filter parameters that defines a weak filter. In an alternate process in module 114, one set of filter parameters is selected from the plurality of sets of filter parameters based on a combination of the selected transform unit size and the block prediction mode.
Finally, in module 116, a filtering process is applied on the block of reconstructed image samples using the selected set of filter parameters. A set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process. A set of filter parameters can also include filter coefficient values to control the pixel filtering process.
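To make the selection step concrete, the following is a minimal, non-normative sketch in Python of modules 112 to 116. The parameter names, the header values and the association between transform unit sizes and filter strengths are assumptions chosen for illustration only.

```python
# Illustrative sketch of modules 112-116 (not a normative implementation).
# Parameter names and values are assumptions for illustration.

# Plurality of sets of filter parameters written into the image header (module 112).
FILTER_PARAM_SETS = {
    "strong": {"alpha_c0_offset_div2": 3,  "beta_offset_div2": 3},
    "weak":   {"alpha_c0_offset_div2": -2, "beta_offset_div2": -2},
}

# Pre-defined association: each transform unit size maps to a filter strength.
TU_SIZE_TO_FILTER = {4: "strong", 8: "strong", 16: "weak", 32: "weak"}

def select_filter_params(tu_size, prediction_mode=None):
    """Select one set of filter parameters for a block (module 114)."""
    strength = TU_SIZE_TO_FILTER[tu_size]
    # Alternate criterion: combine the transform unit size with the block
    # prediction mode (illustrative rule only).
    if prediction_mode == "intra":
        strength = "strong"
    return FILTER_PARAM_SETS[strength]

# Module 116: the selected set then drives the block edge filtering
# (decision thresholds, clipping) and/or the pixel filtering (coefficients).
params = select_filter_params(tu_size=4)
```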
FIG. 11 is a flowchart showing an example encoding process in the first aspect of the current invention. Firstly, in module 1100, a smaller transform unit size is selected from a plurality of pre-determined transform unit sizes for a block of image samples containing an edge. Examples of transform unit sizes include the 4x4, 8x8, 16x16 and 32x32 transform unit sizes, where the smaller transform unit size may be 4x4. In module 1102, a transform process is performed on a block of residuals to produce a block of transform coefficients based on the selected smaller transform unit size. The block of residuals is computed by subtracting a block of prediction values from a block of image samples. And in module 1104, the block of transform coefficients is coded into a compressed video stream. The process to code the block of transform coefficients includes a quantization process whose output is a block of quantized transform coefficients.
Next in module 1106, a de-quantization process is performed on the block of quantized transform coefficients to produce a block of reconstructed transform coefficients. The block of reconstructed transform coefficients is then inverse-transformed into a block of reconstructed residuals using the selected smaller transform unit size in module 1108. And in module 1110, a block of image samples is reconstructed based on summing the block of reconstructed residuals with a block of prediction samples.
In module 1112, a plurality of sets of filter parameters is written into a header of the compressed image. The sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering. And in module 1114, the set of filter parameters that defines a strong filter is selected from the plurality of sets of filter parameters based on the selected smaller transform unit size. The smaller transform unit size is associated with the strong filter. In an alternate process in module 1114, one set of filter parameters is selected from the plurality of sets of filter parameters based on a combination of the selected transform unit size and the block prediction mode.
Finally, in module 1116, a filtering process is applied on the block of reconstructed image samples using the selected set of filter parameters. A set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process. A set of filter parameters can also include filter coefficient values to control the pixel filtering process.
FIG. 2 is a flowchart showing a decoding process in the first aspect of the current invention. In module 200, parameters are parsed from a coded header of a block unit to determine the size of a transform. Examples of these parameters include flags indicating whether to sub-divide or split a transform unit from a larger size. Next, in module 202, a transform unit size is selected from a plurality of pre-determined transform unit sizes based on said parsed transform size parameters. And in module 204, a block of quantized transform coefficients is decoded from a compressed video stream. An inverse quantization process is then performed on the decoded block of quantized transform coefficients in module 206 to produce a block of reconstructed transform coefficients. The block of reconstructed transform coefficients is then inverse-transformed into a block of reconstructed residuals using the selected transform unit size in module 208. And in module 210, a block of image samples is reconstructed based on summing the block of reconstructed residuals with a block of prediction samples.
In module 212, a plurality of sets of filter parameters is parsed from a header of the compressed image. The sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering. And in module 214, one set of filter parameters is selected from the plurality of sets of filter parameters based on the selected transform unit size. Each transform unit size is associated with at least one set of filter parameters based on a pre-defined association, where one of the transform unit sizes is associated with the set of filter parameters that defines a strong filter and another of the transform unit sizes is associated with the set of filter parameters that defines a weak filter. In an alternate process in module 214, one set of filter parameters is selected from the plurality of sets of filter parameters based on a combination of the selected transform unit size and the block prediction mode.
Finally, in module 216, a filtering process is applied on the block of reconstructed image samples using the selected set of filter parameters. A set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process. A set of filter parameters can also include filter coefficient values to control the pixel filtering process.
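A corresponding decoder-side sketch for modules 200 to 216 is given below. The split-flag interpretation and the size-to-strength rule are assumptions; they only illustrate that the decoder can repeat the encoder's selection without any per-block signalling of the filter choice.

```python
# Illustrative decoder-side sketch of modules 200-216 (assumed syntax).

def transform_unit_size_from_flags(max_size, split_flags):
    """Modules 200-202: each '1' split flag halves the transform unit size."""
    size = max_size
    for flag in split_flags:
        if flag == 1 and size > 4:
            size //= 2
    return size

def select_filter_params_for_block(header_param_sets, max_tu_size, split_flags):
    """Modules 212-214: pick one parameter set parsed from the image header."""
    tu_size = transform_unit_size_from_flags(max_tu_size, split_flags)
    # Same pre-defined association as the encoder (smaller size -> strong filter).
    key = "strong" if tu_size <= 8 else "weak"
    return tu_size, header_param_sets[key]

# Example: a 32x32 maximum transform unit split twice gives an 8x8 unit,
# which selects the strong filter parameter set.
sets = {"strong": {"beta_offset_div2": 3}, "weak": {"beta_offset_div2": -2}}
size, params = select_filter_params_for_block(sets, 32, [1, 1])
```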
FIG. 3 is a flowchart describing an encoding process in the second aspect of the current invention. Firstly, a prediction mode is selected in module 300. The prediction mode can be an intra prediction mode or an inter prediction mode. Next, in module 302, a prediction process is performed to produce a block of prediction samples based on the selected prediction mode. A block of residual samples is created in module 304 by subtracting the block of prediction samples from a block of image samples. The block of residual samples is then encoded into a compressed image utilizing a transform process and a quantization process in module 306. In module 308, the block of residual samples is reconstructed utilizing an inverse transform process and an inverse quantization process, and a block of image samples is reconstructed by summing the reconstructed block of residuals and the block of prediction samples in module 310.
In module 312, a plurality of sets of filter parameters is written into a header of the compressed image. The sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering. And in module 314, one set of filter parameters is selected from the plurality of sets of filter parameters based on the selected prediction mode. Each prediction mode (intra or inter) is associated with at least one set of filter parameters based on a pre-defined association, where one of the prediction modes is associated with the set of filter parameters that defines a strong filter and the other prediction mode is associated with the set of filter parameters that defines a weak filter. Finally, in module 316, a filtering process is applied on the block of reconstructed image samples using the selected set of filter parameters. A set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process. A set of filter parameters can also include filter coefficient values to control the pixel filtering process.
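A short sketch of the prediction-mode-based selection of modules 312 to 316 follows; the intra-to-strong and inter-to-weak mapping mirrors the pre-defined association described above, and the names are illustrative only.

```python
# Illustrative sketch of modules 312-316: select a filter parameter set
# from the sets written into the header, keyed by the block prediction mode.

PREDICTION_MODE_TO_FILTER = {"intra": "strong", "inter": "weak"}  # pre-defined

def select_filter_params_by_mode(prediction_mode, header_param_sets):
    return header_param_sets[PREDICTION_MODE_TO_FILTER[prediction_mode]]

# Usage: an intra-coded block is filtered with the strong parameter set.
sets = {"strong": {"beta_offset_div2": 3}, "weak": {"beta_offset_div2": -2}}
params = select_filter_params_by_mode("intra", sets)
```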
FIG. 12 is a flowchart describing an example encoding process in the second aspect of the current invention. Firstly, an intra prediction mode is selected from a plurality of prediction modes in module 1200. The intra prediction mode is selected based on a lower cost measurement relative to the other prediction modes. Next, in module 1202, an intra prediction process is performed to produce a block of prediction samples based on the selected intra prediction mode. A block of residual samples is created in module 1204 by subtracting the block of intra prediction samples from a block of image samples. The block of residual samples is then encoded into a compressed image utilizing a transform process and a quantization process in module 1206. In module 1208, the block of residual samples is reconstructed utilizing an inverse transform process and an inverse quantization process, and a block of image samples is reconstructed by summing the reconstructed block of residuals and the block of intra prediction samples in module 1210.
In module 1212, a plurality of sets of filter parameters is written into a header of the compressed image. The sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering. And in module 1214, the set of filter parameters that defines a strong filter is selected from the plurality of sets of filter parameters based on the selected intra prediction mode. The intra prediction mode is associated with the set of filter parameters that defines a strong filter, and the inter prediction mode is associated with the set of filter parameters that defines a weak filter. Finally, in module 1216, a filtering process is applied on the block of reconstructed image samples using the selected set of strong filter parameters. A set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process. A set of filter parameters can also include filter coefficient values to control the pixel filtering process.
FIG. 4 is a flowchart showing a decoding process in the second aspect of the current invention. In module 400, a parameter is parsed from a coded header of a block unit to determine the block prediction mode. The parsed parameter indicates whether the block unit is intra predicted or inter predicted. Next, in module 402, a prediction process is performed to produce a block of prediction samples based on said parsed block prediction mode parameter. In module 404, a block of residuals is then decoded from a compressed image utilizing an inverse transform process and an inverse quantization process. A block of image samples is reconstructed by summing the block of decoded residuals and the block of prediction samples in module 406.
In module 408, a plurality of sets of filter parameters is parsed from a header of the compressed image. The sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering. And in module 410, one set of filter parameters is selected from the plurality of sets of filter parameters based on the parsed prediction mode. Each prediction mode is associated with at least one set of filter parameters based on a pre-defined association, where one of the prediction modes is associated with the set of filter parameters that defines a strong filter and the other prediction mode is associated with the set of filter parameters that defines a weak filter. Finally, in module 412, a filtering process is applied on the block of reconstructed image samples using the selected set of filter parameters. A set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process. A set of filter parameters can also include filter coefficient values to control the pixel filtering process.
FIG. 5 is a flowchart describing an encoding process in the third aspect of the current invention. In module 500, a quantization matrix is selected from a plurality of quantization matrixes. For each transform unit size, more than one quantization matrix can be utilized to quantize the coefficients of a transform unit within the same picture. Next, in module 502, a block of image samples is encoded into a compressed image utilizing a quantization process using the selected quantization matrix. And in module 504, a parameter is written into a header of the coded block of image samples to indicate which quantization matrix was selected for the quantization process. In module 506, the block of image samples is then decoded utilizing an inverse quantization process using the selected quantization matrix.
In module 508, a plurality of sets of filter parameters is written into a header of the compressed image. The sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering. And in module 510, one set of filter parameters is selected from the plurality of sets of filter parameters based on the selection parameter for the quantization matrix. Each quantization matrix is associated with at least one set of filter parameters based on a pre-defined association, where one of the quantization matrixes is associated with the set of filter parameters that defines a strong filter and another of the quantization matrixes is associated with the set of filter parameters that defines a weak filter. Finally, in module 512, a filtering process is applied on the block of reconstructed image samples using the selected set of filter parameters. A set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process. A set of filter parameters can also include filter coefficient values to control the pixel filtering process.
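The sketch below illustrates modules 508 to 512 under the assumption that the quantization matrix selection parameter is a simple index; the association of a steep sloped matrix with the strong filter follows the example of FIG. 13, and all names are illustrative.

```python
# Illustrative sketch of modules 508-512: the quantization matrix selection
# parameter written into the block header also drives the filter selection.

# Assumed indices: 0 = gentle sloped matrix, 1 = steep sloped matrix.
QM_INDEX_TO_FILTER = {0: "weak", 1: "strong"}  # pre-defined association

def select_filter_params_by_qm(qm_index, header_param_sets):
    return header_param_sets[QM_INDEX_TO_FILTER[qm_index]]

# A steep sloped matrix quantizes high frequencies coarsely, so the block is
# deblocked with the strong parameter set.
sets = {"strong": {"beta_offset_div2": 3}, "weak": {"beta_offset_div2": -2}}
params = select_filter_params_by_qm(1, sets)
```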
FIG. 13 is a flowchart describing an example encoding process in the third aspect of the current invention. In module 1300, a steep sloped quantization matrix is selected from a plurality of quantization matrixes inclusive of a gentle sloped quantization matrix. For each transform unit size, more than one quantization matrix can be utilized to quantize the coefficients of a transform unit within the same picture. Next, in module 1302, a block of image samples is encoded into a compressed image utilizing a quantization process using the selected steep sloped quantization matrix. And in module 1304, a parameter is written into a header of the coded block of image samples to indicate that the steep sloped quantization matrix was selected for the quantization process. In module 1306, the block of image samples is then decoded utilizing an inverse quantization process using the selected steep sloped quantization matrix.
In module 1308, a plurality of sets of filter parameters is written into a header of the compressed image. The sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering. And in module 1310, the set of filter parameters that defines the strong filter is selected from the plurality of sets of filter parameters based on the selection parameter for the steep sloped quantization matrix. The steep sloped quantization matrix is associated with the set of filter parameters that defines a strong filter, and the gentle sloped quantization matrix is associated with the set of filter parameters that defines a weak filter. Finally, in module 1312, a filtering process is applied on the block of reconstructed image samples using the selected set of strong filter parameters. A set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process. A set of filter parameters can also include filter coefficient values to control the pixel filtering process.
FIG. 6 is a flowchart showing a decoding process in the third aspect of the current invention. In module 600, a parameter is parsed from a coded header of a block unit to determine a selection parameter for the quantization matrix. In module 602, a quantization matrix is selected from a plurality of quantization matrixes based on said parsed selection parameter. The plurality of quantization matrixes can be signaled in a header of a coded image or pre-defined. Next, in module 604, a block of image samples is decoded utilizing an inverse quantization process using the selected quantization matrix.
In module 606, a plurality of sets of filter parameters is parsed from a header of the compressed image. The sets of filter parameters include a set of filter parameters for strong filtering and a set of filter parameters for weak filtering. And in module 608, one set of filter parameters is selected from the plurality of sets of filter parameters based on the selected quantization matrix. Each quantization matrix is associated with at least one set of filter parameters based on a pre-defined association, where one of the quantization matrixes is associated with the set of filter parameters that defines a strong filter and another of the quantization matrixes is associated with the set of filter parameters that defines a weak filter. Finally, in module 610, a filtering process is applied on the block of reconstructed image samples using the selected set of filter parameters. A set of filter parameters can include parameters that affect decision thresholds and maximum clipping values to control the block edge filtering process. A set of filter parameters can also include filter coefficient values to control the pixel filtering process.
FIG. 7 shows an example apparatus for a video encoder of the current invention. As shown in FIG. 7, it comprises a subtraction unit 700, a transform unit 702, a quantization unit 704, an entropy coding unit 706, an inverse quantization unit 708, an inverse transform unit 710, an adder unit 712, a filtering unit 714, two memory units 716 and 728, a motion estimation unit 718, a motion compensation unit 720, a selector unit 722, an intra prediction unit 724 and an intra prediction mode selection unit 726.
Firstly, the intra prediction mode selection unit 726 reads a block of original samples D701 and reconstructed samples D725 from a memory unit 728 and outputs an intra prediction mode D731. The intra prediction unit 724 reads the intra prediction mode D731 and the reconstructed samples D727 and outputs a block of intra prediction samples D729. The motion estimation unit 718 reads the block of original samples D701 and a reconstructed frame D719 stored in the memory unit 716 and outputs motion prediction parameters D721. The motion compensation unit 720 reads the motion prediction parameters D721 and the reconstructed frame D719 and outputs a block of motion predicted samples D723. The selector unit 722 then selects either the block of intra predicted samples D729 or the block of motion predicted samples D723 and outputs a block of prediction samples D731 to the subtraction unit 700.
The subtraction unit 700 reads the block of original samples D701 and the block of prediction samples D731 and outputs a block of residual samples D703. The transform unit 702 reads the block of residual samples D703 and outputs a block of transform coefficients D705. The quantization unit 704 reads the block of transform coefficients D705 and outputs the quantized coefficients D707 to the entropy coding unit 706 which outputs the compressed video.
The inverse quantization unit 708 reads the quantized transform coefficients D707 and outputs the inverse quantized transform coefficients D711. The inverse transform unit 710 reads the inverse quantized transform coefficients D711 and outputs a block of reconstructed residuals D713. The adder unit 712 reads the block of reconstructed residuals D713 and the block of prediction samples D731 and outputs a block of reconstructed samples D715. Some of the reconstructed samples are stored in the memory unit 728. The filtering unit 714 finally reads the block of reconstructed samples and outputs the block of filtered samples to the memory unit 716.
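The data flow of FIG. 7 can be summarised by the following sketch, in which every call stands in for the corresponding unit; it is a structural outline only, under assumed function names, not an implementation of any particular unit.

```python
# Structural outline of the encoder in FIG. 7 for one block; each call is a
# placeholder for the unit with the reference number shown in the comment.

def encode_block(original, reference_frame, recon_neighbours, units):
    intra_mode = units.intra_mode_select(original, recon_neighbours)   # 726
    intra_pred = units.intra_predict(intra_mode, recon_neighbours)     # 724
    motion     = units.motion_estimate(original, reference_frame)      # 718
    inter_pred = units.motion_compensate(motion, reference_frame)      # 720
    prediction = units.select(intra_pred, inter_pred)                  # 722

    residual   = units.subtract(original, prediction)                  # 700
    coeffs     = units.transform(residual)                             # 702
    q_coeffs   = units.quantize(coeffs)                                # 704
    bitstream  = units.entropy_code(q_coeffs)                          # 706

    rec_res    = units.inverse_transform(units.inverse_quantize(q_coeffs))  # 708, 710
    recon      = units.add(rec_res, prediction)                        # 712
    filtered   = units.filter(recon)  # 714: applies the adaptively selected set
    return bitstream, recon, filtered
```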
FIG. 8 shows an example apparatus for a video decoder of the current invention. It consists of an entropy decoding unit 800, an inverse quantization unit 802, an inverse transform unit 804, an adder unit 806, a selector unit 810, an intra prediction unit 818, memory units 812 and 814, a motion compensation unit 816 and a filtering unit 808.
Firstly, the entropy decoding unit 800 reads a compressed video and outputs a block of quantized coefficients D803. The inverse quantization unit 802 reads the quantized transform coefficients D803 and outputs the inverse quantized transform coefficients D805. The inverse transform unit 804 reads the inverse quantized transform coefficients D805 and outputs a block of reconstructed residuals D807. The adder unit 806 reads the block of reconstructed residuals D807 and the block of prediction samples D821 and outputs a block of reconstructed samples D809. Some of the reconstructed samples are stored in the memory unit 812. The filtering unit 808 reads the block of reconstructed samples D809 and outputs the block of filtered samples D811 to the memory unit 814. The block of filtered samples D811 is also outputted as the reconstructed video. The motion compensation unit 816 reads reconstructed samples D813 from the memory unit 814 and outputs a block of motion predicted samples D815 to the selector unit 810. The intra prediction unit 818 reads reconstructed samples from the memory unit 812 and outputs a block of intra predicted samples D819 to the selector unit 810. The selector unit 810 selects either the block of intra prediction samples D819 or the block of motion predicted samples D815 and outputs a block of prediction samples D821 to the adder unit 806.
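A matching outline of the decoder in FIG. 8 is given below; again every call is a placeholder for the referenced unit, and the filtering step uses the parameter set selected as described for modules 212 to 216.

```python
# Structural outline of the decoder in FIG. 8 for one block.

def decode_block(bitstream, reference_frame, recon_neighbours, units):
    q_coeffs   = units.entropy_decode(bitstream)                       # 800
    coeffs     = units.inverse_quantize(q_coeffs)                      # 802
    rec_res    = units.inverse_transform(coeffs)                       # 804

    intra_pred = units.intra_predict(recon_neighbours)                 # 818
    inter_pred = units.motion_compensate(reference_frame)              # 816
    prediction = units.select(intra_pred, inter_pred)                  # 810

    recon      = units.add(rec_res, prediction)                        # 806
    filtered   = units.filter(recon)  # 808: selected filter parameter set
    return filtered                   # output as the reconstructed video
```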
FIG. 9A to FIG. 9C show the locations of a plurality of sets of deblocking filter control parameters in a header of an image. As shown in FIG. 9A, multiple sets of deblocking filter control parameters, where each set is inclusive of a disable_deblocking_filter_idc parameter, an alpha_c0_offset_div2 parameter or a beta_offset_div2 parameter, are located in a picture parameter set. As shown in FIG. 9B, multiple sets of deblocking filter control parameters, where each set is inclusive of a disable_deblocking_filter_idc parameter, an alpha_c0_offset_div2 parameter or a beta_offset_div2 parameter, are located in a slice header. As shown in FIG. 9C, multiple sets of deblocking filter control parameters, where each set is inclusive of a disable_deblocking_filter_idc parameter, an alpha_c0_offset_div2 parameter or a beta_offset_div2 parameter, are located in a slice parameter set.
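The following sketch suggests how a plurality of sets of deblocking filter control parameters could be carried in a picture parameter set, slice header or slice parameter set as in FIG. 9A to FIG. 9C. The use of Exp-Golomb style read/write helpers and the ordering of the syntax elements are assumptions made for illustration, not a defined syntax.

```python
# Illustrative (non-normative) syntax for multiple deblocking filter control
# parameter sets in a header; write_ue/write_se and read_ue/read_se are
# assumed Exp-Golomb style bitstream helpers, not a defined API.

def write_filter_param_sets(writer, param_sets):
    writer.write_ue(len(param_sets))                          # number of sets
    for p in param_sets:
        writer.write_ue(p["disable_deblocking_filter_idc"])
        writer.write_se(p["alpha_c0_offset_div2"])
        writer.write_se(p["beta_offset_div2"])

def parse_filter_param_sets(reader):
    num_sets = reader.read_ue()
    return [
        {
            "disable_deblocking_filter_idc": reader.read_ue(),
            "alpha_c0_offset_div2": reader.read_se(),
            "beta_offset_div2": reader.read_se(),
        }
        for _ in range(num_sets)
    ]
```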
FIG. 10A to FIG. 10C show the locations of a plurality of sets of filter coefficient parameters in a header of an image. As shown in FIG. 10A, multiple sets of filter parameters, where each set is inclusive of filter coefficients, are located in a picture parameter set. As shown in FIG. 10B, multiple sets of filter parameters, where each set is inclusive of filter coefficients, are located in a slice header. As shown in FIG. 10C, multiple sets of filter parameters, where each set is inclusive of filter coefficients, are located in a slice parameter set.
<Summary>
(Method 1) A method of encoding video using an adaptive filtering process comprising of: selecting a transform unit size from a plurality of pre-determined transform unit sizes (100); performing a transform process on a block of residuals based on said selected transform size (102); encoding said block of transformed coefficients into a compressed video stream involving a quantization process (104); performing an inverse quantization process on said block of quantized transformed coefficients (106); performing an inverse transform process on said block of inverse quantized coefficients based on said selected transform size (108); reconstructing a block of image samples based on summing said block of inverse transformed residuals and a block of prediction samples (110); writing a plurality of sets of filter parameters into a header of a compressed image (112); selecting one set of filter parameters from said plurality of sets of filter parameters using said selected transform unit size as one of the selection criteria (114); and applying a filtering process on said block of reconstructed image samples using said selected set of filter parameters (116).
(Method 2) A method of encoding video using an adaptive filtering process comprising of: selecting a small transform unit size from a plurality of pre-determined transform unit sizes for a block of image samples containing an edge (1100); performing a transform process on a block of residuals based on said selected small transform size (1102); encoding said block of transformed coefficients into a compressed video stream involving a quantization process (1104); performing an inverse quantization process on said block of quantized transformed coefficients (1106); performing an inverse transform process on said block of inverse quantized coefficients based on said selected small transform size (1108); reconstructing a block of image samples based on summing said block of inverse transformed residuals and a block of prediction samples (1110); writing a plurality of sets of filter parameters into a header of a compressed image (1112); selecting one set of filter parameters that defines a strong filter from said plurality of sets of filter parameters using said selected small transform unit size as one of the selection criteria (1114); and applying a strong filtering process on said block of reconstructed image samples using said selected set of filter parameters (1116).
(Method 3) A method of decoding video using an adaptive filtering process comprising of: parsing parameters from a coded header of a block unit to determine the size of a transform (200); selecting a transform unit size from a plurality of pre-determined transform unit sizes based on said parsed transform size parameters (202); decoding a block of quantized coefficients from a compressed video stream (204); performing an inverse quantization process on said decoded block of quantized transformed coefficients (206); performing an inverse transform process on said block of inverse quantized coefficients based on said selected transform size (208); reconstructing a block of image samples based on summing said block of inverse transformed residuals and a block of prediction samples (210); parsing a plurality of sets of filter parameters from a header of a compressed image (212); selecting one set of filter parameters from said parsed plurality of sets of filter parameters using said selected transform unit size as one of the selection criteria (214); and applying a filtering process on said block of reconstructed image samples using said selected set of filter parameters (216).
(Method 4) The method of encoding or decoding video using an adaptive filtering process according to Method 1, 2 or 3, wherein said process to select one set of filter parameters using said selected transform unit size may additionally use the block prediction mode as a selection criterion.
(Method 5) A method of encoding video using an adaptive filtering process comprising of: selecting a spatial or temporal prediction mode (300); performing a prediction process to produce a block of prediction samples based on said selected prediction mode (302); subtracting a block of image samples with said block of prediction samples to produce a block of residual samples (304); encoding said block of residual samples into a compressed image utilizing a transform process and a quantization process (306); reconstructing said block of residual samples utilizing an inverse transform process and an inverse quantization process (308); reconstructing a block of image samples based on summing said block of residuals and said block of prediction samples (310); writing a plurality of sets of filter parameters into a header of said compressed image (312); selecting one set of filter parameters from said plurality of sets of filter parameters using said selected prediction mode as one of the selection criteria (314); and applying a filtering process on said block of reconstructed image samples using said selected set of filter parameters (316).
(Method 6) A method of encoding video using an adaptive filtering process comprising of: selecting an intra prediction mode from a plurality of prediction modes inclusive of an inter prediction mode (1200); performing a prediction process to produce a block of prediction samples based on said selected intra prediction mode (1202); subtracting a block of image samples with said block of prediction samples to produce a block of residual samples (1204); encoding said block of residual samples into a compressed image utilizing a transform process and a quantization process (1206); reconstructing said block of residual samples utilizing an inverse transform process and an inverse quantization process (1208); reconstructing a block of image samples based on summing said block of residuals and said block of prediction samples (1210); writing a plurality of sets of filter parameters into a header of said compressed image (1212); selecting one set of filter parameters that represents a strong filter from said plurality of sets of filter parameters using said selected intra prediction mode as one of the selection criteria (1214); and applying a strong filtering process on said intra predicted block of reconstructed image samples using said selected set of filter parameters (1216).
(Method 7) A method of decoding video using an adaptive filtering process comprising of: parsing a parameter from a coded header of a block unit to determine the block prediction mode (400); performing a prediction process to produce a block of prediction samples based on said parsed prediction mode parameter (402); decoding a block of residual samples from a compressed image utilizing an inverse transform process and an inverse quantization process (404); reconstructing a block of image samples based on summing said block of residuals and said block of prediction samples (406); parsing a plurality of sets of filter parameters from a header of said compressed image (408); selecting one set of filter parameters from said plurality of sets of filter parameters using said parsed block prediction mode as one of the selection criteria (410); and applying a filtering process on said block of reconstructed image samples using said selected set of filter parameters (412).
(Method 8) A method of encoding video using an adaptive filtering process comprising of: selecting a quantization matrix from a plurality of quantization matrixes (500); encoding a block of image samples into a compressed image utilizing a quantization process with said selected quantization matrix (502); writing a parameter into a header of said coded block to indicate said selection for quantization matrix (504); decoding said block of image samples utilizing an inverse quantization process with said selected quantization matrix (506); writing a plurality of sets of filter parameters into a header of said compressed image (508); selecting one set of filter parameters from said plurality of sets of filter parameters using said selection parameter for quantization matrix as one of the selection criteria (510); and applying a filtering process on said block of reconstructed image samples using said selected set of filter parameters (512).
(Method 9) A method of encoding video using an adaptive filtering process comprising of: selecting a steep sloped quantization matrix from a plurality of quantization matrixes inclusive of a gentle sloped quantization matrix (1300); encoding a block of image samples into a compressed image utilizing a quantization process with said selected steep sloped quantization matrix (1302); writing a parameter into a header of said coded block to indicate said selection for quantization matrix (1304); decoding said block of image samples utilizing an inverse quantization process with said selected steep sloped quantization matrix (1306); writing a plurality of sets of filter parameters into a header of said compressed image (1308); selecting one set of filter parameters corresponding to a strong filter from said plurality of sets of filter parameters using said selection parameter for the steep sloped quantization matrix as one of the selection criteria (1310); and applying a strong filtering process on said block of reconstructed image samples using said selected set of filter parameters (1312).
(Method 10) A method of decoding video using an adaptive filtering process comprising of: parsing a parameter from a coded header of a block unit to indicate a selection parameter for quantization matrix (600); selecting a quantization matrix from a plurality of quantization matrixes based on said parsed selection parameter (602); decoding a block of image samples utilizing an inverse quantization process with said selected quantization matrix (604); parsing a plurality of sets of filter parameters from a header of said compressed image (606); selecting one set of filter parameters from said plurality of sets of filter parameters using said selected quantization matrix as one of the selection criteria (608); and applying a filtering process on said block of reconstructed image samples using said selected set of filter parameters (610).
(Method 11) The method of encoding or decoding video using an adaptive filtering process according to Method 1, 2, 3, 5, 6, 7, 8, 9 or 10, wherein said filtering process includes a block edge filtering process.
(Method 12) The method of encoding or decoding video using an adaptive filtering process according to Method 1, 2, 3, 5, 6, 7, 8, 9 or 10, wherein said filtering process includes a pixel filtering process.
(Apparatus 13) An apparatus for encoding video using an adaptive filtering process comprising of: selection unit operable to select a transform unit size from a plurality of pre-determined transform unit sizes (100); transform unit operable to perform a transform process on a block of residuals based on said selected transform size (102); encoding unit operable to encode said block of transformed coefficients into a compressed video stream involving a quantization process (104); inverse quantization unit operable to perform an inverse quantization process on said block of quantized transformed coefficients (106); inverse transform unit operable to perform an inverse transform process on said block of inverse quantized coefficients based on said selected transform size (108); reconstruction unit operable to reconstruct a block of image samples based on summing said block of inverse transformed residuals and a block of prediction samples (110); writing unit operable to write a plurality of sets of filter parameters into a header of a compressed image (112); selection unit operable to select one set of filter parameters from said plurality of sets of filter parameters using said selected transform unit size as one of the selection criteria (114); and filtering unit operable to apply a filtering process on said block of reconstructed image samples using said selected set of filter parameters (116).
(Apparatus 14) An apparatus for encoding video using an adaptive filtering process comprising of: selection unit operable to select a small transform unit size from a plurality of pre-determined transform unit sizes for a block of image samples containing an edge (1100); transform unit operable to perform a transform process on a block of residuals based on said selected small transform size (1102); encoding unit operable to encode said block of transformed coefficients into a compressed video stream involving a quantization process (1104); inverse quantization unit operable to perform an inverse quantization process on said block of quantized transformed coefficients (1106); inverse transform unit operable to perform an inverse transform process on said block of inverse quantized coefficients based on said selected small transform size (1108); reconstruction unit operable to reconstruct a block of image samples based on summing said block of inverse transformed residuals and a block of prediction samples (1110); writing unit operable to write a plurality of sets of filter parameters into a header of a compressed image (1112); selection unit operable to select one set of filter parameters that defines a strong filter from said plurality of sets of filter parameters using said selected small transform unit size as one of the selection criteria (1114); and filtering unit operable to apply a strong filtering process on said block of reconstructed image samples using said selected set of filter parameters (1116).
(Apparatus 15) An apparatus for decoding video using an adaptive filtering process comprising of: parsing unit operable to parse parameters from a coded header of a block unit to determine the size of a transform (200); selection unit operable to select a transform unit size from a plurality of pre-determined transform unit sizes based on said parsed transform size parameters (202); decoding unit operable to decode a block of quantized coefficients from a compressed video stream (204); inverse quantization unit operable to perform an inverse quantization process on said decoded block of quantized transformed coefficients (206); inverse transform unit operable to perform an inverse transform process on said block of inverse quantized coefficients based on said selected transform size (208); reconstruction unit operable to reconstruct a block of image samples based on summing said block of inverse transformed residuals and a block of prediction samples (210); parsing unit operable to parse a plurality of sets of filter parameters from a header of a compressed image (212); selection unit operable to select one set of filter parameters from said parsed plurality of sets of filter parameters using said selected transform unit size as one of the selection criteria (214); and filtering unit operable to apply a filtering process on said block of reconstructed image samples using said selected set of filter parameters (216).
(Apparatus 16) The apparatus for encoding or decoding video using an adaptive filtering process according to Apparatus 13, 14 or 15, wherein said selection unit that selects one set of filter parameters using said selected transform unit size may additionally use the block prediction mode as a selection criterion.
(Apparatus 17) An apparatus for encoding video using an adaptive filtering process comprising of: selection unit operable to select a spatial or temporal prediction mode (300); prediction unit operable to perform a prediction process to produce a block of prediction samples based on said selected prediction mode (302); subtraction unit operable to subtract a block of image samples with said block of prediction samples to produce a block of residual samples (304); encoding unit operable to encode said block of residual samples into a compressed image utilizing a transform process and a quantization process (306); reconstructing unit operable to reconstruct said block of residual samples utilizing an inverse transform process and an inverse quantization process (308); reconstructing unit operable to reconstruct a block of image samples based on summing said block of residuals and said block of prediction samples (310); writing unit operable to write a plurality of sets of filter parameters into a header of said compressed image (312); selection unit operable to select one set of filter parameters from said plurality of sets of filter parameters using said selected prediction mode as one of the selection criteria (314); and filtering unit operable to apply a filtering process on said block of reconstructed image samples using said selected set of filter parameters (316).
(Apparatus 18) An apparatus for encoding video using an adaptive filtering process comprising of: selection unit operable to select an intra prediction mode from a plurality of prediction modes inclusive of an inter prediction mode (1200); prediction unit operable to perform a prediction process to produce a block of prediction samples based on said selected intra prediction mode (1202); subtraction unit operable to subtract a block of image samples with said block of prediction samples to produce a block of residual samples (1204); encoding unit operable to encode said block of residual samples into a compressed image utilizing a transform process and a quantization process (1206); reconstructing unit operable to reconstruct said block of residual samples utilizing an inverse transform process and an inverse quantization process (1208); reconstructing unit operable to reconstruct a block of image samples based on summing said block of residuals and said block of prediction samples (1210); writing unit operable to write a plurality of sets of filter parameters into a header of said compressed image (1212); selection unit operable to select one set of filter parameters that represents a strong filter from said plurality of sets of filter parameters using said selected intra prediction mode as one of the selection criteria (1214); and filtering unit operable to apply a strong filtering process on said intra predicted block of reconstructed image samples using said selected set of filter parameters (1216).
(Apparatus 19) An apparatus for decoding video using an adaptive filtering process comprising of: parsing unit operable to parse a parameter from a coded header of a block unit to determine the block prediction mode (400); prediction unit operable to perform a prediction process to produce a block of prediction samples based on said parsed prediction mode parameter (402); decoding unit operable to decode a block of residual samples from a compressed image utilizing an inverse transform process and an inverse quantization process (404); reconstructing unit operable to reconstruct a block of image samples based on summing said block of residuals and said block of prediction samples (406); parsing unit operable to parse a plurality of sets of filter parameters from a header of said compressed image (408); selection unit operable to select one set of filter parameters from said plurality of sets of filter parameters using said parsed block prediction mode as one of the selection criteria (410); and filtering unit operable to apply a filtering process on said block of reconstructed image samples using said selected set of filter parameters (412).
(Apparatus 20) An apparatus for encoding video using an adaptive filtering process comprising of: selection unit operable to select a quantization matrix from a plurality of quantization matrixes (500); encoding unit operable to encode a block of image samples into a compressed image utilizing a quantization process with said selected quantization matrix (502); writing unit operable to write a parameter into a header of said coded block to indicate said selection for quantization matrix (504); decoding unit operable to decode said block of image samples utilizing an inverse quantization process with said selected quantization matrix (506); writing unit operable to write a plurality of sets of filter parameters into a header of said compressed image (508); selection unit operable to select one set of filter parameters from said plurality of sets of filter parameters using said selection parameter for quantization matrix as one of the selection criteria (510); and filtering unit operable to apply a filtering process on said block of reconstructed image samples using said selected set of filter parameters (512).
(Apparatus 21) An apparatus for encoding video using an adaptive filtering process comprising of: selection unit operable to select a steep sloped quantization matrix from a plurality of quantization matrixes inclusive of a gentle sloped quantization matrix (1300); encoding unit operable to encode a block of image samples into a compressed image utilizing a quantization process with said selected steep sloped quantization matrix (1302); writing unit operable to write a parameter into a header of said coded block to indicate said selection for quantization matrix (1304); decoding unit operable to decode said block of image samples utilizing an inverse quantization process with said selected steep sloped quantization matrix (1306); writing unit operable to write a plurality of sets of filter parameters into a header of said compressed image (1308); selection unit operable to select one set of filter parameters corresponding to a strong filter from said plurality of sets of filter parameters using said selection parameter for the steep sloped quantization matrix as one of the selection criteria (1310); and filtering unit operable to apply a strong filtering process on said block of reconstructed image samples using said selected set of filter parameters (1312).
(Apparatus 22) An apparatus for decoding video using an adaptive filtering process comprising of: parsing unit operable to parse a parameter from a coded header of a block unit to indicate a selection parameter for quantization matrix (600); selection unit operable to select a quantization matrix from a plurality of quantization matrixes based on said parsed selection parameter (602); decoding unit operable to decode a block of image samples utilizing an inverse quantization process with said selected quantization matrix (604); parsing unit operable to parse a plurality of sets of filter parameters from a header of said compressed image (606); selection unit operable to select one set of filter parameters from said plurality of sets of filter parameters using said selected quantization matrix as one of the selection criteria (608); and filtering unit operable to apply a filtering process on said block of reconstructed image samples using said selected set of filter parameters (610).
(Apparatus 23) The apparatus for encoding or decoding video using an adaptive filtering process according to Apparatus 13, 14, 15, 17, 18, 19, 20, 21 or 22, wherein said filtering unit includes a block edge filtering unit.
(Apparatus 24) The apparatus for encoding or decoding video using an adaptive filtering process according to Apparatus 13, 14, 15, 17, 18, 19, 20, 21 or 22, wherein said filtering unit includes a pixel filtering unit.
Embodiment 2
The processing described in each of embodiments can be simply implemented in an independent computer system, by recording, in a recording medium, a program for implementing the configurations of the moving picture coding method (image coding method) and the moving picture decoding method (image decoding method) described in each of embodiments. The recording medium may be any recording medium on which the program can be recorded, such as a magnetic disk, an optical disk, a magneto-optical disk, an IC card, or a semiconductor memory.
Hereinafter, applications of the moving picture coding method (image coding method) and the moving picture decoding method (image decoding method) described in each of embodiments, and systems using them, will be described. The system has a feature of having an image coding and decoding apparatus that includes an image coding apparatus using the image coding method and an image decoding apparatus using the image decoding method. Other configurations in the system can be changed as appropriate depending on the cases.
FIG. 14 illustrates an overall configuration of a content providing system ex100 for implementing content distribution services. The area for providing communication services is divided into cells of desired size, and base stations ex106, ex107, ex108, ex109, and ex110 which are fixed wireless stations are placed in each of the cells.
The content providing system ex100 is connected to devices, such as a computer ex111, a personal digital assistant (PDA) ex112, a camera ex113, a cellular phone ex114 and a game machine ex115, via the Internet ex101, an Internet service provider ex102, a telephone network ex104, as well as the base stations ex106 to ex110, respectively.
However, the configuration of the content providing system ex100 is not limited to the configuration shown in FIG. 14, and a combination in which any of the elements are connected is acceptable. In addition, each device may be directly connected to the telephone network ex104, rather than via the base stations ex106 to ex110 which are the fixed wireless stations. Furthermore, the devices may be interconnected to each other via a short distance wireless communication and others.
The camera ex113, such as a digital video camera, is capable of capturing video. A camera ex116, such as a digital video camera, is capable of capturing both still images and video. Furthermore, the cellular phone ex114 may be the one that meets any of the standards such as Global System for Mobile Communications (GSM) (registered trademark), Code Division Multiple Access (CDMA), Wideband-Code Division Multiple Access (W-CDMA), Long Term Evolution (LTE), and High Speed Packet Access (HSPA). Alternatively, the cellular phone ex114 may be a Personal Handyphone System (PHS).
In the content providing system ex100, a streaming server ex103 is connected to the camera ex113 and others via the telephone network ex104 and the base station ex109, which enables distribution of images of a live show and others. In such a distribution, a content (for example, video of a music live show) captured by the user using the camera ex113 is coded as described above in each of embodiments (i.e., the camera functions as the image coding apparatus according to an aspect of the present invention), and the coded content is transmitted to the streaming server ex103. On the other hand, the streaming server ex103 carries out stream distribution of the transmitted content data to the clients upon their requests. The clients include the computer ex111, the PDA ex112, the camera ex113, the cellular phone ex114, and the game machine ex115 that are capable of decoding the above-mentioned coded data. Each of the devices that have received the distributed data decodes and reproduces the coded data (i.e., functions as the image decoding apparatus according to an aspect of the present invention).
The captured data may be coded by the camera ex113 or the streaming server ex103 that transmits the data, or the coding processes may be shared between the camera ex113 and the streaming server ex103. Similarly, the distributed data may be decoded by the clients or the streaming server ex103, or the decoding processes may be shared between the clients and the streaming server ex103. Furthermore, the data of the still images and video captured by not only the camera ex113 but also the camera ex116 may be transmitted to the streaming server ex103 through the computer ex111. The coding processes may be performed by the camera ex116, the computer ex111, or the streaming server ex103, or shared among them.
Furthermore, the coding and decoding processes may be performed by an LSI ex500 generally included in each of the computer ex111 and the devices. The LSI ex500 may be configured of a single chip or a plurality of chips. Software for coding and decoding video may be integrated into some type of a recording medium (such as a CD-ROM, a flexible disk, and a hard disk) that is readable by the computer ex111 and others, and the coding and decoding processes may be performed using the software. Furthermore, when the cellular phone ex114 is equipped with a camera, the image data obtained by the camera may be transmitted. The video data is data coded by the LSI ex500 included in the cellular phone ex114.
Furthermore, the streaming server ex103 may be composed of servers and computers, and may decentralize data and process the decentralized data, record, or distribute data.
As described above, the clients may receive and reproduce the coded data in the content providing system ex100. In other words, the clients can receive and decode information transmitted by the user, and reproduce the decoded data in real time in the content providing system ex100, so that the user who does not have any particular right and equipment can implement personal broadcasting.
Aside from the example of the content providing system ex100, at least one of the moving picture coding apparatus (image coding apparatus) and the moving picture decoding apparatus (image decoding apparatus) described in each of embodiments may be implemented in a digital broadcasting system ex200 illustrated in FIG. 15. More specifically, a broadcast station ex201 communicates or transmits, via radio waves to a broadcast satellite ex202, multiplexed data obtained by multiplexing audio data and others onto video data. The video data is data coded by the moving picture coding method described in each of embodiments (i.e., data coded by the image coding apparatus according to an aspect of the present invention). Upon receipt of the multiplexed data, the broadcast satellite ex202 transmits radio waves for broadcasting. Then, a home-use antenna ex204 with a satellite broadcast reception function receives the radio waves. Next, a device such as a television (receiver) ex300 and a set top box (STB) ex217 decodes the received multiplexed data, and reproduces the decoded data (i.e., functions as the image decoding apparatus according to an aspect of the present invention).
Furthermore, a reader/recorder ex218 (i) reads and decodes the multiplexed data recorded on a recording medium ex215, such as a DVD and a BD, or (ii) codes video signals in the recording medium ex215, and in some cases, writes data obtained by multiplexing an audio signal on the coded data. The reader/recorder ex218 can include the moving picture decoding apparatus or the moving picture coding apparatus as shown in each of embodiments. In this case, the reproduced video signals are displayed on the monitor ex219, and can be reproduced by another device or system using the recording medium ex215 on which the multiplexed data is recorded. It is also possible to implement the moving picture decoding apparatus in the set top box ex217 connected to the cable ex203 for a cable television or to the antenna ex204 for satellite and/or terrestrial broadcasting, so as to display the video signals on the monitor ex219 of the television ex300. The moving picture decoding apparatus may be implemented not in the set top box but in the television ex300.
FIG. 16 illustrates the television (receiver) ex300 that uses the moving picture coding method and the moving picture decoding method described in each of embodiments. The television ex300 includes: a tuner ex301 that obtains or provides multiplexed data obtained by multiplexing audio data onto video data, through the antenna ex204 or the cable ex203, etc. that receives a broadcast; a modulation/demodulation unit ex302 that demodulates the received multiplexed data or modulates data into multiplexed data to be supplied outside; and a multiplexing/demultiplexing unit ex303 that demultiplexes the modulated multiplexed data into video data and audio data, or multiplexes video data and audio data coded by a signal processing unit ex306 into data.
The television ex300 further includes: a signal processing unit ex306 including an audio signal processing unit ex304 and a video signal processing unit ex305 that decode audio data and video data and code audio data and video data, respectively (which function as the image coding apparatus and the image decoding apparatus according to the aspects of the present invention); and an output unit ex309 including a speaker ex307 that provides the decoded audio signal, and a display unit ex308 that displays the decoded video signal, such as a display. Furthermore, the television ex300 includes an interface unit ex317 including an operation input unit ex312 that receives an input of a user operation. Furthermore, the television ex300 includes a control unit ex310 that controls overall each constituent element of the television ex300, and a power supply circuit unit ex311 that supplies power to each of the elements. Other than the operation input unit ex312, the interface unit ex317 may include: a bridge ex313 that is connected to an external device, such as the reader/recorder ex218; a slot unit ex314 for enabling attachment of the recording medium ex216, such as an SD card; a driver ex315 to be connected to an external recording medium, such as a hard disk; and a modem ex316 to be connected to a telephone network. Here, the recording medium ex216 can electrically record information using a non-volatile/volatile semiconductor memory element for storage. The constituent elements of the television ex300 are connected to each other through a synchronous bus.
First, the configuration in which the television ex300 decodes multiplexed data obtained from outside through the antenna ex204 and others and reproduces the decoded data will be described. In the television ex300, upon a user operation through a remote controller ex220 and others, the multiplexing/demultiplexing unit ex303 demultiplexes the multiplexed data demodulated by the modulation/demodulation unit ex302, under control of the control unit ex310 including a CPU. Furthermore, the audio signal processing unit ex304 decodes the demultiplexed audio data, and the video signal processing unit ex305 decodes the demultiplexed video data, using the decoding method described in each of embodiments, in the television ex300. The output unit ex309 provides the decoded video signal and audio signal outside, respectively. When the output unit ex309 provides the video signal and the audio signal, the signals may be temporarily stored in buffers ex318 and ex319, and others so that the signals are reproduced in synchronization with each other. Furthermore, the television ex300 may read multiplexed data not through a broadcast and others but from the recording media ex215 and ex216, such as a magnetic disk, an optical disk, and an SD card. Next, a configuration in which the television ex300 codes an audio signal and a video signal, and transmits the data outside or writes the data on a recording medium will be described. In the television ex300, upon a user operation through the remote controller ex220 and others, the audio signal processing unit ex304 codes an audio signal, and the video signal processing unit ex305 codes a video signal, under control of the control unit ex310 using the coding method described in each of embodiments. The multiplexing/demultiplexing unit ex303 multiplexes the coded video signal and audio signal, and provides the resulting signal outside. When the multiplexing/demultiplexing unit ex303 multiplexes the video signal and the audio signal, the signals may be temporarily stored in the buffers ex320 and ex321, and others so that the signals are reproduced in synchronization with each other. Here, the buffers ex318, ex319, ex320, and ex321 may be plural as illustrated, or at least one buffer may be shared in the television ex300. Furthermore, data may be stored in a buffer so that system overflow and underflow may be avoided between the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303, for example.
Furthermore, the television ex300 may include a configuration for receiving an AV input from a microphone or a camera other than the configuration for obtaining audio and video data from a broadcast or a recording medium, and may code the obtained data. Although the television ex300 can code, multiplex, and provide data outside in the description, it may be capable of only receiving, decoding, and providing data outside, and not of coding, multiplexing, and providing data outside.
Furthermore, when the reader/recorder ex218 reads or writes multiplexed data from or on a recording medium, one of the television ex300 and the reader/recorder ex218 may decode or code the multiplexed data, and the television ex300 and the reader/recorder ex218 may share the decoding or coding.
As an example, FIG. 17 illustrates a configuration of an information reproducing/recording unit ex400 when data is read or written from or on an optical disk. The information reproducing/recording unit ex400 includes constituent elements ex401, ex402, ex403, ex404, ex405, ex406, and ex407 to be described hereinafter. The optical head ex401 irradiates a laser spot onto a recording surface of the recording medium ex215 that is an optical disk to write information, and detects reflected light from the recording surface of the recording medium ex215 to read the information. The modulation recording unit ex402 electrically drives a semiconductor laser included in the optical head ex401, and modulates the laser light according to recorded data. The reproduction demodulating unit ex403 amplifies a reproduction signal obtained by electrically detecting the reflected light from the recording surface using a photo detector included in the optical head ex401, and demodulates the reproduction signal by separating a signal component recorded on the recording medium ex215 to reproduce the necessary information. The buffer ex404 temporarily holds the information to be recorded on the recording medium ex215 and the information reproduced from the recording medium ex215. The disk motor ex405 rotates the recording medium ex215. The servo control unit ex406 moves the optical head ex401 to a predetermined information track while controlling the rotation drive of the disk motor ex405 so as to follow the laser spot. The system control unit ex407 controls overall the information reproducing/recording unit ex400. The reading and writing processes can be implemented by the system control unit ex407 using various information stored in the buffer ex404 and generating and adding new information as necessary, and by the modulation recording unit ex402, the reproduction demodulating unit ex403, and the servo control unit ex406 that record and reproduce information through the optical head ex401 while being operated in a coordinated manner. The system control unit ex407 includes, for example, a microprocessor, and executes processing by causing a computer to execute a program for reading and writing.
Although the optical head ex401 irradiates a laser spot in the description, it may perform high-density recording using near field light.
FIG. 18 illustrates the recording medium ex215 that is the optical disk. On the recording surface of the recording medium ex215, guide grooves are spirally formed, and an information track ex230 records, in advance, address information indicating an absolute position on the disk according to change in a shape of the guide grooves. The address information includes information for determining positions of recording blocks ex231 that are a unit for recording data. Reproducing the information track ex230 and reading the address information in an apparatus that records and reproduces data can lead to determination of the positions of the recording blocks. Furthermore, the recording medium ex215 includes a data recording area ex233, an inner circumference area ex232, and an outer circumference area ex234. The data recording area ex233 is an area for use in recording the user data. The inner circumference area ex232 and the outer circumference area ex234 that are inside and outside of the data recording area ex233, respectively, are for specific use except for recording the user data. The information reproducing/recording unit ex400 reads and writes coded audio, coded video data, or multiplexed data obtained by multiplexing the coded audio and video data, from and on the data recording area ex233 of the recording medium ex215.
Although an optical disk having a layer, such as a DVD and a BD is described as an example in the description, the optical disk is not limited to such, and may be an optical disk having a multilayer structure and capable of being recorded on a part other than the surface. Furthermore, the optical disk may have a structure for multidimensional recording/reproduction, such as recording of information using light of colors with different wavelengths in the same portion of the optical disk and for recording information having different layers from various angles.
Furthermore, a car ex210 having an antenna ex205 can receive data from the satellite ex202 and others, and reproduce video on a display device such as a car navigation system ex211 set in the car ex210, in the digital broadcasting system ex200. Here, the car navigation system ex211 will have, for example, a configuration in which a GPS receiving unit is added to the configuration illustrated in FIG. 16. The same will be true for the configuration of the computer ex111, the cellular phone ex114, and others.
FIG. 19A illustrates the cellular phone ex114 that uses the moving picture coding method and the moving picture decoding method described in embodiments. The cellular phone ex114 includes: an antenna ex350 for transmitting and receiving radio waves through the base station ex110; a camera unit ex365 capable of capturing moving and still images; and a display unit ex358 such as a liquid crystal display for displaying the data such as decoded video captured by the camera unit ex365 or received by the antenna ex350. The cellular phone ex114 further includes: a main body unit including an operation key unit ex366; an audio output unit ex357 such as a speaker for output of audio; an audio input unit ex356 such as a microphone for input of audio; a memory unit ex367 for storing captured video or still pictures, recorded audio, coded or decoded data of the received video, the still pictures, e-mails, or others; and a slot unit ex364 that is an interface unit for a recording medium that stores data in the same manner as the memory unit ex367.
Next, an example of a configuration of the cellular phone ex114 will be described with reference to FIG. 19B. In the cellular phone ex114, a main control unit ex360 designed to control overall each unit of the main body including the display unit ex358 as well as the operation key unit ex366 is connected mutually, via a synchronous bus ex370, to a power supply circuit unit ex361, an operation input control unit ex362, a video signal processing unit ex355, a camera interface unit ex363, a liquid crystal display (LCD) control unit ex359, a modulation/demodulation unit ex352, a multiplexing/demultiplexing unit ex353, an audio signal processing unit ex354, the slot unit ex364, and the memory unit ex367.
When a call-end key or a power key is turned ON by a user's operation, the power supply circuit unit ex361 supplies the respective units with power from a battery pack so as to activate the cellular phone ex114.
In the cellular phone ex114, the audio signal processing unit ex354 converts the audio signals collected by the audio input unit ex356 in voice conversation mode into digital audio signals under the control of the main control unit ex360 including a CPU, ROM, and RAM. Then, the modulation/demodulation unit ex352 performs spread spectrum processing on the digital audio signals, and the transmitting and receiving unit ex351 performs digital-to-analog conversion and frequency conversion on the data, so as to transmit the resulting data via the antenna ex350. Also, in the cellular phone ex114, the transmitting and receiving unit ex351 amplifies the data received by the antenna ex350 in voice conversation mode and performs frequency conversion and the analog-to-digital conversion on the data. Then, the modulation/demodulation unit ex352 performs inverse spread spectrum processing on the data, and the audio signal processing unit ex354 converts it into analog audio signals, so as to output them via the audio output unit ex357.
Furthermore, when an e-mail in data communication mode is transmitted, text data of the e-mail inputted by operating the operation key unit ex366 and others of the main body is sent out to the main control unit ex360 via the operation input control unit ex362. The main control unit ex360 causes the modulation/demodulation unit ex352 to perform spread spectrum processing on the text data, and the transmitting and receiving unit ex351 performs the digital-to-analog conversion and the frequency conversion on the resulting data to transmit the data to the base station ex110 via the antenna ex350. When an e-mail is received, processing that is approximately inverse to the processing for transmitting an e-mail is performed on the received data, and the resulting data is provided to the display unit ex358.
When video, still images, or video and audio in data communication mode is or are transmitted, the video signal processing unit ex355 compresses and codes video signals supplied from the camera unit ex365 using the moving picture coding method shown in each of embodiments (i.e., functions as the image coding apparatus according to the aspect of the present invention), and transmits the coded video data to the multiplexing/demultiplexing unit ex353. In contrast, while the camera unit ex365 captures video, still images, and others, the audio signal processing unit ex354 codes audio signals collected by the audio input unit ex356, and transmits the coded audio data to the multiplexing/demultiplexing unit ex353.
The multiplexing/demultiplexing unit ex353 multiplexes the coded video data supplied from the video signal processing unit ex355 and the coded audio data supplied from the audio signal processing unit ex354, using a predetermined method. Then, the modulation/demodulation unit (modulation/demodulation circuit unit) ex352 performs spread spectrum processing on the multiplexed data, and the transmitting and receiving unit ex351 performs digital-to-analog conversion and frequency conversion on the data so as to transmit the resulting data via the antenna ex350.
When receiving data of a video file which is linked to a Web page and others in data communication mode or when receiving an e-mail with video and/or audio attached, in order to decode the multiplexed data received via the antenna ex350, the multiplexing/demultiplexing unit ex353 demultiplexes the multiplexed data into a video data bit stream and an audio data bit stream, and supplies the video signal processing unit ex355 with the coded video data and the audio signal processing unit ex354 with the coded audio data, through the synchronous bus ex370. The video signal processing unit ex355 decodes the video signal using a moving picture decoding method corresponding to the moving picture coding method shown in each of embodiments (i.e., functions as the image decoding apparatus according to the aspect of the present invention), and then the display unit ex358 displays, for instance, the video and still images included in the video file linked to the Web page via the LCD control unit ex359. Furthermore, the audio signal processing unit ex354 decodes the audio signal, and the audio output unit ex357 provides the audio.
Furthermore, similarly to the television ex300, a terminal such as the cellular phone ex114 probably has 3 types of implementation configurations including not only (i) a transmitting and receiving terminal including both a coding apparatus and a decoding apparatus, but also (ii) a transmitting terminal including only a coding apparatus and (iii) a receiving terminal including only a decoding apparatus. Although the digital broadcasting system ex200 receives and transmits the multiplexed data obtained by multiplexing audio data onto video data in the description, the multiplexed data may be data obtained by multiplexing not audio data but character data related to video onto video data, and may be not multiplexed data but video data itself.
As such, the moving picture coding method and the moving picture decoding method in each of embodiments can be used in any of the devices and systems described. Thus, the advantages described in each of embodiments can be obtained.
Furthermore, the present invention is not limited to embodiments, and various modifications and revisions are possible without departing from the scope of the present invention.
Embodiment 3
Video data can be generated by switching, as necessary, between (i) the moving picture coding method or the moving picture coding apparatus shown in each of embodiments and (ii) a moving picture coding method or a moving picture coding apparatus in conformity with a different standard, such as MPEG-2, MPEG-4 AVC, and VC-1.
Here, when a plurality of video data that conforms to the different standards is generated and is then decoded, the decoding methods need to be selected to conform to the different standards. However, since it cannot be detected to which standard each of the plurality of video data to be decoded conforms, there is a problem that an appropriate decoding method cannot be selected.
In order to solve the problem, multiplexed data obtained by multiplexing audio data and others onto video data has a structure including identification information indicating to which standard the video data conforms. The specific structure of the multiplexed data including the video data generated in the moving picture coding method and by the moving picture coding apparatus shown in each of embodiments will be hereinafter described. The multiplexed data is a digital stream in the MPEG-2 Transport Stream format.
FIG. 20 illustrates a structure of the multiplexed data. As illustrated in FIG. 20, the multiplexed data can be obtained by multiplexing at least one of a video stream, an audio stream, a presentation graphics stream (PG), and an interactive graphics stream (IG). The video stream represents primary video and secondary video of a movie, the audio stream represents a primary audio part and a secondary audio part to be mixed with the primary audio part, and the presentation graphics stream represents subtitles of the movie. Here, the primary video is normal video to be displayed on a screen, and the secondary video is video to be displayed on a smaller window in the primary video. Furthermore, the interactive graphics stream represents an interactive screen to be generated by arranging the GUI components on a screen. The video stream is coded in the moving picture coding method or by the moving picture coding apparatus shown in each of embodiments, or in a moving picture coding method or by a moving picture coding apparatus in conformity with a conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1. The audio stream is coded in accordance with a standard, such as Dolby-AC-3, Dolby Digital Plus, MLP, DTS, DTS-HD, and linear PCM.
Each stream included in the multiplexed data is identified by PID. For example, 0x1011 is allocated to the video stream to be used for video of a movie, 0x1100 to 0x111F are allocated to the audio streams, 0x1200 to 0x121F are allocated to the presentation graphics streams, 0x1400 to 0x141F are allocated to the interactive graphics streams, 0x1B00 to 0x1B1F are allocated to the video streams to be used for secondary video of the movie, and 0x1A00 to 0x1A1F are allocated to the audio streams to be used for the secondary video to be mixed with the primary audio.
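As an aside for implementers, the PID allocation above maps directly onto a simple classification routine. The following C sketch is illustrative only; the enum values and the function name are assumptions and are not part of this disclosure or of the MPEG-2 systems standard.

```c
#include <stdint.h>

/* Illustrative stream categories derived from the PID ranges listed above. */
typedef enum {
    STREAM_PRIMARY_VIDEO,
    STREAM_AUDIO,
    STREAM_PRESENTATION_GRAPHICS,
    STREAM_INTERACTIVE_GRAPHICS,
    STREAM_SECONDARY_VIDEO,
    STREAM_SECONDARY_AUDIO,
    STREAM_UNKNOWN
} stream_kind;

/* Classify an elementary stream by its PID using the allocation above. */
static stream_kind classify_pid(uint16_t pid)
{
    if (pid == 0x1011)                  return STREAM_PRIMARY_VIDEO;
    if (pid >= 0x1100 && pid <= 0x111F) return STREAM_AUDIO;
    if (pid >= 0x1200 && pid <= 0x121F) return STREAM_PRESENTATION_GRAPHICS;
    if (pid >= 0x1400 && pid <= 0x141F) return STREAM_INTERACTIVE_GRAPHICS;
    if (pid >= 0x1B00 && pid <= 0x1B1F) return STREAM_SECONDARY_VIDEO;
    if (pid >= 0x1A00 && pid <= 0x1A1F) return STREAM_SECONDARY_AUDIO;
    return STREAM_UNKNOWN;
}
```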
FIG. 21 schematically illustrates how data is multiplexed. First, a video stream ex235 composed of video frames and an audio stream ex238 composed of audio frames are transformed into a stream of PES packets ex236 and a stream of PES packets ex239, and further into TS packets ex237 and TS packets ex240, respectively. Similarly, data of a presentation graphics stream ex241 and data of an interactive graphics stream ex244 are transformed into a stream of PES packets ex242 and a stream of PES packets ex245, and further into TS packets ex243 and TS packets ex246, respectively. These TS packets are multiplexed into a stream to obtain multiplexed data ex247.
FIG. 22 illustrates how a video stream is stored in a stream of PES packets in more detail. The first bar in FIG. 22 shows a video frame stream in a video stream. The second bar shows the stream of PES packets. As indicated by arrows denoted as yy1, yy2, yy3, and yy4 in FIG. 22, the video stream is divided into pictures as I pictures, B pictures, and P pictures each of which is a video presentation unit, and the pictures are stored in a payload of each of the PES packets. Each of the PES packets has a PES header, and the PES header stores a Presentation Time-Stamp (PTS) indicating a display time of the picture, and a Decoding Time-Stamp (DTS) indicating a decoding time of the picture.
FIG. 23 illustrates a format of TS packets to be finally written on the multiplexed data. Each of the TS packets is a 188-byte fixed length packet including a 4-byte TS header having information, such as a PID for identifying a stream and a 184-byte TS payload for storing data. The PES packets are divided, and stored in the TS payloads, respectively. When a BD ROM is used, each of the TS packets is given a 4-byte TP_Extra_Header, thus resulting in 192-byte source packets. The source packets are written on the multiplexed data. The TP_Extra_Header stores information such as an Arrival_Time_Stamp (ATS). The ATS shows a transfer start time at which each of the TS packets is to be transferred to a PID filter. The source packets are arranged in the multiplexed data as shown at the bottom of FIG. 23. The numbers incrementing from the head of the multiplexed data are called source packet numbers (SPNs).
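For reference, the 192-byte source packet layout described above can be modeled as the following C structure; the type and field names are assumptions chosen only to mirror the byte counts given in the text.

```c
#include <stdint.h>

#define TP_EXTRA_HEADER_SIZE  4   /* carries the Arrival_Time_Stamp (ATS)   */
#define TS_HEADER_SIZE        4   /* carries the PID and other information  */
#define TS_PAYLOAD_SIZE     184   /* carries the divided PES packet data    */

/* One 192-byte source packet as written on the multiplexed data (BD ROM case). */
typedef struct {
    uint8_t tp_extra_header[TP_EXTRA_HEADER_SIZE];
    uint8_t ts_header[TS_HEADER_SIZE];
    uint8_t ts_payload[TS_PAYLOAD_SIZE];
} source_packet;  /* 4 + 188 = 192 bytes in total */
```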
Each of the TS packets included in the multiplexed data includes not only streams of audio, video, subtitles and others, but also a Program Association Table (PAT), a Program Map Table (PMT), and a Program Clock Reference (PCR). The PAT shows what a PID in a PMT used in the multiplexed data indicates, and a PID of the PAT itself is registered as zero. The PMT stores PIDs of the streams of video, audio, subtitles and others included in the multiplexed data, and attribute information of the streams corresponding to the PIDs. The PMT also has various descriptors relating to the multiplexed data. The descriptors have information such as copy control information showing whether copying of the multiplexed data is permitted or not. The PCR stores STC time information corresponding to an ATS showing when the PCR packet is transferred to a decoder, in order to achieve synchronization between an Arrival Time Clock (ATC) that is a time axis of ATSs, and a System Time Clock (STC) that is a time axis of PTSs and DTSs.
FIG. 24 illustrates the data structure of the PMT in detail. A PMT header is disposed at the top of the PMT. The PMT header describes the length of data included in the PMT and others. A plurality of descriptors relating to the multiplexed data is disposed after the PMT header. Information such as the copy control information is described in the descriptors. After the descriptors, a plurality of pieces of stream information relating to the streams included in the multiplexed data is disposed. Each piece of stream information includes stream descriptors each describing information, such as a stream type for identifying a compression codec of a stream, a stream PID, and stream attribute information (such as a frame rate or an aspect ratio). The stream descriptors are equal in number to the number of streams in the multiplexed data.
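For orientation only, the PMT contents described above can be sketched roughly as the data structure below; the structure and field names are assumptions, and an actual implementation would parse the MPEG-2 systems bitstream syntax field by field.

```c
#include <stdint.h>

/* One piece of stream information in the PMT. */
typedef struct {
    uint8_t  stream_type;   /* identifies the compression codec of the stream */
    uint16_t stream_pid;    /* PID of the video/audio/subtitle stream          */
    double   frame_rate;    /* example of stream attribute information         */
    double   aspect_ratio;  /* example of stream attribute information         */
} pmt_stream_info;

/* Simplified view of a PMT: header, descriptors, then per-stream entries. */
typedef struct {
    uint16_t        pmt_data_length; /* length of data included in the PMT */
    /* descriptors such as copy control information would be stored here   */
    pmt_stream_info streams[32];     /* one entry per stream in the mux    */
    int             stream_count;    /* equal to the number of streams     */
} pmt_table;
```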
When the multiplexed data is recorded on a recording medium and others, it is recorded together with multiplexed data information files.
Each of the multiplexed data information files is management information of the multiplexed data as shown in FIG. 25. The multiplexed data information files are in one to one correspondence with the multiplexed data, and each of the files includes multiplexed data information, stream attribute information, and an entry map.
As illustrated in FIG. 25, the multiplexed data information includes a system rate, a reproduction start time, and a reproduction end time. The system rate indicates the maximum transfer rate at which a system target decoder to be described later transfers the multiplexed data to a PID filter. The intervals of the ATSs included in the multiplexed data are set so as not to be higher than the system rate. The reproduction start time indicates a PTS in a video frame at the head of the multiplexed data. An interval of one frame is added to a PTS in a video frame at the end of the multiplexed data, and the PTS is set to the reproduction end time.
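As a small numerical illustration of the timing values above (assuming, as is usual for MPEG-2 systems, PTS values in 90 kHz clock ticks; the function name is an assumption):

```c
#include <stdint.h>

/* Reproduction end time = PTS of the video frame at the end of the
 * multiplexed data + one frame interval, with PTS in 90 kHz ticks. */
static uint64_t reproduction_end_time(uint64_t last_frame_pts, double frame_rate)
{
    uint64_t one_frame_interval = (uint64_t)(90000.0 / frame_rate + 0.5);
    return last_frame_pts + one_frame_interval;
}
```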
As shown in FIG. 26, a piece of attribute information is registered in the stream attribute information, for each PID of each stream included in the multiplexed data. Each piece of attribute information has different information depending on whether the corresponding stream is a video stream, an audio stream, a presentation graphics stream, or an interactive graphics stream. Each piece of video stream attribute information carries information including what kind of compression codec is used for compressing the video stream, and the resolution, aspect ratio and frame rate of the pieces of picture data that are included in the video stream. Each piece of audio stream attribute information carries information including what kind of compression codec is used for compressing the audio stream, how many channels are included in the audio stream, which language the audio stream supports, and how high the sampling frequency is. The video stream attribute information and the audio stream attribute information are used for initialization of a decoder before the player plays back the information.
In the present embodiment, the multiplexed data to be used is of a stream type included in the PMT. Furthermore, when the multiplexed data is recorded on a recording medium, the video stream attribute information included in the multiplexed data information is used. More specifically, the moving picture coding method or the moving picture coding apparatus described in each of embodiments includes a step or a unit for allocating unique information indicating video data generated by the moving picture coding method or the moving picture coding apparatus in each of embodiments, to the stream type included in the PMT or the video stream attribute information. With the configuration, the video data generated by the moving picture coding method or the moving picture coding apparatus described in each of embodiments can be distinguished from video data that conforms to another standard.
Furthermore, FIG. 27 illustrates steps of the moving picture decoding method according to the present embodiment. In Step exS100, the stream type included in the PMT or the video stream attribute information is obtained from the multiplexed data. Next, in Step exS101, it is determined whether or not the stream type or the video stream attribute information indicates that the multiplexed data is generated by the moving picture coding method or the moving picture coding apparatus in each of embodiments. When it is determined that the stream type or the video stream attribute information indicates that the multiplexed data is generated by the moving picture coding method or the moving picture coding apparatus in each of embodiments, in Step exS102, decoding is performed by the moving picture decoding method in each of embodiments. Furthermore, when the stream type or the video stream attribute information indicates conformance to the conventional standards, such as MPEG-2, MPEG-4 AVC, and VC-1, in Step exS103, decoding is performed by a moving picture decoding method in conformity with the conventional standards.
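In implementation terms, the decision of FIG. 27 amounts to a dispatch on the identification information. The sketch below is illustrative only; the identifiers and the two stub decoders are assumptions, not part of this disclosure.

```c
#include <stdbool.h>

/* Illustrative identification values carried in the stream type or the
 * video stream attribute information. */
enum { CODEC_MPEG2, CODEC_MPEG4_AVC, CODEC_VC1, CODEC_EMBODIMENT };

/* Stubs standing in for the two decoding paths (assumed names). */
static bool decode_by_embodiment_method(void)    { return true; }
static bool decode_by_conventional_method(int c) { (void)c; return true; }

/* Steps exS101 to exS103: choose the decoding method from the
 * identification information obtained in Step exS100. */
static bool decode_multiplexed_video(int stream_type)
{
    if (stream_type == CODEC_EMBODIMENT)
        return decode_by_embodiment_method();           /* Step exS102 */
    return decode_by_conventional_method(stream_type);  /* Step exS103 */
}
```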
As such, allocating a new unique value to the stream type or the video stream attribute information enables determination whether or not the moving picture decoding method or the moving picture decoding apparatus that is described in each of embodiments can perform decoding. Even when multiplexed data that conforms to a different standard is input, an appropriate decoding method or apparatus can be selected. Thus, it becomes possible to decode information without any error. Furthermore, the moving picture coding method or apparatus, or the moving picture decoding method or apparatus in the present embodiment can be used in the devices and systems described above.
Embodiment 4
Each of the moving picture coding method, the moving picture coding apparatus, the moving picture decoding method, and the moving picture decoding apparatus in each of embodiments is typically achieved in the form of an integrated circuit or a Large Scale Integrated (LSI) circuit. As an example of the LSI, FIG. 28 illustrates a configuration of the LSI ex500 that is made into one chip. The LSI ex500 includes elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509 to be described below, and the elements are connected to each other through a bus ex510. The power supply circuit unit ex505 is activated by supplying each of the elements with power when the power supply circuit unit ex505 is turned on.
For example, when coding is performed, the LSI ex500 receives an AV signal from a microphone ex117, a camera ex113, and others through an AV IO ex509 under control of a control unit ex501 including a CPU ex502, a memory controller ex503, a stream controller ex504, and a driving frequency control unit ex512. The received AV signal is temporarily stored in an external memory ex511, such as an SDRAM. Under control of the control unit ex501, the stored data is segmented into data portions according to the processing amount and speed to be transmitted to a signal processing unit ex507. Then, the signal processing unit ex507 codes an audio signal and/or a video signal. Here, the coding of the video signal is the coding described in each of embodiments. Furthermore, the signal processing unit ex507 sometimes multiplexes the coded audio data and the coded video data, and a stream IO ex506 provides the multiplexed data outside. The provided multiplexed data is transmitted to the base station ex107, or written on the recording media ex215. When data sets are multiplexed, the data should be temporarily stored in the buffer ex508 so that the data sets are synchronized with each other.
Although the memory ex511 is an element outside the LSI ex500, it may be included in the LSI ex500. The buffer ex508 is not limited to one buffer, but may be composed of buffers. Furthermore, the LSI ex500 may be made into one chip or a plurality of chips.
Furthermore, although the control unit ex501 includes the CPU ex502, the memory controller ex503, the stream controller ex504, and the driving frequency control unit ex512, the configuration of the control unit ex501 is not limited to such. For example, the signal processing unit ex507 may further include a CPU. Inclusion of another CPU in the signal processing unit ex507 can improve the processing speed. Furthermore, as another example, the CPU ex502 may serve as or be a part of the signal processing unit ex507, and, for example, may include an audio signal processing unit. In such a case, the control unit ex501 includes the signal processing unit ex507 or the CPU ex502 including a part of the signal processing unit ex507.
The name used here is LSI, but it may also be called IC, system LSI, super LSI, or ultra LSI depending on the degree of integration.
Moreover, ways to achieve integration are not limited to the LSI, and a special circuit or a general purpose processor and so forth can also achieve the integration. A Field Programmable Gate Array (FPGA) that can be programmed after manufacturing LSIs or a reconfigurable processor that allows re-configuration of the connection or configuration of an LSI can be used for the same purpose.
In the future, with advancement in semiconductor technology, a brand-new technology may replace LSI. The functional blocks can be integrated using such a technology. Application of biotechnology is one such possibility.
Embodiment 5
When video data generated in the moving picture coding method or by the moving picture coding apparatus described in each of embodiments is decoded, compared to when video data that conforms to a conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1 is decoded, the processing amount probably increases. Thus, the LSI ex500 needs to be set to a driving frequency higher than that of the CPU ex502 to be used when video data in conformity with the conventional standard is decoded. However, when the driving frequency is set higher, there is a problem that the power consumption increases.
In order to solve the problem, the moving picture decoding apparatus, such as the television ex300 and the LSI ex500 is configured to determine to which standard the video data conforms, and switch between the driving frequencies according to the determined standard. FIG. 29 illustrates a configuration ex800 in the present embodiment. A driving frequency switching unit ex803 sets a driving frequency to a higher driving frequency when video data is generated by the moving picture coding method or the moving picture coding apparatus described in each of embodiments. Then, the driving frequency switching unit ex803 instructs a decoding processing unit ex801 that executes the moving picture decoding method described in each of embodiments to decode the video data. When the video data conforms to the conventional standard, the driving frequency switching unit ex803 sets a driving frequency to a lower driving frequency than that of the video data generated by the moving picture coding method or the moving picture coding apparatus described in each of embodiments. Then, the driving frequency switching unit ex803 instructs the decoding processing unit ex802 that conforms to the conventional standard to decode the video data.
More specifically, the driving frequency switching unit ex803 includes the CPU ex502 and the driving frequency control unit ex512 in FIG. 28. Here, each of the decoding processing unit ex801 that executes the moving picture decoding method described in each of embodiments and the decoding processing unit ex802 that conforms to the conventional standard corresponds to the signal processing unit ex507 in FIG. 28. The CPU ex502 determines to which standard the video data conforms. Then, the driving frequency control unit ex512 determines a driving frequency based on a signal from the CPU ex502. Furthermore, the signal processing unit ex507 decodes the video data based on the signal from the CPU ex502. For example, the identification information described in Embodiment 3 is probably used for identifying the video data. The identification information is not limited to the one described in Embodiment 3 but may be any information as long as the information indicates to which standard the video data conforms. For example, when the standard to which the video data conforms can be determined based on an external signal for determining that the video data is used for a television or a disk, etc., the determination may be made based on such an external signal. Furthermore, the CPU ex502 selects a driving frequency based on, for example, a look-up table in which the standards of the video data are associated with the driving frequencies as shown in FIG. 31. The driving frequency can be selected by storing the look-up table in the buffer ex508 or in an internal memory of an LSI, and by referring to the look-up table by the CPU ex502.
FIG. 30 illustrates steps for executing a method in the present embodiment. First, in Step exS200, the signal processing unit ex507 obtains identification information from the multiplexed data. Next, in Step exS201, the CPU ex502 determines whether or not the video data is generated by the coding method and the coding apparatus described in each of embodiments, based on the identification information. When the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments, in Step exS202, the CPU ex502 transmits a signal for setting the driving frequency to a higher driving frequency to the driving frequency control unit ex512. Then, the driving frequency control unit ex512 sets the driving frequency to the higher driving frequency. On the other hand, when the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1, in Step exS203, the CPU ex502 transmits a signal for setting the driving frequency to a lower driving frequency to the driving frequency control unit ex512. Then, the driving frequency control unit ex512 sets the driving frequency to a lower driving frequency than that in the case where the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments.
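A minimal sketch of this look-up-table approach follows; the identifiers and frequency values are placeholders (the actual values of FIG. 31 are not reproduced here), and the function name is an assumption.

```c
#include <stddef.h>

/* Illustrative identification values; CODEC_EMBODIMENT marks video data
 * generated by the moving picture coding method of the embodiments. */
enum { CODEC_MPEG2, CODEC_MPEG4_AVC, CODEC_VC1, CODEC_EMBODIMENT };

/* Look-up table associating each standard with a driving frequency (MHz).
 * The numbers are placeholders, not values taken from FIG. 31. */
static const struct { int codec; int freq_mhz; } freq_table[] = {
    { CODEC_EMBODIMENT, 500 },  /* larger decoding load -> higher frequency */
    { CODEC_MPEG4_AVC,  350 },
    { CODEC_MPEG2,      350 },
    { CODEC_VC1,        350 },
};

/* Steps exS200 to exS203: select a driving frequency from the
 * identification information obtained from the multiplexed data. */
static int select_driving_frequency(int codec)
{
    for (size_t i = 0; i < sizeof(freq_table) / sizeof(freq_table[0]); i++)
        if (freq_table[i].codec == codec)
            return freq_table[i].freq_mhz;
    return 350;  /* default to the lower, conventional-standard frequency */
}
```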
Furthermore, along with the switching of the driving frequencies, the power conservation effect can be improved by changing the voltage to be applied to the LSI ex500 or an apparatus including the LSI ex500. For example, when the driving frequency is set lower, the voltage to be applied to the LSI ex500 or the apparatus including the LSI ex500 is probably set to a voltage lower than that in the case where the driving frequency is set higher.
Furthermore, when the processing amount for decoding is larger, the driving frequency may be set higher, and when the processing amount for decoding is smaller, the driving frequency may be set lower as the method for setting the driving frequency. Thus, the setting method is not limited to the ones described above. For example, when the processing amount for decoding video data in conformity with MPEG-4 AVC is larger than the processing amount for decoding video data generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments, the driving frequency is probably set in reverse order to the setting described above.
Furthermore, the method for setting the driving frequency is not limited to the method for setting the driving frequency lower. For example, when the identification information indicates that the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments, the voltage to be applied to the LSI ex500 or the apparatus including the LSI ex500 is probably set higher. When the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1, the voltage to be applied to the LSI ex500 or the apparatus including the LSI ex500 is probably set lower. As another example, when the identification information indicates that the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments, the driving of the CPU ex502 probably does not have to be suspended. When the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1, the driving of the CPU ex502 is probably suspended at a given time because the CPU ex502 has extra processing capacity. Even when the identification information indicates that the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments, in the case where the CPU ex502 has extra processing capacity, the driving of the CPU ex502 is probably suspended at a given time. In such a case, the suspending time is probably set shorter than that in the case where the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1.
Accordingly, the power conservation effect can be improved by switching between the driving frequencies in accordance with the standard to which the video data conforms. Furthermore, when the LSI ex500 or the apparatus including the LSI ex500 is driven using a battery, the battery life can be extended with the power conservation effect.
Embodiment 6
There are cases where a plurality of video data that conforms to different standards is provided to the devices and systems, such as a television and a mobile phone. In order to enable decoding of the plurality of video data that conforms to the different standards, the signal processing unit ex507 of the LSI ex500 needs to conform to the different standards. However, the problems of increase in the scale of the circuit of the LSI ex500 and increase in the cost arise with the individual use of the signal processing units ex507 that conform to the respective standards.
In order to solve the problem, what is conceived is a configuration in which the decoding processing unit for implementing the moving picture decoding method described in each of embodiments and the decoding processing unit that conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1 are partly shared. Ex900 in FIG. 32A shows an example of the configuration. For example, the moving picture decoding method described in each of embodiments and the moving picture decoding method that conforms to MPEG-4 AVC have, partly in common, the details of processing, such as entropy decoding, inverse quantization, deblocking filtering, and motion compensated prediction. The details of processing to be shared probably include use of a decoding processing unit ex902 that conforms to MPEG-4 AVC. In contrast, a dedicated decoding processing unit ex901 is probably used for other processing unique to an aspect of the present invention. Since the aspect of the present invention is characterized by the adaptive filtering process in particular, for example, the dedicated decoding processing unit ex901 is used for the adaptive filtering process. Otherwise, the decoding processing unit is probably shared for one of the entropy decoding, inverse quantization, deblocking filtering, and motion compensation, or all of the processing. The decoding processing unit for implementing the moving picture decoding method described in each of embodiments may be shared for the processing to be shared, and a dedicated decoding processing unit may be used for processing unique to that of MPEG-4 AVC.
Furthermore, ex1000 in FIG. 32B shows another example in which the processing is partly shared. This example uses a configuration including a dedicated decoding processing unit ex1001 that supports the processing unique to an aspect of the present invention, a dedicated decoding processing unit ex1002 that supports the processing unique to another conventional standard, and a decoding processing unit ex1003 that supports processing to be shared between the moving picture decoding method according to the aspect of the present invention and the conventional moving picture decoding method. Here, the dedicated decoding processing units ex1001 and ex1002 are not necessarily specialized for the processing according to the aspect of the present invention and the processing of the conventional standard, respectively, and may be the ones capable of implementing general processing. Furthermore, the configuration of the present embodiment can be implemented by the LSI ex500.
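One way to picture the ex1000-style split is the interface sketch below, in which the shared stage and the two dedicated stages are plugged in as function pointers; all names are illustrative assumptions and not part of this disclosure.

```c
#include <stdbool.h>

typedef bool (*stage_fn)(void *ctx);

/* The shared decoding processing unit plus one dedicated unit per method. */
typedef struct {
    stage_fn shared_processing;      /* e.g. entropy decoding, inverse
                                        quantization, motion compensation */
    stage_fn dedicated_embodiment;   /* processing unique to the aspect of
                                        the present invention             */
    stage_fn dedicated_conventional; /* processing unique to the
                                        conventional standard             */
} decoding_units;

/* Decode one block: run the shared stage, then the dedicated stage that
 * matches the standard to which the video data conforms. */
static bool decode_block(const decoding_units *u, void *ctx, bool is_embodiment)
{
    if (!u->shared_processing(ctx))
        return false;
    return is_embodiment ? u->dedicated_embodiment(ctx)
                         : u->dedicated_conventional(ctx);
}
```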
As such, reducing the scale of the circuit of an LSI and reducing the cost are possible by sharing the decoding processing unit for the processing to be shared between the moving picture decoding method according to the aspect of the present invention and the moving picture decoding method in conformity with the conventional standard.
The methods for encoding and decoding video according to the present invention have the advantage of improving the quality of video. For example, the methods are applicable to video cameras, mobile phones, and personal computers.
700 Subtractor
702 Transform
704 Quantization
706 Entropy Coding
708 Inverse Quantization
710 Inverse Transform
712 Adder
714 Filtering
716 Memory
718 Motion Estimation
720 Motion Compensation
722 Selector
724 Intra Prediction
726 Intra Prediction Direction Selection
800 Entropy Decoding
802 Inverse Quantization
804 Inverse Transform
806 Adder
808 Filtering
810 Selector
812, 814 Memory
816 Motion Compensation
818 Intra Prediction

Claims (8)

  1. A method of encoding video using an adaptive filtering process comprising:
    selecting a transform unit size from a plurality of pre-determined transform unit sizes;
    performing a transform process on a block of residuals based on said selected transform size;
    encoding said block of transformed coefficients into a compressed video stream involving a quantization process;
    performing an inverse quantization process on said block of quantized transformed coefficients;
    performing an inverse transform process on said block of inverse quantized coefficients based on said selected transform size;
    reconstructing a block of image samples based on summing said block of inverse transformed residuals and a block of prediction samples;
    writing a plurality of sets of filter parameters into a header of a compressed image;
    selecting one set of filter parameters from said plurality of sets of filter parameters using said selected transform unit size as one of the selection criteria; and
    applying a filtering process on said block of reconstructed image samples using said selected set of filter parameters.
  2. The method of encoding video using an adaptive filtering process according to Claim 1,
    wherein said process to select one set of filter parameters using said selected transform unit size may additionally use the block prediction mode as a selection criterion.
  3. The method of encoding video using an adaptive filtering process according to Claim 1,
    wherein said filtering process includes a block edge filtering process.
  4. The method of encoding video using an adaptive filtering process according to Claim 1,
    wherein said filtering process includes a pixel filtering process.
  5. A method of decoding video using an adaptive filtering process comprising:
    parsing parameters from a coded header of a block unit to determine the size of a transform;
    selecting a transform unit size from a plurality of pre-determined transform unit sizes based on said parsed transform size parameters;
    decoding a block of quantized coefficients from a compressed video stream;
    performing an inverse quantization process on said decoded block of quantized transformed coefficients;
    performing an inverse transform process on said block of inverse quantized coefficients based on said selected transform size;
    reconstructing a block of image samples based on summing said block of inverse transformed residuals and a block of prediction samples;
    parsing a plurality of sets of filter parameters from a header of a compressed image;
    selecting one set of filter parameters from said parsed plurality of sets of filter parameters using said selected transform unit size as one of the selection criteria; and
    applying a filtering process on said block of reconstructed image samples using said selected set of filter parameters.
  6. The method of decoding video using an adaptive filtering process according to Claim 5,
    wherein said process to select one set of filter parameters using said selected transform unit size may additionally use the block prediction mode as a selection criterion.
  7. The method of decoding video using an adaptive filtering process according to Claim 5,
    wherein said filtering process includes a block edge filtering process.
  8. The method of decoding video using an adaptive filtering process according to Claim 5,
    wherein said filtering process includes a pixel filtering process.
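Purely as an informal illustration of the selection step recited in Claims 1, 2, 5, and 6 (not a normative implementation; the structure, field names, and the linear search below are assumptions), one set of filter parameters could be chosen from the signalled sets using the selected transform unit size and, optionally, the block prediction mode:

```c
#include <stddef.h>
#include <stdint.h>

/* One set of filter parameters parsed from the header (contents assumed). */
typedef struct {
    int     transform_unit_size;  /* e.g. 4, 8, 16, 32                      */
    int     prediction_mode;      /* optional extra criterion (Claims 2, 6) */
    int16_t coeffs[8];            /* illustrative filter coefficients       */
} filter_param_set;

/* Select one set of filter parameters using the selected transform unit size
 * as one of the selection criteria (and, optionally, the block prediction
 * mode).  Returns NULL when no set matches. */
static const filter_param_set *
select_filter_params(const filter_param_set *sets, size_t n_sets,
                     int tu_size, int pred_mode, int use_pred_mode)
{
    for (size_t i = 0; i < n_sets; i++) {
        if (sets[i].transform_unit_size != tu_size)
            continue;
        if (use_pred_mode && sets[i].prediction_mode != pred_mode)
            continue;
        return &sets[i];
    }
    return NULL;
}
```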
PCT/JP2012/003091 2011-05-11 2012-05-11 Methods for encoding and decoding video using an adaptive filtering process Ceased WO2012153538A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161484869P 2011-05-11 2011-05-11
US61/484,869 2011-05-11

Publications (1)

Publication Number Publication Date
WO2012153538A1 true WO2012153538A1 (en) 2012-11-15

Family

ID=47139024

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/003091 Ceased WO2012153538A1 (en) 2011-05-11 2012-05-11 Methods for encoding and decoding video using an adaptive filtering process

Country Status (1)

Country Link
WO (1) WO2012153538A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9756327B2 (en) 2012-04-03 2017-09-05 Qualcomm Incorporated Quantization matrix and deblocking filter adjustments for video coding

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100061645A1 (en) * 2008-09-11 2010-03-11 On2 Technologies Inc. System and method for video encoding using adaptive loop filter
JP2011049740A (en) * 2009-08-26 2011-03-10 Sony Corp Image processing apparatus and method

Similar Documents

Publication Publication Date Title
CA2830036C (en) Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus and moving picture coding and decoding apparatus
CA2876567C (en) Image coding and decoding of slice boundaries wherein loop filtering of top and left slice boundaries is controlled by a boundary control flag
EP2736253B1 (en) Filtering method, moving image decoding method, moving image encoding method, moving image decoding apparatus, moving image encoding apparatus, and moving image encoding/decoding apparatus
JP6327435B2 (en) Image encoding method, image decoding method, image encoding device, and image decoding device
WO2012172779A1 (en) Method and apparatus for encoding and decoding video using intra prediction mode dependent adaptive quantization matrix
AU2017203193B2 (en) Filtering method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
EP2749027B1 (en) Methods and apparatuses for encoding and decoding video using updated buffer description
WO2012160797A1 (en) Methods and apparatuses for encoding and decoding video using inter-color-plane prediction
CA2825767C (en) Image coding method, image decoding method, image coding apparatus and image decoding apparatus
WO2013027407A1 (en) Methods and apparatuses for encoding, extracting and decoding video using tiles coding scheme
EP2680582A1 (en) Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device
EP2739053A1 (en) Video encoding method, video decoding method, video encoding apparatus, video decoding apparatus, and video encoding/decoding apparatus
CA2841107C (en) Image encoding and decoding using context adaptive binary arithmetic coding with a bypass mode
WO2012090504A1 (en) Methods and apparatuses for coding and decoding video stream
WO2012105265A1 (en) Systems and methods for encoding and decoding video which support compatibility functionality to legacy video players
JPWO2011129090A1 (en) Coding distortion removing method, coding method, decoding method, coding distortion removing apparatus, coding apparatus, and decoding apparatus
WO2012095317A1 (en) Efficient clipping
WO2013005386A1 (en) Method and apparatus for encoding and decoding video using adaptive quantization matrix for square and rectangular transform units
WO2013001796A1 (en) Methods and apparatuses for encoding video using adaptive memory management scheme for reference pictures
WO2012124347A1 (en) Methods and apparatuses for encoding and decoding video using reserved nal unit type values of avc standard
WO2012153538A1 (en) Methods for encoding and decoding video using an adaptive filtering process
WO2013021601A1 (en) Methods and apparatuses for encoding and decoding video using adaptive reference pictures memory management scheme to support temporal scalability
WO2012035766A1 (en) Image decoding method, image encoding method, image decoding device and image encoding device
WO2012095930A1 (en) Image encoding method, image decoding method, image encoding device, and image decoding device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12782640

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12782640

Country of ref document: EP

Kind code of ref document: A1