
WO2013005967A2 - Method for encoding image data and method for decoding image data - Google Patents

Method for encoding image data and method for decoding image data

Info

Publication number
WO2013005967A2
Authority
WO
WIPO (PCT)
Prior art keywords
prediction
region
prediction region
mode
intra
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2012/005252
Other languages
English (en)
Korean (ko)
Other versions
WO2013005967A3 (fr)
Inventor
김휘용
이진호
임성창
최진수
김진웅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electronics and Telecommunications Research Institute ETRI
Original Assignee
Electronics and Telecommunications Research Institute ETRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electronics and Telecommunications Research Institute ETRI filed Critical Electronics and Telecommunications Research Institute ETRI
Priority to US14/130,716 priority Critical patent/US20140133559A1/en
Priority claimed from KR1020120071616A external-priority patent/KR102187246B1/ko
Publication of WO2013005967A2 publication Critical patent/WO2013005967A2/fr
Publication of WO2013005967A3 publication Critical patent/WO2013005967A3/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H04N 19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/119 - Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N 19/15 - Data rate or code amount at the encoder output by monitoring actual compressed data size at the memory before deciding storage at the transmission buffer
    • H04N 19/176 - The coding unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
    • H04N 19/593 - Predictive coding involving spatial prediction techniques

Definitions

  • the present invention relates to image information compression technology, and more particularly, to an image region segmentation method and apparatus that are dependent on an intra prediction mode.
  • Image compression techniques include an inter prediction technique that predicts pixel values included in the current picture from temporally preceding and/or following pictures, an intra prediction technique that predicts pixel values included in the current picture using pixel information within the current picture, a weighted prediction technique that prevents deterioration of image quality due to lighting changes, and an entropy encoding technique that assigns short codes to symbols with high appearance frequency and long codes to symbols with low appearance frequency.
  • Among the compression techniques for image information, various intra prediction modes are used for intra prediction.
  • Depending on the prediction mode, pixel values of the current block may be predicted using different reference samples. Therefore, a method of obtaining optimal compression efficiency by adaptively changing the prediction method according to the prediction mode, that is, according to the reference samples used, may be considered.
  • the technical problem according to the present invention is to provide a method of increasing the efficiency of intra coding and reducing the complexity of the image information processing process.
  • Another technical problem according to the present invention is to provide a method for dividing a prediction unit (PU) and a transform unit (TU) differently according to an intra prediction mode.
  • Another technical problem according to the present invention is to provide a method for solving the problem of complexity caused by splitting the prediction domain and the transform domain regardless of the prediction mode.
  • Another technical problem according to the present invention is to provide a method of determining a prediction mode for each prediction region divided according to a prediction mode.
  • Another technical problem according to the present invention is to provide a prediction region and transform region partitioning method that improves the performance of intra prediction while reducing the complexity required for determining an optimal prediction region partition structure and an optimal transform region partition structure.
  • An embodiment of the present invention provides a method of decoding image information, comprising: dividing a prediction region into a first prediction region and a second prediction region according to an intra prediction mode; performing intra prediction and reconstruction of the first prediction region; and performing prediction and reconstruction of the second prediction region, wherein, in the predicting and reconstructing of the second prediction region, intra prediction of the second prediction region may be performed with reference to a reference sample for the first prediction region or a predetermined sample in the reconstructed first prediction region.
  • The information about the intra prediction mode may be received from an encoder, and in the prediction region dividing step, an area in which a residual signal equal to or greater than a reference value exists when the intra prediction mode is used may be set as the second prediction region.
  • The information about the intra prediction mode may be received from an encoder, and the second prediction region may be the area located farthest in the current block from the reference samples of the intra prediction mode.
  • the information about the intra prediction mode may be received from an encoder, and the first prediction region and the second prediction region may be preset for each intra prediction mode.
  • In the predicting and reconstructing of the second prediction region, a residual signal may be generated based on the transform coefficients of the transform region corresponding to the second prediction region, and a reconstruction signal for the second prediction region may be generated by combining the generated residual signal with the prediction result for the second prediction region.
  • The second prediction region may be further divided into a plurality of prediction regions, and for a prediction region divided from the second prediction region, intra prediction may be performed by referring to a reference sample for the first prediction region or a predetermined sample in another already reconstructed prediction region.
  • The prediction mode applied to the second prediction region may be selected from the prediction mode applied to the first prediction region and prediction modes having angles similar to that of the prediction mode applied to the first prediction region.
  • Intra prediction for the second prediction region may be performed with reference to samples in the reconstructed first prediction region.
  • a prediction mode applied to the second prediction region may be selected from candidate prediction modes for the first prediction region.
  • The prediction mode applied to the second prediction region may be selected from the prediction mode applied to a block adjacent to the second prediction region and prediction modes having angles similar to the prediction mode applied to the block adjacent to the second prediction region.
  • Another embodiment of the present invention provides a method of encoding image information, comprising: dividing a prediction region into a first prediction region and a second prediction region according to an intra prediction mode; performing intra prediction and reconstruction of the first prediction region; performing prediction and reconstruction of the second prediction region; and transmitting information about the prediction mode of the current block, wherein, in the predicting and reconstructing of the second prediction region, intra prediction of the second prediction region may be performed by referring to a reference sample for the first prediction region or a predetermined sample in the reconstructed first prediction region.
  • An area in which a residual signal equal to or greater than a reference value exists may be set as the second prediction region.
  • The second prediction region may be the area located farthest in the current block from the reference samples of the intra prediction mode.
  • In the predicting and reconstructing of the second prediction region, a residual signal may be generated based on the transform coefficients of the transform region corresponding to the second prediction region, and a reconstruction signal for the second prediction region may be generated by combining the generated residual signal with the prediction result for the second prediction region.
  • The transform region may be a region having the same size as the first prediction region and the second prediction region, or a region obtained by dividing the first prediction region or the second prediction region into square or non-square shapes.
  • The second prediction region may be further divided into a plurality of prediction regions, and for a prediction region divided from the second prediction region, intra prediction may be performed by referring to a reference sample for the first prediction region or a predetermined sample in another reconstructed prediction region.
  • a prediction mode applied to the second prediction region may be selected from a prediction mode applied to the first prediction region and a prediction mode having an angle similar to that of the prediction mode applied to the first prediction region.
  • The prediction mode applied to the second prediction region may be selected from the prediction mode of a block adjacent to the second prediction region and prediction modes having angles similar to the prediction mode of the block adjacent to the second prediction region.
  • the efficiency of intra coding can be improved and the complexity of the image information processing process can be reduced.
  • prediction and transformation may be performed based on an optimal prediction region partitioning structure and an optimal transform region partitioning structure by dividing the prediction region and the transform region differently according to the intra prediction mode.
  • the performance of intra prediction can be improved by applying an optimal prediction mode to the prediction region and the transform region segmented according to the intra prediction mode.
  • FIG. 1 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment.
  • FIG. 2 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment.
  • FIG. 3 is a diagram illustrating each mode of intra prediction.
  • FIG. 4 schematically illustrates a sample that the current block may refer to in the intra prediction mode.
  • FIG. 5 is a diagram schematically illustrating a distribution of residual signals according to directional prediction.
  • FIG. 6 schematically illustrates an example of a first prediction region and a second prediction region predetermined according to an intra prediction mode in a system to which the present invention is applied.
  • FIG. 7 is a flowchart schematically illustrating an example of intra prediction in a system to which the present invention is applied.
  • FIG. 8 is a flowchart schematically illustrating an operation of an encoder for performing the above-described intra prediction method in a system to which the present invention is applied.
  • FIG. 9 is a flowchart schematically illustrating an operation of a decoder for performing the above-described intra prediction method in a system to which the present invention is applied.
  • FIG. 10 schematically illustrates an example of a partition structure of a coding region CU, a prediction region PU, and a transform region TU.
  • FIG. 11 schematically illustrates an example of dividing a current block (coding target block) into two prediction regions according to a prediction mode in a system to which the present invention is applied.
  • FIG. 12 schematically illustrates another example of dividing a current block (block to be encoded) into two prediction regions according to a prediction mode in a system to which the present invention is applied.
  • FIG. 13 schematically illustrates examples of dividing a current block (coding target block) into three prediction regions according to an intra prediction mode in a system to which the present invention is applied.
  • FIG. 14 schematically illustrates examples of transforming a non-square transform region in a system to which the present invention is applied.
  • FIG. 15 schematically illustrates examples of performing prediction on a subordinate prediction region using reconstructed samples of a priority prediction region, based on the correlation between the priority prediction region and the subordinate prediction region in the current block, in a system to which the present invention is applied.
  • FIG. 16 schematically illustrates examples of determining a prediction mode of a current prediction region in a system to which the present invention is applied.
  • first and second may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
  • the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
  • Each component shown in the embodiments of the present invention is shown independently to represent different characteristic functions, and this does not mean that each component is implemented as separate hardware or as a separate software unit.
  • Each component is listed as a separate component for convenience of description; at least two of the components may be combined into one component, or one component may be divided into a plurality of components, each performing part of the function.
  • Integrated and separate embodiments of the components are also included within the scope of the present invention without departing from the spirit of the invention.
  • the components may not be essential components for performing essential functions in the present invention, but may be optional components for improving performance.
  • The present invention can be implemented including only the components essential for realizing the essence of the present invention, excluding the components used merely for improving performance, and a structure including only the essential components, excluding the optional components used for improving performance, is also included in the scope of the present invention.
  • FIG. 1 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment.
  • The image encoding apparatus 100 may include a motion predictor 110, a motion compensator 115, an intra predictor 120, a subtractor 125, a transformer 130, a quantizer 135, an entropy encoding unit 140, an inverse quantization unit 145, an inverse transform unit 150, an adder 155, a filter unit 160, and a reference image buffer 165.
  • the image encoding apparatus 100 may encode the input image in an intra mode or an inter mode and output a bit stream.
  • In the intra mode, the prediction may be performed by the intra predictor 120.
  • In the inter mode, the prediction may be performed by the motion predictor 110, the motion compensator 115, and the like.
  • the image encoding apparatus 100 may generate a prediction block for an input block of an input image and then encode a difference between the input block and the prediction block.
  • the intra predictor 120 may generate a predictive block by performing spatial prediction using pixel values of blocks that are already encoded around the current block.
  • the motion predictor 110 may obtain a motion vector by finding a region that best matches the input block in the reference image stored in the reference image buffer 165 during the motion prediction process.
  • the motion compensator 115 may generate a prediction block by performing motion compensation using the motion vector and the reference image stored in the reference image buffer 165.
  • the subtractor 125 may generate a residual block by the difference between the input block and the generated prediction block.
  • the transformer 130 may perform a transform on the residual block and output a transform coefficient.
  • The residual signal may mean the difference between the original signal and the prediction signal, a signal obtained by transforming the difference between the original signal and the prediction signal, or a signal obtained by transforming and quantizing that difference.
  • the residual signal may be referred to as a residual block in block units.
  • the quantization unit 135 may output a quantized coefficient obtained by quantizing the transform coefficients according to the quantization parameter.
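  • As an informal illustration of the quantization step described above (mapping transform coefficients to coarser integer levels controlled by a quantization parameter), the following sketch may help; the simple step-size rule and the function names are assumptions for illustration only, not the scaling actually used by a video codec.

```python
def quantize(coeffs, qp):
    """Toy scalar quantization: a larger qp gives a larger step and coarser levels.
    The step-size rule below is an assumption, not a standard's scaling table."""
    step = 2 ** (qp / 6.0)
    return [int(round(c / step)) for c in coeffs]

def dequantize(levels, qp):
    """Decoder-side inverse of the toy quantizer (used before the inverse transform)."""
    step = 2 ** (qp / 6.0)
    return [level * step for level in levels]

# Example: higher qp -> fewer distinct levels -> larger reconstruction error.
coeffs = [52.0, -13.4, 3.2, 0.7]
levels = quantize(coeffs, qp=24)          # [3, -1, 0, 0]
approx = dequantize(levels, qp=24)
```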
  • The entropy encoder 140 may entropy-encode symbols corresponding to the values calculated by the quantizer 135 or to encoding parameter values calculated in the encoding process, according to a probability distribution, and output a bit stream.
  • the compression performance of image encoding may be improved by assigning a small number of bits to a symbol having a high occurrence probability and a large number of bits to a symbol having a low occurrence probability.
  • Encoding methods such as context-adaptive variable length coding (CAVLC) and context-adaptive binary arithmetic coding (CABAC) may be used for entropy encoding.
  • the entropy encoder 140 may perform entropy encoding using a variable length coding (VLC) table.
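  • As a minimal sketch of the principle stated above (shorter codewords for more frequent symbols), the toy prefix-free table below is an assumption for illustration; it is not the CAVLC/CABAC coding actually used for entropy encoding.

```python
def build_vlc_table(symbol_counts):
    """Assign unary-style, prefix-free codewords so that more frequent symbols
    receive shorter codes: '1', '01', '001', ... (a toy VLC, not a real table)."""
    ordered = sorted(symbol_counts, key=symbol_counts.get, reverse=True)
    return {sym: "0" * i + "1" for i, sym in enumerate(ordered)}

counts = {"a": 50, "b": 30, "c": 15, "d": 5}
table = build_vlc_table(counts)            # {'a': '1', 'b': '01', 'c': '001', 'd': '0001'}
bitstream = "".join(table[s] for s in "aabacad")   # the frequent 'a' costs 1 bit each
```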
  • The entropy encoder 140 may derive a binarization method for a target symbol and a probability model for a target symbol/bin, and then perform entropy encoding using the derived binarization method or probability model.
  • the quantized coefficient may be inversely quantized by the inverse quantizer 145 and inversely transformed by the inverse transformer 150.
  • the inverse quantized and inversely transformed coefficients are generated as reconstructed residual blocks, and the adder 155 may generate reconstructed blocks by using the predicted blocks and the reconstructed residual blocks.
  • the filter unit 160 may apply at least one or more of a deblocking filter, a sample adaptive offset (SAO), and an adaptive loop filter (ALF) to the reconstructed block or the reconstructed picture.
  • the reconstruction block that has passed through the filter unit 160 may be stored in the reference image buffer 165.
  • FIG. 2 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment.
  • the image decoding apparatus 200 may include an entropy decoder 210, an inverse quantizer 220, an inverse transformer 230, an intra predictor 240, a motion compensator 250, and a filter. 260, a reference image buffer 270, and an adder 280.
  • The image decoding apparatus 200 may receive a bit stream output from the encoder, perform decoding in an intra mode or an inter mode, and output a reconstructed image.
  • In the intra mode, the prediction may be performed by the intra predictor 240.
  • In the inter mode, the prediction may be performed by the motion compensator 250.
  • The image decoding apparatus 200 may obtain a reconstructed residual block from the received bit stream, generate a prediction block, and then add the reconstructed residual block and the prediction block to generate a reconstructed block.
  • the entropy decoder 210 may entropy decode the input bit stream according to a probability distribution to generate symbols in the form of quantized coefficients.
  • the entropy decoding method may be performed corresponding to the entropy encoding method described above.
  • the quantized coefficients are inversely quantized by the inverse quantizer 220 and inversely transformed by the inverse transformer 230, and a residual block restored as a result of inverse quantization / inverse transformation of the quantized coefficients may be generated.
  • the intra predictor 240 may generate a prediction block by performing spatial prediction using pixel values of blocks that are already decoded around the current block.
  • the motion compensator 250 may generate a prediction block by performing motion compensation using the motion vector and the reference image stored in the reference image buffer 270.
  • the adder 280 may generate a reconstruction block based on the reconstructed residual block and the prediction block.
  • the filter unit 260 may apply at least one or more of the deblocking filter, SAO, and ALF to the reconstruction block.
  • The filter unit 260 outputs the reconstructed image.
  • the reconstructed picture may be stored in the reference picture buffer 270 to be used for inter prediction.
  • directional prediction or non-directional prediction is performed using one or more reconstructed reference samples.
  • FIG. 3 is a diagram illustrating each mode of intra prediction. Referring to FIG. 3, it can be seen that, except for the DC mode (Intra_DC), the planar mode (Intra_Planar), and the mode for applying the luma mode to chroma (Intra_FromLuma), modes 0, 1, and 3 to 33 are defined according to prediction directions.
  • the number of modes that can be used for prediction of the current block among each prediction mode illustrated in FIG. 3 may be determined according to the size of the current block.
  • Table 1 shows an example of the number of prediction modes available according to the size of the current block.
  • The size of the prediction block may be not only a square such as 2x2, 4x4, 8x8, 16x16, 32x32, or 64x64, but also a rectangle such as 2x8, 4x8, 2x16, 4x16, or 8x16.
  • the size of the prediction target block may be at least one of a coding unit (CU), a prediction unit (PU), and a transform unit (TU).
  • information of a reference sample may be used according to each mode as shown in FIG. 3.
  • FIG. 4 schematically illustrates a sample that the current block may refer to in the intra prediction mode.
  • Among the reconstructed reference samples around the current block 410, that is, the upper_left reference sample 420, the above reference sample 430, the above_right reference sample 440, the left reference sample 450, and the lower_left reference sample 460, the reference samples selected according to the prediction mode may be used for prediction of the current block 410.
  • an upper reference sample 430 or an upper_right reference sample 440 may be used.
  • a left reference sample 450 or a lower_left reference sample 460 may be used.
  • The pixel value on which the prediction is based (the reference sample value) may be used directly as the prediction value according to the prediction direction, that is, according to the prediction mode, or the average of the prediction-based pixel values may be used as the prediction value.
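  • The reference-sample usage described above can be sketched for three representative cases (a vertical mode copying the above samples, a horizontal mode copying the left samples, and the DC mode averaging the reference samples); the mode names and array layout below are assumptions for illustration and do not follow the exact mode numbering of FIG. 3.

```python
import numpy as np

def intra_predict(above, left, mode, n):
    """Toy intra prediction of an n x n block from reconstructed reference samples.
    'above' holds n samples from the row above the block, 'left' holds n samples
    from the column to its left; 'mode' is an illustrative name, not a mode index."""
    pred = np.zeros((n, n))
    if mode == "vertical":              # copy the above reference row downwards
        pred[:] = np.asarray(above, dtype=float)[np.newaxis, :]
    elif mode == "horizontal":          # copy the left reference column rightwards
        pred[:] = np.asarray(left, dtype=float)[:, np.newaxis]
    elif mode == "dc":                  # use the average of the reference samples
        pred[:] = (np.mean(above) + np.mean(left)) / 2.0
    return pred

# Example: a 4x4 block predicted vertically from its above neighbours.
p = intra_predict(above=[10, 12, 14, 16], left=[11, 11, 11, 11], mode="vertical", n=4)
```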
  • a residual quadtree (RQT) method for signaling the partition structure of the transform region may be used.
  • the accuracy of the prediction decreases as the distance from the reference sample increases.
  • FIG. 5 is a diagram schematically illustrating a distribution of residual signals according to directional prediction.
  • FIG. 5A schematically shows the residual signal distribution when diagonal prediction from the upper left side to the lower right direction is performed.
  • The intra prediction mode is applied to the current block 500, which is the prediction target block, and prediction 530 in the lower-right direction is performed using the reference samples 510 and 520.
  • The residual signal 540 is largely distributed in the lower-right area, far from the reference samples.
  • FIG. 5B schematically illustrates the residual signal distribution when prediction is performed in the vertical direction.
  • The intra prediction mode is applied to the current block 550, which is the prediction target block, and prediction 580 in the vertical direction is performed using the upper reference sample 560 among the reference samples 560 and 570.
  • The residual signal 540 is distributed away from the upper reference sample 560.
  • the magnitude of the residual signal increases as the distance from the reference sample increases.
  • the distribution of the residual signal is distributed differently according to the prediction mode.
  • The accuracy of the prediction decreases as the distance from the reference sample increases, and accordingly the residual signal becomes larger. Therefore, for the region where the residual signal is expected to be large according to the direction of intra prediction, the prediction efficiency can be further improved by using, as reference samples, reconstructed samples that are closer to that region.
  • an area in which the residual signal is widely distributed may be determined according to the intra prediction mode.
  • the signaling overhead required for separately encoding the information of the region division can be reduced, and the complexity required to determine the structure of the region division can be minimized.
  • An area in which a large residual signal is estimated to be distributed is referred to as a 'second prediction region' for convenience of description, and an area other than the second prediction region, that is, an area of the current block in which the residual signal is not distributed much, is referred to as a 'first prediction region'.
  • an area in which a residual signal having a magnitude greater than or equal to a predetermined reference value is distributed in the current block may be set as a second prediction area.
  • A predetermined area may be set as the second prediction region according to each prediction mode. For example, the region located farthest in the current block from the samples referred to in each prediction mode may be set as the second prediction region.
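  • As a hedged sketch of such a predetermined mapping, the helper below marks the part of the block farthest from the referenced samples as the second prediction region for a few illustrative mode classes; the mode classes, the half-block split, and the function name are assumptions, since the actual mapping is whatever the encoder and decoder predetermine per prediction mode.

```python
import numpy as np

def second_region_mask(block_size, mode_class):
    """Return a boolean mask (True = second prediction region) using a toy rule:
    the area farthest from the samples referenced by the mode class.
    'vertical', 'horizontal', and 'diagonal_down_right' are illustrative classes."""
    n = block_size
    mask = np.zeros((n, n), dtype=bool)
    if mode_class == "vertical":               # references above -> bottom half
        mask[n // 2:, :] = True
    elif mode_class == "horizontal":           # references left -> right half
        mask[:, n // 2:] = True
    elif mode_class == "diagonal_down_right":  # references above-left -> bottom-right
        mask[n // 2:, n // 2:] = True
    return mask

m = second_region_mask(8, "diagonal_down_right")   # bottom-right 4x4 quadrant is True
```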
  • the second prediction region may have the same size as the transform region, as shown in the following drawings.
  • FIG. 6 schematically illustrates an example of a first prediction region and a second prediction region predetermined according to an intra prediction mode in a system to which the present invention is applied.
  • the transform block is set to be one-fourth the size of the prediction target block (the current block).
  • intra prediction is performed on the first prediction region 610 of the current block 600 by using a reconstructed reference sample 605 around the current block 600.
  • the prediction mode 615 in the lower right direction is applied to the first prediction region 610 of the current block 600.
  • the sample 625 of the reconstructed first prediction region is used for the prediction of the second prediction region 620.
  • Encoding efficiency of the second prediction region 620 estimated to generate a large number of residual signals may be increased.
  • the prediction mode 630 for the second prediction region 620 may be determined after the reconstruction of the first prediction region 610 is performed.
  • intra prediction is performed on the first prediction region 650 of the current block 640 using the reconstructed reference samples 645 around the current block 640.
  • The prediction mode 655 in the vertical direction is applied to the first prediction region 650 of the current block 640.
  • The prediction of the second prediction region 660 is performed using the reconstructed samples 665 of the first prediction region 650. The encoding efficiency of the second prediction region 660, which is estimated to generate a large residual signal, may be increased.
  • the prediction mode 670 for the second prediction region 660 may be determined after the reconstruction of the first prediction region 650 is performed.
  • The process of performing prediction for a later prediction region using samples of an earlier, already reconstructed prediction region, after the prediction/transform/reconstruction of that earlier region, may be repeated sequentially. For example, in the case of a third prediction region obtained by dividing the second prediction region, or when the block is divided into a first, a second, and a third prediction region, the prediction for the third prediction region can be performed using samples of the reconstructed second prediction region.
  • 'Prediction region' refers to an area in which pixel value prediction is performed according to the various intra prediction modes.
  • 'Transform region' refers to a region configured as part or all of a prediction region, which has the same prediction mode as the prediction region containing it and in which sample value reconstruction is performed through transform coding.
  • FIG. 7 is a flowchart schematically illustrating an example of intra prediction in a system to which the present invention is applied.
  • the intra prediction method of FIG. 7 may be performed by an encoder or may be performed by a decoder.
  • a prediction area is divided into two or more according to an intra prediction mode with respect to a current block (S710).
  • the transform region is divided into two or more (S720).
  • the processing order of the prediction region and the transform region is determined according to the intra prediction mode (S730).
  • intra prediction / restore of the first prediction region is performed (S740).
  • intra prediction / restore of the second prediction region is performed according to the processing order (S750).
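  • The steps S710 to S750 can be summarized in the following sketch; the splitting, ordering, prediction, and reconstruction helpers are placeholder callbacks (assumptions for illustration), since the actual behaviour is whatever is predetermined for each intra prediction mode.

```python
def intra_code_block(block, intra_mode, neighbour_refs, tools):
    """Illustrative flow of FIG. 7 (S710-S750); 'tools' bundles placeholder callbacks."""
    pred_regions = tools.split_prediction_regions(block, intra_mode)         # S710
    xform_regions = tools.split_transform_regions(block, intra_mode)         # S720
    order = tools.processing_order(pred_regions, xform_regions, intra_mode)  # S730
    reconstructed = {}
    for region in order:
        # S740/S750: the first region is predicted from neighbouring reference
        # samples; later regions may also use already reconstructed regions.
        refs = (neighbour_refs, dict(reconstructed))
        prediction = tools.predict(region, intra_mode, refs)
        reconstructed[region] = tools.reconstruct(region, prediction)
    return reconstructed
```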
  • FIG. 8 is a flowchart schematically illustrating an operation of an encoder for performing the above-described intra prediction method in a system to which the present invention is applied.
  • the encoder determines an optimal prediction mode for a plurality of prediction regions (S810).
  • To determine the optimal prediction mode, prediction and transform/reconstruction are performed for the n-th prediction region predetermined for each prediction mode in the current block, according to the prediction region structure and the transform/reconstruction order predetermined for each prediction mode.
  • a prediction error and / or a prediction bit amount for the n th prediction region may be calculated, and an optimal intra prediction mode for the n th prediction region may be determined based on the prediction error and / or the prediction bit amount.
  • an intra prediction mode of the prediction region is signaled (S830).
  • Transform encoding on the prediction region is performed (S840). For each transform region belonging to the n-th prediction region, transform encoding on the n-th prediction region may be performed by encoding the transform coefficients of each transform region for the prediction error signal obtained according to the optimal intra prediction mode of the n-th prediction region, following the partition structure and processing order of the transform regions determined in step S820.
  • Step S820, which is a step for the first prediction region, may not be performed for the second and subsequent prediction regions.
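  • A minimal sketch of the mode decision in S810 is given below, assuming a cost made of the prediction error plus a bit estimate weighted by a Lagrange multiplier; the cost model, the callbacks, and their names are assumptions, not the criterion required by the method described here.

```python
def choose_intra_mode(region_samples, candidate_modes, predict, estimate_bits, lmbda=1.0):
    """Pick the candidate mode minimising prediction error + lambda * estimated bits.
    'predict' and 'estimate_bits' are placeholder callbacks supplied by the caller."""
    best_mode, best_cost = None, float("inf")
    for mode in candidate_modes:
        prediction = predict(region_samples, mode)
        distortion = sum((o - p) ** 2 for o, p in zip(region_samples, prediction))
        cost = distortion + lmbda * estimate_bits(region_samples, mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```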
  • FIG. 9 is a flowchart schematically illustrating an operation of a decoder for performing the above-described intra prediction method in a system to which the present invention is applied.
  • The decoder parses the bit stream received from the encoder to obtain the prediction mode for the first prediction region (S910).
  • the decoder determines the partition structure and the processing order for the current block (S920).
  • the decoder determines the partition structure of the prediction region and the transform region for the current block (block to be decoded) and the processing order of the prediction region and the transform region according to the prediction mode obtained in step S910.
  • the number N of all prediction regions may be determined together with the partition structure of the prediction region and the transform region.
  • The transform coefficients are decoded for each transform region in the prediction region (S930). For example, the transform coefficients of each transform region belonging to a prediction region may be parsed and decoded from the bit stream for each of the N prediction regions in the current block. For the second and subsequent prediction regions, prediction mode candidate(s) for the prediction of the n-th prediction region may be determined using the prediction modes available for the prediction of the first prediction region and the prediction modes of the regions adjacent to the n-th prediction region among the already reconstructed prediction regions.
  • a reconstruction signal for each transform region is generated (S940).
  • A residual signal is restored by inversely transforming the transform coefficients of each transform region according to the partition structure and processing order of the transform regions determined in step S920.
  • the n th prediction region may be reconstructed by generating a reconstruction signal for each transform region by adding the reconstructed residual signal and the prediction result performed according to the prediction mode for the n th prediction region.
  • Step S930, which is a step for the first prediction region, may not be performed for the second and subsequent prediction regions.
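  • As one hedged reading of how the candidate set for a later prediction region might be formed (from the priority region's mode, modes with similar angles, and the modes of adjacent reconstructed regions, as described above), the sketch below builds such a list; the "index +/- spread" notion of a similar angle and the function name are assumptions for illustration.

```python
def candidate_modes_for_region(first_region_mode, neighbour_modes,
                               num_angular_modes=33, spread=1):
    """Collect candidate intra modes for a subordinate prediction region:
    the priority region's mode, modes assumed to have similar angles
    (index +/- spread), and the modes of adjacent reconstructed regions."""
    candidates = [first_region_mode]
    for d in range(1, spread + 1):
        candidates += [first_region_mode - d, first_region_mode + d]
    candidates += list(neighbour_modes)
    seen, result = set(), []
    for mode in candidates:                       # de-duplicate, keep valid indices
        if 0 <= mode <= num_angular_modes and mode not in seen:
            seen.add(mode)
            result.append(mode)
    return result

cands = candidate_modes_for_region(first_region_mode=22, neighbour_modes=[22, 12])
```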
  • FIG. 10 schematically illustrates an example of a partition structure of a coding region CU, a prediction region PU, and a transform region TU.
  • FIG. 10 schematically shows examples of the division of a 64x64 coding region 1010, a 32x32 coding region 1020, a 16x16 coding region 1030, and an 8x8 coding region 1040, in order from the top row.
  • a division structure of a prediction region PU and a transform region TU with respect to the size of each encoding region CU may be checked.
  • a short distance intra prediction (SDIP) region can be defined for a predetermined encoding region size.
  • SDIP adds a rectangular and line partition structure to the conventional partition structure.
  • In SDIP, the coding region is divided not into square prediction regions but into non-square ones: for example, rectangular prediction regions whose height (or width) is the same as that of the coding region and whose width (or height) is 1/2 or 1/4 of that of the coding region.
  • Among the methods of partitioning differently or additionally partitioning according to the intra prediction mode, FIG. 10 schematically illustrates two cases: a method of using an upper reference sample and a method of using a left reference sample.
  • the prediction of the current block may be performed using a non-directional intra prediction mode such as a DC mode and a planar mode.
  • the partitioning scheme of the prediction region and the transform region may vary according to the size of the current block.
  • The partition structure does not vary depending on the size of the block, except for the minimum prediction region / minimum transform region size (4x4), which can no longer be partitioned.
  • the partitioning structure may vary according to the prediction mode.
  • If the partition structure of the prediction region and the partition structure of the transform region are fixed to one or two candidates per prediction mode and optimized, the coding performance can be improved and the complexity can be reduced.
  • the candidate set of the prediction mode can be reduced according to the partitioning structure for each prediction mode, thereby lowering the coding complexity.
  • FIG. 11 schematically illustrates an example of dividing a current block (coding target block) into two prediction regions according to a prediction mode in a system to which the present invention is applied.
  • FIG. 11 illustrates a case where the coding region is split into square prediction regions.
  • The first prediction region P1 divided from the coding regions 1100, 1135, and 1170 may be intra predicted based on the neighboring decoded samples 1115, 1150, and 1185.
  • the encoding of the divided prediction region may be performed by performing prediction / restore of the first prediction region P1 and then performing prediction / restore of the second prediction region P2. Therefore, when performing prediction on the second prediction region P2 having many residual signals, the prediction efficiency may be increased by using the reconstructed sample of the first prediction region P1 as a reference sample.
  • the first prediction region 1105 of the current block 1100 may be predicted based on a reference sample capable of obtaining excellent compression efficiency.
  • Assuming that the upper/above-right samples 1120 of the current block are the reference samples that can obtain the best compression efficiency for the first prediction region 1105, prediction of the first prediction region 1105 may be performed using the reference samples 1120.
  • Prediction modes {20, 11, 21, 0, 22, 12, 23, 5, 24, 13, 25, 6}, etc. may use the reference samples 1120.
  • Prediction of the second prediction region 1110 may be performed using a sample of the reconstructed first prediction region 1105.
  • The first prediction region 1140 of the current block 1135 may be predicted based on reference samples capable of obtaining excellent compression efficiency.
  • Assuming that the left/below-left samples 1155 of the current block are the reference samples that can obtain the best compression efficiency for the first prediction region 1140, prediction of the first prediction region 1140 may be performed using the reference samples 1155.
  • Prediction modes {28, 15, 29, 1, 30, 16, 31, 8, 32, 17, 33, 9}, etc. may use the reference samples 1155.
  • Prediction of the second prediction region 1145 may be performed using a sample of the reconstructed first prediction region 1140.
  • the first prediction region 1175 of the current block 1170 may be predicted based on a reference sample capable of obtaining excellent compression efficiency.
  • This includes the case of using the DC mode or the planar mode as the prediction mode. That is, assuming that the upper/above-left samples 1190-1 and the left/above-left samples 1190-2 of the current block are the reference samples that can obtain the best compression efficiency for the first prediction region 1175, the prediction for the first prediction region 1175 may be performed using the reference samples 1190-1 and 1190-2.
  • Prediction modes {2, 34, 4, 19, 10, 18, 3, 26, 14, 27, 7}, etc. may use the reference samples 1190-1 and 1190-2.
  • Prediction of the second prediction region 1180 may be performed using a sample of the reconstructed first prediction region 1175.
  • T1, T2, T3, and T4 represent the respective transform regions, and the transform/reconstruction order may be T1 → T2 → T3 → T4 or T1 → T3 → T2 → T4. Therefore, the transform region T4 corresponding to the second prediction region may be processed last.
  • In FIG. 11, as shown in the division structures 1125, 1160, and 1195 of the transform region, an example in which the transform regions T1 to T4 are divided into square shapes has been described.
  • FIG. 12 schematically illustrates another example of dividing a current block (block to be encoded) into two prediction regions according to a prediction mode in a system to which the present invention is applied.
  • the coding region is divided into non-square prediction regions.
  • The first prediction region P1 divided from the coding regions 1200, 1230, and 1260 may be intra predicted based on the neighboring decoded samples 1209, 1239, and 1269.
  • the encoding of the divided prediction region may be performed by performing prediction / restore of the first prediction region P1 and then performing prediction / restore of the second prediction region P2. Therefore, when performing prediction on the second prediction region P2 having many residual signals, the prediction efficiency may be increased by using the reconstructed sample of the first prediction region P1 as a reference sample.
  • In the example of FIG. 12A, assuming that the above/above-left samples 1210-1 or the left/above-left samples 1210-2 of the current block 1200 are the reference samples that can obtain the best compression efficiency for the first prediction region 1203, prediction of the first prediction region 1203 may be performed through the prediction mode 1213 using the reference samples 1210-1 and 1210-2.
  • The use of the reference samples 1210-1 and 1210-2 includes the case of using the DC mode or the planar mode as the prediction mode.
  • Assuming that the upper/above-right samples 1216 of the current block are the reference samples that can obtain the best compression efficiency for the first prediction region 1203, prediction of the first prediction region 1203 may be performed through the prediction mode 1219 using the reference samples 1216.
  • Prediction of the second prediction region 1206 may be performed using samples of the reconstructed first prediction region 1203.
  • In the example of FIG. 12B, assuming that the above/above-left samples 1240-1 or the left/above-left samples 1240-2 of the current block 1230 are the reference samples that can obtain the best compression efficiency for the first prediction region 1233, prediction of the first prediction region 1233 may be performed through the prediction mode 1243 using the reference samples 1240-1 and 1240-2.
  • The use of the reference samples 1240-1 and 1240-2 includes the case of using the DC mode or the planar mode as the prediction mode.
  • Assuming that the upper/above-right samples 1246 of the current block are the reference samples that can obtain the best compression efficiency for the first prediction region 1233, prediction of the first prediction region 1233 may be performed through the prediction mode 1249 using the reference samples 1246.
  • Prediction of the second prediction region 1236 may be performed using a sample of the reconstructed first prediction region 1233.
  • The prediction modes 1213 and 1243, which use the above/above-left reference samples 1210-1, 1240-1, and 1270-1 and the left/above-left reference samples 1210-2, 1240-2, and 1270-2, are applicable to both the cases of FIGS. 12A and 12B. Therefore, when the intra prediction mode is the prediction mode 1213 or 1243, an indicator may be signaled to indicate which of the partition structures of FIG. 12A and FIG. 12B is used.
  • FIG. 12C shows an example of using a division structure different from those of FIGS. 12A and 12B for the prediction modes that use the above/above-left or left/above-left reference samples.
  • Assuming that the upper/above-left samples 1270-1 or the left/above-left samples 1270-2 of the current block 1260 are the reference samples that can obtain the best compression efficiency for the first prediction region 1263, prediction of the first prediction region 1263 may be performed through the prediction mode 1273 using the reference samples 1270-1 and 1270-2.
  • The use of the reference samples 1270-1 and 1270-2 includes the case of using the DC mode or the planar mode as the prediction mode.
  • Prediction of the second prediction region 1266 may be performed using a sample of the reconstructed first prediction region 1263.
  • T1, T2, T3, and T4 represent the respective transform regions, and the transform/reconstruction order may be T1 → T2 → T3 → T4. Therefore, the transform region T4 corresponding to the second prediction region may be processed last.
  • regions 1270, 1280, and 1290 are regions corresponding to the current block 1260 and are examples of various division structures and transformation orders of the transformation region.
  • T1 to T16 of the regions 1270 and 1280 and T1 to T10 of the region 1290 represent the respective transform regions.
  • T1 to T9 may always be transformed/reconstructed before the transform regions from T10 onward. In this case, in order to use closer reconstructed samples as reference samples, it is preferable that the transform regions adjacent to the upper side and the left side of a target transform region be reconstructed before that transform region.
  • The region 1270 is an example in which the transform regions T1 to T9 corresponding to the first prediction region P1 are transformed/reconstructed in zigzag order, and the region 1280 is an example in which the transform regions T1 to T9 corresponding to the first prediction region P1 are transformed/reconstructed in order from the above-left corner to the below-right corner.
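  • The two processing orders just described (zigzag for the region 1270, above-left to below-right for the region 1280) can be sketched as index generators over the grid of transform regions; the exact zigzag convention shown in the figure is an assumption.

```python
def zigzag_order(rows, cols):
    """Anti-diagonal (zigzag) visiting order over a rows x cols grid of transform
    regions, e.g. the 3x3 grid T1..T9 of the region 1270 (convention assumed)."""
    order = []
    for s in range(rows + cols - 1):
        diagonal = [(r, s - r) for r in range(rows) if 0 <= s - r < cols]
        order.extend(diagonal if s % 2 else diagonal[::-1])
    return order

def raster_order(rows, cols):
    """Row-by-row order from the above-left corner to the below-right corner,
    as in the region 1280."""
    return [(r, c) for r in range(rows) for c in range(cols)]

zz = zigzag_order(3, 3)   # [(0,0), (0,1), (1,0), (2,0), (1,1), (0,2), (1,2), (2,1), (2,2)]
```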
  • The region 1290 shows an example in which a plurality of transform regions belonging to the same prediction region are combined into one transform region for processing. In the region 1290, compared with the regions 1270 and 1280, the upper-left four transform regions and the lower four transform regions of the regions 1270 and 1280 are each processed as one transform region.
  • Each of the transform regions T1 to T4 may also be divided into non-square shapes.
  • In the division structures 1270, 1280, and 1290 of the transform region illustrated in FIG. 12C, an example in which each of the transform regions T may be divided into non-square shapes has been described.
  • FIG. 13 schematically illustrates examples of dividing a current block (coding target block) into three prediction regions according to an intra prediction mode in a system to which the present invention is applied.
  • FIG. 13 illustrates a case where the current block is divided into a square prediction region and a non-square prediction region.
  • the first prediction region P1 divided in the encoding regions 1300 and 1330 may be intra predicted based on the neighboring decoded samples 1310 and 1340.
  • the encoding of the split prediction region is performed by performing prediction / restore on the first prediction region P1 and then performing prediction / restore on the second prediction region P2 and the third prediction region P3.
  • the second prediction region P2 may be processed first or the third prediction region P3 may be processed first between the second prediction region P2 and the third prediction region P3.
  • In the example of FIG. 13B, assuming that the left/above-left or upper/above-left samples 1343-1 and 1343-2 of the current block 1330 are the reference samples capable of obtaining the best compression efficiency for the first prediction region 1333, prediction of the first prediction region 1333 may be performed through the prediction modes 1345 using the reference samples 1343-1 and 1343-2.
  • The use of the reference samples 1343-1 and 1343-2 includes the case of using the DC mode or the planar mode as the prediction mode.
  • Assuming that the left/below-left samples 1347 of the current block 1330 are the reference samples that can obtain the best compression efficiency for the first prediction region 1333, prediction of the first prediction region 1333 may be performed through the prediction mode 1350 using the samples 1347.
  • Prediction of the second prediction region 1335 and the third prediction region 1335 may be performed using samples of the reconstructed first prediction region 1333.
  • T1, T2, T3, and T4 represent the respective transform regions, and the transform/reconstruction order may be T1 → T2 → T3 → T4 or T1 → T2 → T4 → T3. Therefore, the transform regions T3 and T4 corresponding to the second prediction region and the third prediction region may be processed last.
  • In the division structures 1323 and 1353 of the transform regions shown in the examples of FIGS. 13A and 13B, the case where the transform region is divided into a mixture of square and non-square shapes has been described by way of example.
  • The embodiments of FIGS. 11 to 13 described above may be used independently of each other, only a part of each embodiment may be used, or some or all of each embodiment may be used in combination with some or all of the other embodiments.
  • FIGS. 13A and 13B and FIG. 12C can be used in combination. In this case, if one prediction mode may have multiple block partition structures, a separate indicator may be signaled to indicate which of the multiple block partition structures is used.
  • a non-square, for example, a rectangular transform may be applied to the non-square transform region, or a square transform may be applied after reordering signal values, such as a residual signal, into a square.
  • FIG. 14 schematically illustrates examples of transforming a non-square transform region in a system to which the present invention is applied.
  • a signal value relocation process is described in an encoder by taking an 8 ⁇ 2 non-square transform region as an example.
  • FIG. 14A shows an example of horizontally scanning the residual signal values of an 8x2 non-square transformed region.
  • Fig. 14B shows an example of vertically scanning the residual signal values of an 8x2 non-square transform region.
  • Fig. 14C shows an example of zigzag scanning of a residual signal value of an 8x2 non-square transform region.
  • The scanning method of FIGS. 14A to 14C may be predetermined according to the prediction mode of the first prediction region in the coding region.
  • signal values of the scanned transform region may be rearranged in the square transform region.
  • a signal value in an 8 ⁇ 2 non-square transform region may be rearranged in a 4 ⁇ 4 square transform region.
  • the rearranged signal value can be transformed into the frequency domain by a transform method such as a discrete cosine transform (DCT) and / or a discrete sine transform (DST).
  • the decoder inversely transforms the transform coefficients arranged in the square transform region.
  • the inverse transform may be performed by inversely applying a transform scheme applied when generating transform coefficients. For example, an Inverse Discrete Cosine Transform (IDCT) and / or an Inverse Discrete Sine Transform (IDST) may be applied to the transform coefficients.
  • The decoder may scan the inversely transformed coefficients in the reverse order of FIG. 14D and rearrange them in the reverse order of FIGS. 14A to 14C.
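  • The rearrangement described for FIG. 14 can be sketched as follows: the residual values of an 8x2 region are scanned into a one-dimensional sequence and refilled into a 4x4 square before a square transform, and the decoder inverts both steps. Only the simple horizontal scan is shown; the other scan orders and the helper names are assumptions for illustration.

```python
import numpy as np

def reorder_8x2_to_4x4(residual_8x2):
    """Horizontally scan an 8x2 (2 rows x 8 columns) residual region and refill it
    row by row into a 4x4 square region, which can then be transformed with a
    square DCT/DST."""
    return residual_8x2.reshape(-1).reshape(4, 4)

def reorder_4x4_to_8x2(square_4x4):
    """Decoder side: undo the refill and the horizontal scan after the inverse transform."""
    return square_4x4.reshape(-1).reshape(2, 8)

residual = np.arange(16, dtype=float).reshape(2, 8)   # toy 8x2 residual block
square = reorder_8x2_to_4x4(residual)                 # 4x4 block for a square transform
restored = reorder_4x4_to_8x2(square)                 # identical to 'residual'
```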
  • The prediction mode of a subordinate prediction region and the prediction mode of the priority prediction region may have a high correlation. Therefore, by using this characteristic, it is possible to reduce the signaling overhead required for encoding the prediction mode of each prediction region and to improve the encoding performance.
  • FIG. 15 schematically illustrates examples of performing prediction on a subordinate prediction region using reconstructed samples of a priority prediction region, based on the correlation between the priority prediction region and the subordinate prediction region in the current block, in a system to which the present invention is applied.
  • Prediction is performed on the first prediction region 1503 of the current block 1500 using the upper/upper-right reference samples 1507.
  • Prediction may be performed on the second prediction region 1505 using the left/lower reference samples 1510 and/or some already-reconstructed samples 1513 of the first prediction region 1503.
  • The samples 1513 of the first prediction region 1503 used for prediction of the second prediction region 1505 are samples of the first prediction region 1503 adjacent to the second prediction region 1505 and are reconstructed before prediction for the second prediction region 1505 is performed.
  • Prediction is performed on the first prediction region 1517 of the current block 1515 using the upper/left reference samples 1523-1 or the left/lower-left reference samples 1523-2.
  • Prediction may be performed on the second prediction region 1520 using some already-reconstructed samples 1525 of the first prediction region 1517.
  • The samples 1525 of the first prediction region 1517 used to predict the second prediction region 1520 are samples of the first prediction region 1517 adjacent to the second prediction region 1520 and are reconstructed before prediction for the second prediction region 1520 is performed.
  • Prediction may be performed on the first prediction region 1533 of the current block 1530 using the upper/left reference samples 1537-1 or the left/lower-left reference samples 1537-2.
  • Alternatively, prediction of the first prediction region 1533 may be performed using the upper/right reference samples 1540.
  • Prediction may be performed on the second prediction region 1535 using the left/lower reference samples 1543 and/or some already-reconstructed samples 1545 of the first prediction region 1533.
  • The samples 1545 of the first prediction region 1533 used for prediction of the second prediction region 1535 are samples of the first prediction region 1533 adjacent to the second prediction region 1535 and are reconstructed before prediction for the second prediction region 1535 is performed.
  • Prediction is performed on the first prediction region 1553 of the current block 1550 using the upper/left reference samples 1557-1 or the left/lower-left reference samples 1557-2.
  • Prediction may be performed on the second prediction region 1555 using the upper/upper-right reference samples 1560-1, the left/lower reference samples 1560-2, and/or some already-reconstructed samples 1563 of the first prediction region 1553.
  • The samples 1563 of the first prediction region 1553 used for prediction of the second prediction region 1555 are samples of the first prediction region 1553 adjacent to the second prediction region 1555 and are reconstructed before prediction for the second prediction region 1555 is performed.
  • Prediction may be performed on the first prediction region 1567 of the current block 1565 using the upper/left reference samples 1575-1 or the left/lower-left reference samples 1575-2.
  • Alternatively, prediction of the first prediction region 1567 may be performed using the upper/right reference samples 1577.
  • Prediction may be performed on the second prediction region 1570 using the left/lower and upper/left reference samples 1580 and/or some already-reconstructed samples 1583 of the first prediction region 1567.
  • The third prediction region 1573 may be predicted using the upper/upper-right reference samples 1589 and/or some already-reconstructed samples 1587 of the first prediction region 1567.
  • The samples 1583 and 1587 of the first prediction region 1567 used for prediction of the second prediction region 1570 and the third prediction region 1573, respectively, are samples of the first prediction region 1567 adjacent to the second prediction region 1570 and the third prediction region 1573, and are reconstructed before prediction for the second prediction region 1570 and the third prediction region 1573 is performed.
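  • As a simplified sketch of the idea illustrated in FIG. 15 (the block geometry, DC-style prediction, and sample values are assumptions, not the exact disclosed procedure), the following Python code predicts a lower 4×8 subordinate region by averaging reference samples that include already-reconstructed samples from the upper (first) prediction region.

```python
import numpy as np

def dc_predict_second_region(recon_first, left_refs, region_h, region_w):
    """DC-style prediction for a subordinate region.

    recon_first : reconstructed samples of the priority (first) prediction
                  region that lie directly above the subordinate region.
    left_refs   : reconstructed samples of the neighbouring block to the left.
    """
    refs = np.concatenate([recon_first, left_refs])
    dc = int(np.round(refs.mean()))
    return np.full((region_h, region_w), dc, dtype=np.int32)

# Toy 8x8 current block split horizontally into two 4x8 prediction regions.
recon_block = np.zeros((8, 8), dtype=np.int32)
recon_block[:4, :] = 120                      # first region already reconstructed
row_above_second = recon_block[3, :]          # bottom row of the first region
left_neighbour = np.full(4, 100)              # left reference samples (assumed)
pred_second = dc_predict_second_region(row_above_second, left_neighbour, 4, 8)
print(pred_second[0, 0])                      # DC value formed from both sources
```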
  • Prediction mode candidates for the second prediction region P2 and the third prediction region P3 of FIGs. 15A to 15E may be configured using only the prediction modes used for prediction of the regions adjacent to the second prediction region P2 and the third prediction region P3.
  • Alternatively, prediction mode candidates for the second prediction region P2 or the third prediction region P3 may be configured by combining the prediction modes of the adjacent regions with the prediction modes that were available for prediction of the first prediction region P1 (the prediction modes using the reference samples 1507, 1523-1, 1523-2, 1537-1, 1537-2, 1540, 1557-1, 1557-2, 1575-1, 1575-2, 1577, etc.).
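  • A hedged sketch of such a candidate construction is shown below: the candidate list for a subordinate prediction region is built either from the modes actually used by adjacent regions only, or from those modes combined with the modes that were available to the first prediction region. The integer mode identifiers, parameter names, and list size are illustrative assumptions.

```python
def build_candidate_modes(adjacent_modes, first_region_available_modes=None,
                          max_candidates=8):
    """Candidate intra modes for a subordinate prediction region.

    adjacent_modes              : modes used by regions adjacent to the
                                  subordinate region (e.g. P1, left block).
    first_region_available_modes: optionally, the modes that were available
                                  for predicting the first region P1.
    """
    candidates = []
    for mode in adjacent_modes:
        if mode not in candidates:
            candidates.append(mode)
    if first_region_available_modes is not None:      # combined variant
        for mode in first_region_available_modes:
            if mode not in candidates and len(candidates) < max_candidates:
                candidates.append(mode)
    return candidates[:max_candidates]

# Only modes of adjacent regions:
print(build_candidate_modes(adjacent_modes=[26, 10]))              # [26, 10]
# Combined with the modes available to the first prediction region:
print(build_candidate_modes([26, 10], first_region_available_modes=[0, 1, 26]))
```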
  • the shapes and ranges of samples available for prediction of the prediction regions vary according to the shapes and positions of the prediction regions.
  • Performing bidirectional prediction on a prediction region, for example by taking a weighted sum of the two prediction values, may lead to better prediction efficiency.
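  • For instance, a weighted combination of two predictions of the same region could look like the following; the 6-bit integer weights and the rounding offset are assumptions, since only a weighted sum of both prediction values is mentioned here.

```python
import numpy as np

def weighted_biprediction(pred_a, pred_b, w_a=32, w_b=32, shift=6):
    """Combine two prediction signals by an integer weighted sum with rounding."""
    offset = 1 << (shift - 1)                     # rounding offset
    return (w_a * pred_a.astype(np.int32) +
            w_b * pred_b.astype(np.int32) + offset) >> shift

pred_from_top = np.full((4, 8), 110, dtype=np.int32)    # e.g. from upper references
pred_from_left = np.full((4, 8), 90, dtype=np.int32)    # e.g. from left references
print(weighted_biprediction(pred_from_top, pred_from_left)[0, 0])   # -> 100
```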
  • Candidate prediction modes for each prediction region may be set, and the encoder may select one of the candidate prediction modes to perform prediction for the prediction region.
  • The selection of the prediction mode may be made in consideration of compression efficiency, for example by using Rate Distortion Optimization (RDO).
  • the encoder may transmit information about the selected prediction mode to the decoder.
  • The decoder may set the candidate prediction modes by the same method as the encoder and select the prediction mode to be applied to the current prediction region, or may apply the prediction mode indicated by the information transmitted from the encoder to the current prediction block.
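  • A minimal sketch of this encoder/decoder interaction follows, under the assumption of a sum-of-squared-error distortion plus a crude per-candidate rate term; the actual RDO cost model and rate estimation are not specified here and are assumptions of the sketch.

```python
import numpy as np

def rd_select_mode(original, predict_fn, candidate_modes, lam=10.0):
    """Pick the candidate mode with the lowest rate-distortion cost."""
    best_mode, best_cost = None, float("inf")
    for bits, mode in enumerate(candidate_modes, start=1):   # crude rate proxy: list position
        pred = predict_fn(mode)
        distortion = float(np.sum((original - pred) ** 2))   # SSE
        cost = distortion + lam * bits                       # D + lambda * R
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode

# Toy setup: two candidate modes producing flat predictions 100 and 120.
original = np.full((4, 4), 118.0)
predict_fn = lambda mode: np.full((4, 4), {10: 100.0, 26: 120.0}[mode])
selected = rd_select_mode(original, predict_fn, candidate_modes=[10, 26])
# The encoder signals the index of `selected` within the candidate list;
# the decoder rebuilds the same candidate list and applies the signalled mode.
print(selected)   # -> 26 (the closer prediction wins despite its higher rate term)
```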
  • The reference samples (prediction modes) for the prediction regions are selected as shown above, but these are only examples for convenience of description; the reference samples (prediction modes) for each prediction region may be selected in various ways based on the characteristics of the prediction regions.
  • FIG. 16 schematically illustrates examples of determining a prediction mode of a current prediction region in a system to which the present invention is applied.
  • FIG. 16A illustrates an example in which, for the first prediction region P1 and the second prediction region P2 in the current block 1600, the prediction mode 1610 of the first prediction region P1 is used as the prediction mode 1620 of the second prediction region P2.
  • Instead of using the prediction mode 1610 of the first prediction region P1 directly as the prediction mode of the second prediction region P2, a prediction mode having an angle similar to that of the prediction mode 1610 of the first prediction region P1 may be used as the prediction mode 1620 of the second prediction region P2.
  • When the prediction mode of the first prediction region P1 is the DC mode or the planar mode, the DC mode or the planar mode may likewise be used as the prediction mode of the second prediction region P2.
  • FIG. 16B illustrates an example in which, for the first prediction region P1 and the second prediction region P2 in the current block 1630, the prediction mode of a block adjacent to the second prediction region P2 is used as the prediction mode 1660 of the second prediction region P2.
  • For example, the prediction mode 1660 may be determined from the prediction mode 1650 of the region to the left of the second prediction region P2 or the prediction mode 1640 of the first prediction region P1 in contact with the second prediction region P2.
  • FIG. 16C shows an example in which FIG. 16A and FIG. 16B are combined.
  • In this case, the prediction mode 1699 of the second prediction region P2 may be determined from among the prediction modes 1680 and 1690 of the blocks adjacent to the second prediction region P2 and prediction modes similar to those of the adjacent blocks.
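  • A hedged sketch of the combined derivation of FIG. 16C: the candidate set for the second prediction region is formed from the modes of adjacent regions plus angular modes with a similar angle (here, neighbouring mode indices), while DC and planar are passed through unchanged. The numbering (0 = planar, 1 = DC, 2 to 34 angular) follows a common intra-mode convention and is an assumption of the sketch.

```python
PLANAR, DC = 0, 1   # assumed mode numbering: 0 planar, 1 DC, 2..34 angular

def candidate_modes_for_p2(adjacent_modes, angle_delta=1, max_angular=34):
    """Modes of adjacent regions plus angular modes with a similar angle."""
    candidates = []

    def add(m):
        if m not in candidates:
            candidates.append(m)

    for mode in adjacent_modes:
        add(mode)
        if mode not in (PLANAR, DC):           # only angular modes have angle neighbours
            for m in (mode - angle_delta, mode + angle_delta):
                if 2 <= m <= max_angular:      # stay within the angular range
                    add(m)
    return candidates

# Adjacent modes in the spirit of 1640/1650: P1 uses mode 26, the left block mode 10.
print(candidate_modes_for_p2([26, 10]))   # -> [26, 25, 27, 10, 9, 11]
```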
  • the encoder may select a prediction mode to be applied to the current prediction region.
  • The prediction mode may be determined in consideration of compression efficiency, for example by using Rate Distortion Optimization (RDO).
  • the encoder may transmit information about the selected prediction mode to the decoder.
  • The decoder may select the prediction mode to be applied to the current prediction region after setting the candidate prediction modes by the same method as the encoder. Which prediction mode is selected for the prediction regions following the second prediction region (the third prediction region, the fourth prediction region, and so on) may be set in advance between the encoder and the decoder according to the prediction mode for the first prediction region and/or the partition structure of the prediction regions. In addition, the decoder may apply the prediction mode indicated by the information transmitted from the encoder to the current prediction block.
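  • When the mode of a later prediction region is fixed in advance by the first region's mode and the partition structure, no signaling is needed; a minimal sketch of such a shared encoder/decoder rule is given below, where the table contents are purely illustrative assumptions.

```python
# Hypothetical shared rule: (first-region mode, partition structure) -> mode of P2.
PRESET_P2_MODE = {
    (26, "horizontal_split"): 26,   # reuse P1's mode for the lower region
    (10, "vertical_split"):   10,   # reuse P1's mode for the right region
    (1,  "horizontal_split"): 1,    # DC stays DC
}

def derive_p2_mode(p1_mode, partition, fallback=1):
    """Both encoder and decoder evaluate this identically, so nothing is signalled."""
    return PRESET_P2_MODE.get((p1_mode, partition), fallback)

print(derive_p2_mode(26, "horizontal_split"))   # -> 26
```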

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a method for encoding image data, a method for decoding image data, and an apparatus in which said methods are implemented. According to one embodiment of the present invention, the method for decoding image data comprises the steps of: dividing a prediction region into a first prediction region and a second prediction region in accordance with an intra prediction mode; performing intra prediction on the first prediction region and reconstructing the first prediction region; and performing prediction on the second prediction region and reconstructing the second prediction region. In the step of performing intra prediction on the second prediction region and reconstructing the second prediction region, intra prediction may be performed on the second prediction region by referring to a reference sample related to the first prediction region or by referring to a predetermined sample in the reconstructed first prediction region.
PCT/KR2012/005252 2011-07-05 2012-07-02 Procédé de codage de données d'image et procédé de décodage de données d'image Ceased WO2013005967A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/130,716 US20140133559A1 (en) 2011-07-05 2012-07-02 Method for encoding image information and method for decoding same

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2011-0066402 2011-07-05
KR20110066402 2011-07-05
KR1020120071616A KR102187246B1 (ko) 2011-07-05 2012-07-02 영상 정보 부호화 방법 및 복호화 방법
KR10-2012-0071616 2012-07-02

Publications (2)

Publication Number Publication Date
WO2013005967A2 true WO2013005967A2 (fr) 2013-01-10
WO2013005967A3 WO2013005967A3 (fr) 2013-03-14

Family

ID=47437546

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2012/005252 Ceased WO2013005967A2 (fr) 2011-07-05 2012-07-02 Procédé de codage de données d'image et procédé de décodage de données d'image

Country Status (2)

Country Link
KR (1) KR102333153B1 (fr)
WO (1) WO2013005967A2 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017131475A1 (fr) * 2016-01-27 2017-08-03 한국전자통신연구원 Procédé et dispositif de codage et de décodage de vidéo à l'aide d'une prédiction
WO2018026219A1 (fr) * 2016-08-03 2018-02-08 주식회사 케이티 Procédé et dispositif de traitement de signal vidéo
WO2018212577A1 (fr) * 2017-05-17 2018-11-22 주식회사 케이티 Procédé et dispositif de traitement de signal vidéo
CN110169060A (zh) * 2017-01-09 2019-08-23 Sk电信有限公司 用于对图像进行编码或解码的设备和方法
CN111885379A (zh) * 2015-03-23 2020-11-03 Lg 电子株式会社 在内预测模式的基础上处理图像的方法及其装置
CN112166605A (zh) * 2018-06-21 2021-01-01 株式会社Kt 用于处理视频信号的方法和设备

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140133559A1 (en) * 2011-07-05 2014-05-15 Electronics And Telecommunications Research Institute Method for encoding image information and method for decoding same

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002209213A (ja) * 2000-12-28 2002-07-26 Sony Corp 動きベクトル検出方法及び装置、並びに画像符号化装置
KR100750128B1 (ko) * 2005-09-06 2007-08-21 삼성전자주식회사 영상의 인트라 예측 부호화, 복호화 방법 및 장치
KR100727972B1 (ko) * 2005-09-06 2007-06-14 삼성전자주식회사 영상의 인트라 예측 부호화, 복호화 방법 및 장치
WO2010113227A1 (fr) 2009-03-31 2010-10-07 パナソニック株式会社 Dispositif de codage d'image
US9083974B2 (en) * 2010-05-17 2015-07-14 Lg Electronics Inc. Intra prediction modes
US20140133559A1 (en) * 2011-07-05 2014-05-15 Electronics And Telecommunications Research Institute Method for encoding image information and method for decoding same

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111885379B (zh) * 2015-03-23 2023-10-27 Lg 电子株式会社 在内预测模式的基础上处理图像的方法及其装置
CN111885379A (zh) * 2015-03-23 2020-11-03 Lg 电子株式会社 在内预测模式的基础上处理图像的方法及其装置
WO2017131475A1 (fr) * 2016-01-27 2017-08-03 한국전자통신연구원 Procédé et dispositif de codage et de décodage de vidéo à l'aide d'une prédiction
WO2018026219A1 (fr) * 2016-08-03 2018-02-08 주식회사 케이티 Procédé et dispositif de traitement de signal vidéo
US12022066B2 (en) 2016-08-03 2024-06-25 Kt Corporation Video signal processing method and device for performing intra-prediction for an encoding/decoding target block
CN109565591A (zh) * 2016-08-03 2019-04-02 株式会社Kt 视频信号处理方法和装置
CN109565591B (zh) * 2016-08-03 2023-07-18 株式会社Kt 用于对视频进行编码和解码的方法和装置
US11438582B2 (en) 2016-08-03 2022-09-06 Kt Corporation Video signal processing method and device for performing intra-prediction for an encoding/decoding target block
CN110169060A (zh) * 2017-01-09 2019-08-23 Sk电信有限公司 用于对图像进行编码或解码的设备和方法
CN110169060B (zh) * 2017-01-09 2023-06-27 Sk电信有限公司 用于对图像进行编码或解码的设备和方法
US11051011B2 (en) 2017-05-17 2021-06-29 Kt Corporation Method and device for video signal processing
AU2018270853B2 (en) * 2017-05-17 2021-04-01 Kt Corporation Method and device for video signal processing
US20200077086A1 (en) * 2017-05-17 2020-03-05 Kt Corporation Method and device for video signal processing
CN110651478A (zh) * 2017-05-17 2020-01-03 株式会社Kt 用于视频信号处理的方法和装置
CN110651478B (zh) * 2017-05-17 2023-11-21 株式会社Kt 用于视频信号处理的方法和装置
WO2018212577A1 (fr) * 2017-05-17 2018-11-22 주식회사 케이티 Procédé et dispositif de traitement de signal vidéo
US12267483B2 (en) 2017-05-17 2025-04-01 Kt Corporation Method and device for video signal processing
CN112166605A (zh) * 2018-06-21 2021-01-01 株式会社Kt 用于处理视频信号的方法和设备
US11956418B2 (en) 2018-06-21 2024-04-09 Kt Corporation Method and apparatus for processing video signal
US12375641B2 (en) 2018-06-21 2025-07-29 Kt Corporation Method and apparatus for processing video signal
US12425569B2 (en) 2018-06-21 2025-09-23 Kt Corporation Method and apparatus for processing video signal

Also Published As

Publication number Publication date
KR20200139116A (ko) 2020-12-11
KR102333153B1 (ko) 2021-12-01
WO2013005967A3 (fr) 2013-03-14

Similar Documents

Publication Publication Date Title
KR102438021B1 (ko) 영상 정보 부호화 방법 및 복호화 방법
US12273537B2 (en) Image encoding/decoding method and apparatus using intra-screen prediction
KR102442270B1 (ko) 인트라 예측 방법 및 그 장치
US11943485B2 (en) Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto
JP2022087158A (ja) イントラ予測方法とそれを利用した符号化器及び復号化器
KR102333153B1 (ko) 영상 정보 부호화 방법 및 복호화 방법
AU2012310514B2 (en) Method for inducing a merge candidate block and device using same
KR102061201B1 (ko) 블록 정보에 따른 변환 방법 및 이러한 방법을 사용하는 장치
KR20190120145A (ko) 인루프 필터링을 적용한 예측 방법을 이용한 영상 부호화/복호화 방법 및 장치
US11102494B2 (en) Method for scanning transform coefficient and device therefor
JP2017184274A (ja) ビデオ符号化での分割ブロック符号化方法、ビデオ復号化での分割ブロック復号化方法及びこれを実現する記録媒体
WO2013051794A1 (fr) Procédé pour coder/décoder un mode de prédiction d'image intra au moyen de deux modes de prédiction intra candidats, et appareil mettant en œuvre un tel procédé
KR20210008105A (ko) 영상 부호화/복호화 방법 및 장치
WO2013048033A1 (fr) Procédé et appareil de codage/décodage en mode de prédiction intra
WO2012081895A1 (fr) Procédé et appareil de prédiction intra
KR20190087329A (ko) 복수의 예측 모드 후보군을 사용하여 화면내 예측을 수행하는 영상 부호화/복호화 방법 및 장치
WO2012118358A2 (fr) Procédé de numérisation de coefficient de transformée et dispositif associé
KR101688085B1 (ko) 고속 인트라 예측을 위한 영상 부호화 방법 및 장치
WO2012121575A2 (fr) Procédé et dispositif pour une prévision interne
KR20190113611A (ko) 영상 부호화/복호화 방법 및 장치

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 14130716

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12807388

Country of ref document: EP

Kind code of ref document: A2