
WO2019244809A1 - Encoding device, decoding device, encoding method, and decoding method


Info

Publication number
WO2019244809A1
Authority
WO
WIPO (PCT)
Prior art keywords
prediction
motion vector
block
unit
mode
Prior art date
Legal status
Ceased
Application number
PCT/JP2019/023781
Other languages
English (en)
Japanese (ja)
Inventor
遠間 正真
西 孝啓
安倍 清史
Current Assignee
Panasonic Intellectual Property Corp of America
Original Assignee
Panasonic Intellectual Property Corp of America
Priority date
Filing date
Publication date
Application filed by Panasonic Intellectual Property Corp of America
Publication of WO2019244809A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/109Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/537Motion estimation other than block-based
    • H04N19/54Motion estimation other than block-based using feature points or meshes

Definitions

  • the present disclosure relates to an encoding device, a decoding device, an encoding method, and a decoding method.
  • For example, H.264 and H.265 exist as standards for encoding a moving image.
  • H.265 is also called HEVC (High Efficiency Video Coding).
  • Each of the configurations and methods disclosed in the embodiments of the present disclosure, or a part thereof, can contribute to, for example, improving coding efficiency, reducing the amount of encoding/decoding processing, reducing the circuit scale, improving the encoding/decoding speed, and/or appropriately selecting components and operations, such as filters, blocks, sizes, motion vectors, reference pictures, and reference blocks, in encoding and decoding.
  • The present disclosure also includes configurations or methods that can provide benefits other than those described above, for example, a configuration or method that improves coding efficiency while suppressing an increase in the amount of processing.
  • An encoding device is an encoding device that performs motion compensation and encodes a moving image, and includes a circuit and a memory. Using the memory, in an inter prediction mode in which an affine motion vector is calculated in units of sub-blocks constituting a current block based on motion vectors of a plurality of peripheral blocks of the current block of an image in the moving image, the circuit calculates the affine motion vector in units of sub-blocks using only uni-prediction of uni-prediction and bi-prediction, and performs the motion compensation in units of sub-blocks using the calculated affine motion vector.
  • A decoding device is a decoding device that performs motion compensation and decodes a moving image, and includes a circuit and a memory. Using the memory, in an inter prediction mode in which an affine motion vector is calculated in units of sub-blocks constituting a current block based on motion vectors of a plurality of peripheral blocks of the current block of an image in the moving image, the circuit calculates the affine motion vector in units of sub-blocks using only uni-prediction of uni-prediction and bi-prediction, and performs the motion compensation in units of sub-blocks using the calculated affine motion vector.
  • the present disclosure can provide an encoding device, a decoding device, an encoding method, or a decoding method that can improve processing efficiency.
  • FIG. 1 is a block diagram showing a functional configuration of the encoding device according to Embodiment 1.
  • FIG. 2 is a diagram illustrating an example of block division according to the first embodiment.
  • FIG. 3 is a table showing conversion basis functions corresponding to each conversion type.
  • FIG. 4A is a diagram illustrating an example of the shape of a filter used in ALF.
  • FIG. 4B is a diagram illustrating another example of the shape of the filter used in the ALF.
  • FIG. 4C is a diagram illustrating another example of the shape of the filter used in the ALF.
  • FIG. 5A is a diagram showing 67 intra prediction modes in intra prediction.
  • FIG. 5B is a flowchart for explaining an outline of the predicted image correction processing by the OBMC processing.
  • FIG. 5C is a conceptual diagram for describing an outline of a predicted image correction process by the OBMC process.
  • FIG. 5D is a diagram illustrating an example of the FRUC.
  • FIG. 6 is a diagram for explaining pattern matching (bilateral matching) between two blocks along a motion trajectory.
  • FIG. 7 is a diagram for explaining pattern matching (template matching) between a template in the current picture and a block in the reference picture.
  • FIG. 8 is a diagram for explaining a model assuming constant velocity linear motion.
  • FIG. 9A is a diagram for describing derivation of a motion vector in sub-block units based on motion vectors of a plurality of adjacent blocks.
  • FIG. 9B is a diagram for describing the outline of the motion vector derivation process in the merge mode.
  • FIG. 9C is a conceptual diagram for explaining the outline of the DMVR process.
  • FIG. 9D is a diagram for explaining an outline of a predicted image generation method using the luminance correction processing by the LIC processing.
  • FIG. 10 is a block diagram showing a functional configuration of the decoding device according to Embodiment 1.
  • FIG. 11 is a flowchart illustrating an operation example of the affine motion compensation prediction mode performed by the inter prediction unit of the encoding device according to the first example of Embodiment 1.
  • FIG. 12 is a flowchart showing an operation example, in the normal mode of the affine motion compensation prediction mode performed by the inter prediction unit of the encoding device according to the first example of Embodiment 1, in which the motion vector of a control point is limited to uni-prediction.
  • FIG. 13 is a block diagram illustrating an implementation example of the encoding device according to Embodiment 1.
  • FIG. 14 is a flowchart illustrating an operation example of the encoding device illustrated in FIG. 13.
  • FIG. 15 is a block diagram illustrating an implementation example of the decoding device according to the first embodiment.
  • FIG. 16 is a flowchart showing an operation example of the decoding device shown in FIG. 15.
  • FIG. 17 is an overall configuration diagram of a content supply system that realizes a content distribution service.
  • FIG. 18 is a diagram illustrating an example of an encoding structure at the time of scalable encoding.
  • FIG. 19 is a diagram illustrating an example of an encoding structure at the time of scalable encoding.
  • FIG. 20 is a diagram illustrating an example of a display screen of a web page.
  • FIG. 21 is a diagram illustrating an example of a display screen of a web page.
  • FIG. 22 is a diagram illustrating an example of a smartphone.
  • FIG. 23 is a block diagram illustrating a configuration example of a smartphone.
  • An encoding device is an encoding device that performs motion compensation and encodes a moving image, and includes a circuit and a memory. Using the memory, in an inter prediction mode in which an affine motion vector is calculated in units of sub-blocks constituting a current block based on motion vectors of a plurality of peripheral blocks of the current block of an image in the moving image, the circuit calculates the affine motion vector in units of sub-blocks using only uni-prediction, and performs the motion compensation in units of sub-blocks using the calculated affine motion vector.
  • the encoding device can reduce the amount of processing while suppressing a decrease in encoding efficiency, and thus can improve the processing efficiency.
  • Further, for example, when calculating the affine motion vector, the circuit may select only one of a first reference picture list and a second reference picture list that are commonly used in the inter prediction mode, select a reference picture only from the selected reference picture list, and determine, using only uni-prediction, an encoded block for deriving a predicted motion vector of a control point from among a plurality of encoded blocks constituting the selected reference picture.
  • Further, for example, when the circuit applies an affine motion compensation mode, which is an inter prediction mode for calculating the affine motion vector, to the current block, the circuit may determine the predicted motion vector of a control point in a merge mode based on the motion vector of an encoded block adjacent to the current block to which the affine motion compensation mode is applied, determine the reference picture of each control point from an encoded block near that control point of the current block, determine the predicted motion vector of each control point from an encoded block near the control point using only uni-prediction, and calculate the affine motion vector using only uni-prediction.
  • A decoding device is a decoding device that performs motion compensation and decodes a moving image, and includes a circuit and a memory. Using the memory, in an inter prediction mode in which an affine motion vector is calculated in units of sub-blocks constituting a current block based on motion vectors of a plurality of peripheral blocks of the current block of an image in the moving image, the circuit calculates the affine motion vector in units of sub-blocks using only uni-prediction of uni-prediction and bi-prediction, and performs the motion compensation in units of sub-blocks using the calculated affine motion vector.
  • the decoding device can reduce the amount of processing while suppressing a decrease in encoding efficiency, and thus can improve processing efficiency.
  • Further, for example, when calculating the affine motion vector, the circuit may select only one of the first reference picture list and the second reference picture list that are commonly used in the inter prediction mode, select a reference picture only from the selected reference picture list, and determine, using only uni-prediction, a coded block for deriving a predicted motion vector of a control point from among a plurality of coded blocks constituting the selected reference picture.
  • Further, for example, when the circuit applies an affine motion compensation mode, which is an inter prediction mode for calculating the affine motion vector, to the current block, the circuit may determine the predicted motion vector of a control point in a merge mode based on the motion vector of a coded block adjacent to the current block to which the affine motion compensation mode is applied, determine the reference picture of each control point from a coded block near that control point of the current block, determine the predicted motion vector of each control point from a coded block near the control point using only uni-prediction, and calculate the affine motion vector using only uni-prediction.
  • An encoding method is an encoding method for encoding a moving image by performing motion compensation, in which, in an inter prediction mode in which an affine motion vector is calculated in units of sub-blocks constituting a current block based on motion vectors of a plurality of peripheral blocks of the current block of an image in the moving image, the affine motion vector is calculated in units of sub-blocks using only uni-prediction of uni-prediction and bi-prediction, and the motion compensation is performed in units of sub-blocks using the calculated affine motion vector.
  • the encoding method can reduce the amount of processing while suppressing a decrease in encoding efficiency, so that the processing efficiency can be improved.
  • A decoding method is a decoding method for decoding a moving image by performing motion compensation, in which, in an inter prediction mode in which an affine motion vector is calculated in units of sub-blocks constituting a current block based on motion vectors of a plurality of peripheral blocks of the current block of an image in the moving image, the affine motion vector is calculated in units of sub-blocks using only uni-prediction of uni-prediction and bi-prediction, and the motion compensation is performed in units of sub-blocks using the calculated affine motion vector.
  • the decoding method can reduce the amount of processing while suppressing a decrease in encoding efficiency, and can improve processing efficiency.
  • These general or specific aspects may be implemented as a system, an apparatus, a method, an integrated circuit, a computer program, or a non-transitory recording medium such as a computer-readable CD-ROM, or may be implemented by any combination of a system, an apparatus, a method, an integrated circuit, a computer program, and a recording medium.
  • Embodiment 1: First, an outline of Embodiment 1 will be described as an example of an encoding device and a decoding device to which the processing and/or configuration described in each aspect of the present disclosure, described later, can be applied. However, Embodiment 1 is merely an example of an encoding device and a decoding device to which the processing and/or configuration described in each aspect of the present disclosure can be applied, and the processing and/or configuration described in each aspect of the present disclosure can also be implemented in an encoding device and a decoding device different from those of Embodiment 1.
  • When the processing and/or configuration described in each aspect of the present disclosure is applied to Embodiment 1, this may be done, for example, by: replacing a component of the encoding device or decoding device of Embodiment 1 that corresponds to a component described in each aspect of the present disclosure with the component described in that aspect; combining a component of the encoding device or decoding device of Embodiment 1 with a component that has a part of the functions of, or performs a part of the processing of, a component described in each aspect of the present disclosure; or replacing a process of the encoding device or decoding device of Embodiment 1 that corresponds to a process described in each aspect of the present disclosure with the process described in that aspect.
  • the manner of implementing the processing and / or configuration described in each aspect of the present disclosure is not limited to the above example.
  • The processing and/or configuration described in each aspect of the present disclosure may also be implemented in a device used for a purpose different from that of the moving image/image encoding device or the moving image/image decoding device disclosed in Embodiment 1, and the processing and/or configuration described in each aspect may be implemented alone. Further, the processes and/or configurations described in different aspects may be implemented in combination.
  • FIG. 1 is a block diagram showing a functional configuration of an encoding device 100 according to Embodiment 1.
  • the encoding device 100 is a moving image / image encoding device that encodes a moving image / image in block units.
  • The encoding device 100 is a device that encodes an image in units of blocks, and includes a division unit 102, a subtraction unit 104, a transform unit 106, a quantization unit 108, an entropy encoding unit 110, an inverse quantization unit 112, an inverse transform unit 114, an addition unit 116, a block memory 118, a loop filter unit 120, a frame memory 122, an intra prediction unit 124, an inter prediction unit 126, and a prediction control unit 128.
  • the encoding device 100 is realized by, for example, a general-purpose processor and a memory.
  • For example, when a software program stored in the memory is executed by the processor, the processor functions as the division unit 102, the subtraction unit 104, the transform unit 106, the quantization unit 108, the entropy encoding unit 110, the inverse quantization unit 112, the inverse transform unit 114, the addition unit 116, the loop filter unit 120, the intra prediction unit 124, the inter prediction unit 126, and the prediction control unit 128.
  • Alternatively, the encoding device 100 may be realized as one or more dedicated electronic circuits corresponding to the division unit 102, the subtraction unit 104, the transform unit 106, the quantization unit 108, the entropy encoding unit 110, the inverse quantization unit 112, the inverse transform unit 114, the addition unit 116, the loop filter unit 120, the intra prediction unit 124, the inter prediction unit 126, and the prediction control unit 128.
  • the division unit 102 divides each picture included in the input moving image into a plurality of blocks, and outputs each block to the subtraction unit 104.
  • The dividing unit 102 first divides a picture into blocks of a fixed size (for example, 128×128).
  • This fixed size block may be referred to as a coding tree unit (CTU).
  • The dividing unit 102 divides each of the fixed-size blocks into blocks of a variable size (for example, 64×64 or less) based on recursive quadtree and/or binary tree block division.
  • This variable size block may be referred to as a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
  • CUs, PUs, and TUs do not need to be distinguished, and some or all blocks in a picture may be the processing units of the CUs, PUs, and TUs.
  • FIG. 2 is a diagram illustrating an example of block division according to the first embodiment.
  • a solid line represents a block boundary obtained by quadtree block division
  • a broken line represents a block boundary obtained by binary tree block division.
  • The block 10 is a square block of 128×128 pixels (128×128 block).
  • The 128×128 block 10 is first divided into four square 64×64 blocks (quad tree block division).
  • The upper left 64×64 block is further vertically divided into two rectangular 32×64 blocks, and the left 32×64 block is further vertically divided into two rectangular 16×64 blocks (binary tree block division). As a result, the upper left 64×64 block is divided into two 16×64 blocks 11 and 12 and a 32×64 block 13.
  • The upper right 64×64 block is horizontally divided into two rectangular 64×32 blocks 14 and 15 (binary tree block division).
  • The lower left 64×64 block is divided into four square 32×32 blocks (quad tree block division).
  • The upper left block and the lower right block of the four 32×32 blocks are further divided.
  • The upper left 32×32 block is vertically divided into two rectangular 16×32 blocks, and the right 16×32 block is further horizontally divided into two 16×16 blocks (binary tree block division).
  • The lower right 32×32 block is horizontally divided into two 32×16 blocks (binary tree block division).
  • As a result, the lower left 64×64 block is divided into a 16×32 block 16, two 16×16 blocks 17 and 18, two 32×32 blocks 19 and 20, and two 32×16 blocks 21 and 22. The lower right 64×64 block 23 is not divided.
  • the block 10 is divided into thirteen variable-size blocks 11 to 23 based on recursive quadtree and binary tree block division.
  • Such division may be referred to as QTBT (quad tree plus binary tree) division.
  • one block is divided into four or two blocks (quadtree or binary tree block division), but the division is not limited to this.
  • one block may be divided into three blocks (triple tree block division).
  • A division including such a ternary tree block division may be referred to as MBT (multi type tree) division.
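  • As an illustration of the recursive quad tree and binary tree division described above, the following sketch is a simplified, hypothetical model; the actual division unit 102 chooses splits by rate-distortion search and signals them in the bitstream.

```python
# Illustrative sketch of QTBT-style recursive division of a 128x128 CTU.
# The split decisions are supplied by a callback; a real encoder chooses
# them by rate-distortion optimization.
def partition(x, y, w, h, decide_split, leaves):
    mode = decide_split(x, y, w, h)  # 'quad', 'vert', 'horz', or None
    if mode == 'quad':               # quad-tree split into four quadrants
        hw, hh = w // 2, h // 2
        for dy in (0, hh):
            for dx in (0, hw):
                partition(x + dx, y + dy, hw, hh, decide_split, leaves)
    elif mode == 'vert':             # binary split into two w/2 x h blocks
        partition(x, y, w // 2, h, decide_split, leaves)
        partition(x + w // 2, y, w // 2, h, decide_split, leaves)
    elif mode == 'horz':             # binary split into two w x h/2 blocks
        partition(x, y, w, h // 2, decide_split, leaves)
        partition(x, y + h // 2, w, h // 2, decide_split, leaves)
    else:                            # no further split: a CU/PU/TU leaf
        leaves.append((x, y, w, h))

# Example: quad-split the CTU once, then stop (yields four 64x64 blocks).
leaves = []
partition(0, 0, 128, 128,
          lambda x, y, w, h: 'quad' if w == 128 else None, leaves)
print(leaves)  # [(0, 0, 64, 64), (64, 0, 64, 64), (0, 64, 64, 64), (64, 64, 64, 64)]
```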
  • the subtraction unit 104 subtracts a prediction signal (prediction sample) from an original signal (original sample) in units of blocks divided by the division unit 102. That is, the subtraction unit 104 calculates a prediction error (also referred to as a residual) of an encoding target block (hereinafter, referred to as a current block). Then, the subtraction unit 104 outputs the calculated prediction error to the conversion unit 106.
  • the original signal is an input signal of the encoding device 100, and is a signal (for example, a luminance (luma) signal and two color difference (chroma) signals) representing an image of each picture constituting a moving image.
  • a signal representing an image may be referred to as a sample.
  • the transform unit 106 transforms the prediction error in the spatial domain into transform coefficients in the frequency domain, and outputs the transform coefficients to the quantization unit 108. Specifically, the transform unit 106 performs a predetermined discrete cosine transform (DCT) or discrete sine transform (DST) on a prediction error in a spatial domain, for example.
  • Note that the transform unit 106 may adaptively select a transform type from a plurality of transform types, and convert the prediction error into transform coefficients using a transform basis function corresponding to the selected transform type.
  • Such a transform is sometimes called EMT (explicit multiple core transform) or AMT (adaptive multiple transform).
  • the plurality of conversion types include, for example, DCT-II, DCT-V, DCT-VIII, DST-I and DST-VII.
  • FIG. 3 is a table showing conversion basis functions corresponding to each conversion type. In FIG. 3, N indicates the number of input pixels. Selection of a conversion type from among the plurality of conversion types may depend on, for example, the type of prediction (intra prediction and inter prediction) or may depend on the intra prediction mode.
  • Information indicating whether to apply such EMT or AMT (for example, called an AMT flag) and information indicating the selected transform type are signalized at the CU level.
  • the signalization of these pieces of information need not be limited to the CU level, but may be another level (for example, a sequence level, a picture level, a slice level, a tile level, or a CTU level).
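  • As a concrete illustration of the adaptive transform selection described above, the sketch below builds two of the listed basis functions, DCT-II and DST-VII, from their well-known definitions and applies a separable 2-D transform to a prediction-error block; the function names and the selection shown are illustrative assumptions, not part of this disclosure.

```python
import numpy as np

def dct2_basis(N):
    # Orthonormal DCT-II basis matrix (rows are basis functions).
    T = np.zeros((N, N))
    for i in range(N):
        w0 = np.sqrt(0.5) if i == 0 else 1.0
        for j in range(N):
            T[i, j] = w0 * np.sqrt(2.0 / N) * np.cos(np.pi * (2 * j + 1) * i / (2 * N))
    return T

def dst7_basis(N):
    # DST-VII basis matrix.
    T = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            T[i, j] = np.sqrt(4.0 / (2 * N + 1)) * np.sin(np.pi * (2 * i + 1) * (j + 1) / (2 * N + 1))
    return T

def forward_transform(residual, basis):
    # Separable 2-D transform: apply the basis to rows and columns.
    return basis @ residual @ basis.T

residual = np.random.randn(4, 4)                     # prediction error of a 4x4 block
coeff = forward_transform(residual, dst7_basis(4))   # e.g. DST-VII chosen adaptively
```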
  • The transform unit 106 may re-transform the transform coefficients (transform result). Such re-transformation may be referred to as AST (adaptive secondary transform) or NSST (non-separable secondary transform). For example, the transform unit 106 performs re-transformation for each sub-block (for example, a 4×4 sub-block) included in a block of transform coefficients corresponding to an intra prediction error. Information indicating whether to apply the NSST and information regarding the transformation matrix used for the NSST are signalized at the CU level. The signalization of these pieces of information need not be limited to the CU level, but may be another level (for example, a sequence level, a picture level, a slice level, a tile level, or a CTU level).
  • Here, the Separable transform is a method of performing the transform a plurality of times by separating the input in each direction according to the number of dimensions of the input, and the Non-Separable transform is a method of performing the transform collectively by regarding two or more dimensions of a multidimensional input together as one dimension.
  • For example, as one example of the Non-Separable transform, when the input is a 4×4 block, the block is regarded as one array having 16 elements, and a transform process that applies a 16×16 transformation matrix to the array is performed.
  • Similarly, a method in which a 4×4 input block is regarded as one array having 16 elements and a Givens rotation is then performed on the array a plurality of times (Hypercube Givens Transform) is also an example of the Non-Separable transform.
  • the quantization unit 108 quantizes the transform coefficient output from the transform unit 106. Specifically, the quantization unit 108 scans the transform coefficients of the current block in a predetermined scanning order, and quantizes the transform coefficients based on a quantization parameter (QP) corresponding to the scanned transform coefficients. Then, the quantization unit 108 outputs the quantized transform coefficients of the current block (hereinafter, referred to as quantization coefficients) to the entropy coding unit 110 and the inverse quantization unit 112.
  • The predetermined scanning order is an order for quantization / inverse quantization of transform coefficients.
  • For example, the predetermined scanning order is defined in ascending order of frequency (from low frequency to high frequency) or in descending order (from high frequency to low frequency).
  • the quantization parameter is a parameter that defines a quantization step (quantization width). For example, if the value of the quantization parameter increases, the quantization step also increases. That is, as the value of the quantization parameter increases, the quantization error increases.
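  • The relation between the quantization parameter and the quantization step can be illustrated as follows; the QP-to-step mapping shown (the step roughly doubling every 6 QP) follows the convention of H.264/H.265 and is given here only as an illustrative assumption.

```python
import numpy as np

# Illustrative sketch of scalar quantization controlled by a quantization
# parameter (QP): a larger QP gives a larger step and a larger quantization error.
def q_step(qp):
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeffs, qp):
    return np.round(coeffs / q_step(qp)).astype(int)

def dequantize(levels, qp):
    return levels * q_step(qp)

coeffs = np.array([[52.0, -3.1], [4.7, 0.4]])  # transform coefficients of a block
levels = quantize(coeffs, qp=32)               # quantized coefficients sent to entropy coding
recon = dequantize(levels, qp=32)              # inverse-quantized coefficients (with error)
```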
  • the entropy coding unit 110 generates a coded signal (coded bit stream) by performing variable-length coding on the quantization coefficient input from the quantization unit 108. Specifically, for example, the entropy encoding unit 110 binarizes the quantization coefficient and arithmetically encodes the binary signal.
  • the inverse quantization unit 112 inversely quantizes the quantization coefficient input from the quantization unit 108. Specifically, the inverse quantization unit 112 inversely quantizes the quantization coefficient of the current block in a predetermined scanning order. Then, the inverse quantization unit 112 outputs the inversely quantized transform coefficient of the current block to the inverse transformation unit 114.
  • the inverse transform unit 114 restores a prediction error by inversely transforming the transform coefficient input from the inverse quantization unit 112. Specifically, the inverse transform unit 114 restores the prediction error of the current block by performing an inverse transform corresponding to the transform by the transform unit 106 on the transform coefficient. Then, inverse transform section 114 outputs the restored prediction error to adder section 116.
  • the restored prediction error does not match the prediction error calculated by the subtraction unit 104 because information is lost due to quantization. That is, the restored prediction error includes a quantization error.
  • the adder 116 reconstructs the current block by adding the prediction error input from the inverse converter 114 and the prediction sample input from the prediction controller 128. Then, the adding unit 116 outputs the reconstructed block to the block memory 118 and the loop filter unit 120.
  • the reconstructed block is sometimes called a local decoding block.
  • the block memory 118 is a storage unit for storing a block that is referred to in intra prediction and is in a current picture (hereinafter, referred to as a current picture). Specifically, the block memory 118 stores the reconstructed block output from the adding unit 116.
  • the loop filter unit 120 applies a loop filter to the block reconstructed by the adding unit 116, and outputs the reconstructed block that has been filtered to the frame memory 122.
  • the loop filter is a filter (in-loop filter) used in the encoding loop, and includes, for example, a deblocking filter (DF), a sample adaptive offset (SAO), an adaptive loop filter (ALF), and the like.
  • In the ALF, a least squares error filter for removing coding distortion is applied; for example, one filter selected from a plurality of filters based on the direction and activity of the local gradient is applied to each 2×2 sub-block in the current block.
  • Specifically, first, each sub-block (for example, a 2×2 sub-block) is classified into one of a plurality of classes (for example, 15 or 25 classes).
  • The gradient direction value D is derived, for example, by comparing gradients in a plurality of directions (for example, the horizontal, vertical, and two diagonal directions).
  • The gradient activity value A is derived, for example, by adding the gradients in the plurality of directions and quantizing the addition result.
  • a filter for a sub-block is determined from a plurality of filters based on the result of such classification.
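  • A minimal sketch of such a gradient-based classification is given below; the thresholds, the number of classes, and the helper names are hypothetical and only illustrate how a direction value D and an activity value A can select one of the signaled filters.

```python
import numpy as np

# Illustrative ALF-style sub-block classification: compute gradients of the
# neighborhood of a 2x2 sub-block in the horizontal, vertical, and two
# diagonal directions, derive a direction value D and a quantized activity
# value A, and map (D, A) to a filter class index.
def classify_subblock(patch):
    gy, gx = np.gradient(patch.astype(float))
    g_h = np.abs(gx).sum()                 # horizontal gradient
    g_v = np.abs(gy).sum()                 # vertical gradient
    g_d0 = np.abs(gx + gy).sum()           # one diagonal
    g_d1 = np.abs(gx - gy).sum()           # the other diagonal
    D = int(np.argmax([g_h, g_v, g_d0, g_d1]))            # dominant direction (0..3)
    A = min(int((g_h + g_v) / (patch.size * 8)), 4)        # activity quantized to 0..4
    return D * 5 + A                       # e.g. 4 directions x 5 activities = 20 classes

patch = np.random.randint(0, 256, (6, 6))
filter_index = classify_subblock(patch)    # selects one of the signaled filters
```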
  • FIG. 4A to 4C are views showing a plurality of examples of the shape of the filter used in the ALF.
  • FIG. 4A shows a 5×5 diamond shape filter,
  • FIG. 4B shows a 7×7 diamond shape filter, and
  • FIG. 4C shows a 9×9 diamond shape filter.
  • Information indicating the shape of the filter is signalized at the picture level.
  • the signalization of the information indicating the shape of the filter need not be limited to the picture level, but may be another level (for example, a sequence level, a slice level, a tile level, a CTU level, or a CU level).
  • the ON / OFF of the ALF is determined at the picture level or the CU level, for example. For example, it is determined whether or not to apply ALF at the CU level for luminance, and whether or not to apply ALF at the picture level for color difference.
  • Information indicating ON / OFF of ALF is signaled at a picture level or a CU level.
  • the signalization of the information indicating ON / OFF of the ALF does not need to be limited to the picture level or the CU level, and may be at another level (for example, a sequence level, a slice level, a tile level, or a CTU level). Good.
  • a set of coefficients for a plurality of selectable filters is signaled at the picture level.
  • the signalization of the coefficient set need not be limited to the picture level, but may be another level (for example, a sequence level, a slice level, a tile level, a CTU level, a CU level, or a sub-block level).
  • the frame memory 122 is a storage unit for storing reference pictures used for inter prediction, and may be called a frame buffer. Specifically, the frame memory 122 stores the reconstructed blocks filtered by the loop filter unit 120.
  • The intra prediction unit 124 generates a prediction signal (intra prediction signal) by performing intra prediction (also referred to as intra-picture prediction) of the current block with reference to a block in the current picture stored in the block memory 118. Specifically, the intra prediction unit 124 generates an intra prediction signal by performing intra prediction with reference to samples (for example, luminance values and color difference values) of a block adjacent to the current block, and outputs the intra prediction signal to the prediction control unit 128.
  • the intra prediction unit 124 performs intra prediction using one of a plurality of intra prediction modes defined in advance.
  • the plurality of intra prediction modes include one or more non-directional prediction modes and a plurality of directional prediction modes.
  • The one or more non-directional prediction modes include, for example, the Planar prediction mode and the DC prediction mode defined in the H.265/HEVC (High-Efficiency Video Coding) standard (Non-Patent Document 1).
  • The plurality of directional prediction modes include, for example, the prediction modes in 33 directions defined in the H.265/HEVC standard. Note that the plurality of directional prediction modes may further include prediction modes in 32 directions in addition to the 33 directions (a total of 65 directional prediction modes).
  • FIG. 5A is a diagram illustrating 67 intra prediction modes (two non-directional prediction modes and 65 directional prediction modes) in intra prediction. The solid arrows indicate the 33 directions defined in the H.265/HEVC standard, and the dashed arrows indicate the added 32 directions.
  • In intra prediction of a chrominance block, a luminance block may be referred to. That is, the chrominance component of the current block may be predicted based on the luminance component of the current block.
  • Such intra prediction may be referred to as CCLM (cross-component linear model) prediction.
  • Such an intra prediction mode of a chrominance block that refers to a luminance block (for example, referred to as a CCLM mode) may be added as one of the intra prediction modes of a chrominance block.
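  • The following sketch illustrates CCLM-style prediction of a chrominance block from the luminance component; a real codec derives the linear model with an integer, division-free procedure, so the least-squares fit shown here is only an illustrative assumption.

```python
import numpy as np

# Illustrative CCLM-style prediction: derive chroma ≈ alpha * luma + beta
# from the reconstructed neighboring samples, then apply the model to the
# (downsampled) luma samples of the current block.
def cclm_predict(neigh_luma, neigh_chroma, cur_luma):
    x = neigh_luma.astype(float).ravel()
    y = neigh_chroma.astype(float).ravel()
    alpha, beta = np.polyfit(x, y, 1)      # slope and intercept of the linear model
    return alpha * cur_luma + beta

neigh_luma = np.array([100, 120, 140, 160])     # neighboring reconstructed luma
neigh_chroma = np.array([60, 66, 72, 78])       # corresponding reconstructed chroma
cur_luma = np.array([[110, 130], [150, 170]])   # luma of the current block
pred_chroma = cclm_predict(neigh_luma, neigh_chroma, cur_luma)
```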
  • The intra prediction unit 124 may correct the pixel value after the intra prediction based on the gradient of the reference pixel in the horizontal / vertical direction. Intra prediction with such a correction is sometimes called PDPC (position dependent intra prediction combination). Information indicating whether or not PDPC is applied (for example, called a PDPC flag) is signaled at, for example, the CU level.
  • the signalization of this information need not be limited to the CU level, but may be another level (for example, a sequence level, a picture level, a slice level, a tile level, or a CTU level).
  • The inter prediction unit 126 generates a prediction signal (inter prediction signal) by performing inter prediction (also referred to as inter-picture prediction) of the current block with reference to a reference picture that is stored in the frame memory 122 and is different from the current picture.
  • The inter prediction is performed in units of a current block or a sub-block (for example, a 4×4 block) in the current block.
  • For example, the inter prediction unit 126 performs a motion search (motion estimation) on the current block or the sub-block in the reference picture.
  • the inter prediction unit 126 generates an inter prediction signal of the current block or the sub block by performing motion compensation using the motion information (for example, a motion vector) obtained by the motion search.
  • the inter prediction unit 126 outputs the generated inter prediction signal to the prediction control unit 128.
  • the motion information used for motion compensation is signalized.
  • A predicted motion vector (motion vector predictor) may be used. That is, the difference between the motion vector and the predicted motion vector may be signalized.
  • the inter prediction signal may be generated using not only the motion information of the current block obtained by the motion search but also the motion information of the adjacent block. Specifically, an inter-prediction signal is generated for each sub-block in the current block by weighting and adding a prediction signal based on motion information obtained by a motion search and a prediction signal based on motion information of an adjacent block. May be done.
  • Such an inter prediction (motion compensation) mode is sometimes called an OBMC (overlapped block motion compensation) mode.
  • In the OBMC mode, information indicating the size of a sub-block for the OBMC (for example, referred to as the OBMC block size) is signalized at the sequence level.
  • Information indicating whether to apply the OBMC mode (for example, referred to as an OBMC flag) is signaled at the CU level.
  • Note that the level of signalization of these pieces of information need not be limited to the sequence level and the CU level, and may be another level (for example, a picture level, a slice level, a tile level, a CTU level, or a sub-block level).
  • FIG. 5B and FIG. 5C are a flowchart and a conceptual diagram for explaining the outline of the predicted image correction process by the OBMC process.
  • a predicted image (Pred) by normal motion compensation is obtained using the motion vector (MV) assigned to the current block.
  • Next, the motion vector (MV_L) of the encoded left adjacent block is applied to the current block to obtain a predicted image (Pred_L), and the first correction of the predicted image is performed by weighting and superimposing the predicted image and Pred_L.
  • Similarly, the motion vector (MV_U) of the encoded upper adjacent block is applied to the current block to obtain a predicted image (Pred_U), and the second correction of the predicted image is performed by weighting and superimposing the predicted image subjected to the first correction and Pred_U; this is used as the final predicted image.
  • Although the two-stage correction method using the left adjacent block and the upper adjacent block has been described here, a configuration in which the correction is performed more times than two, also using the right adjacent block and the lower adjacent block, is possible.
  • the region to be superimposed may not be the pixel region of the entire block, but may be only a partial region near the block boundary.
  • Although the predicted image correction processing based on one reference picture has been described here, the same applies to the case where a predicted image is corrected based on a plurality of reference pictures; after obtaining a corrected predicted image from each reference picture, the obtained predicted images are further superimposed to obtain the final predicted image.
  • the processing target block may be a prediction block unit or a sub-block unit obtained by further dividing the prediction block.
  • the encoding device determines whether the encoding target block belongs to an area with complicated motion, and sets a value 1 as obmc_flag if the block to be encoded belongs to an area with complicated motion. Encoding is performed by applying the OBMC process, and if it does not belong to a region with a complicated motion, the value is set to 0 as obmc_flag and encoding is performed without applying the OBMC process.
  • the decoding device decodes obmc_flag described in the stream, and switches whether or not to apply the OBMC process according to the value to perform decoding.
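  • The two-stage weighted superposition described above can be sketched as follows; the blending weights and the width of the overlapped region are hypothetical values chosen for illustration.

```python
import numpy as np

# Illustrative OBMC correction: Pred (current block's MV) is corrected by
# weighted superposition with Pred_L (left neighbor's MV applied to the
# current block) and then Pred_U (upper neighbor's MV), only near the
# corresponding block boundaries.
def obmc_blend(pred, pred_l, pred_u, overlap=2, w_neighbor=0.25):
    out = pred.astype(float).copy()
    # First correction: blend the left-most columns with Pred_L.
    out[:, :overlap] = (1 - w_neighbor) * out[:, :overlap] + w_neighbor * pred_l[:, :overlap]
    # Second correction: blend the top-most rows with Pred_U.
    out[:overlap, :] = (1 - w_neighbor) * out[:overlap, :] + w_neighbor * pred_u[:overlap, :]
    return out  # final predicted image

pred   = np.full((8, 8), 100.0)
pred_l = np.full((8, 8), 80.0)
pred_u = np.full((8, 8), 120.0)
final_pred = obmc_blend(pred, pred_l, pred_u)
```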
  • the motion information may be derived on the decoding device side without being signalized.
  • For example, a merge mode defined by the H.265/HEVC standard may be used.
  • the motion information may be derived by performing a motion search on the decoding device side. In this case, the motion search is performed without using the pixel values of the current block.
  • the mode in which a motion search is performed on the decoding device side may be referred to as a PMMVD (pattern matched motion vector derivation) mode or a FRUC (frame rate up-conversion) mode.
  • FIG. 5D shows an example of the FRUC processing.
  • First, a list of a plurality of candidates each having a predicted motion vector (the list may be shared with the merge list) is generated by referring to the motion vectors of encoded blocks spatially or temporally adjacent to the current block.
  • the best candidate MV is selected from the plurality of candidate MVs registered in the candidate list. For example, the evaluation value of each candidate included in the candidate list is calculated, and one candidate is selected based on the evaluation value.
  • a motion vector for the current block is derived based on the selected candidate motion vector.
  • the motion vector of the selected candidate (best candidate MV) is directly derived as a motion vector for the current block.
  • Alternatively, the motion vector for the current block may be derived by performing pattern matching in a peripheral region of the position in the reference picture corresponding to the selected candidate motion vector. That is, the area around the best candidate MV may be searched in the same manner using the evaluation value, and if there is an MV having a better evaluation value, the best candidate MV may be updated to that MV and used as the final MV of the current block. A configuration in which this processing is not performed is also possible.
  • the evaluation value is calculated by calculating a difference value of a reconstructed image by pattern matching between a region in a reference picture corresponding to a motion vector and a predetermined region.
  • the evaluation value may be calculated using other information in addition to the difference value.
  • the first pattern matching or the second pattern matching is used.
  • the first pattern matching and the second pattern matching may be referred to as bilateral matching and template matching, respectively.
  • In the first pattern matching, pattern matching is performed between two blocks that are in two different reference pictures and are along the motion trajectory of the current block. Therefore, in the first pattern matching, an area in another reference picture along the motion trajectory of the current block is used as the predetermined area for calculating the above-described candidate evaluation value.
  • FIG. 6 is a diagram for explaining an example of pattern matching (bilateral matching) between two blocks along a motion trajectory.
  • As shown in FIG. 6, in the first pattern matching, two motion vectors (MV0, MV1) are derived by searching for the best-matching pair among pairs of two blocks that are in the two different reference pictures (Ref0, Ref1) and are along the motion trajectory of the current block (Cur block).
  • Specifically, for the current block, the difference between the reconstructed image at the position in the first encoded reference picture (Ref0) designated by a candidate MV and the reconstructed image at the position in the second encoded reference picture (Ref1) designated by a symmetric MV obtained by scaling the candidate MV according to the display time interval is derived, and the evaluation value is calculated using the obtained difference value.
  • the candidate MV having the best evaluation value among the plurality of candidate MVs may be selected as the final MV.
  • Under the assumption of a continuous motion trajectory, the motion vectors (MV0, MV1) pointing to the two reference blocks are proportional to the temporal distances (TD0, TD1) between the current picture (Cur Pic) and the two reference pictures (Ref0, Ref1). For example, when the current picture is temporally located between the two reference pictures and the temporal distances from the current picture to the two reference pictures are equal, reflection-symmetric bidirectional motion vectors are derived in the first pattern matching.
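  • The evaluation of a candidate MV in the first pattern matching (bilateral matching) can be sketched as follows; picture access, clipping, and the cost measure (a sum of absolute differences) are simplified, and the helper names are assumptions.

```python
import numpy as np

# Illustrative bilateral-matching cost: a candidate MV pointing into Ref0 is
# mirrored and scaled by the temporal distances TD0, TD1 to point into Ref1,
# and the evaluation value is the SAD between the two reference blocks.
def block(pic, x, y, size):
    return pic[y:y + size, x:x + size].astype(float)

def bilateral_cost(ref0, ref1, x, y, size, mv, td0, td1):
    mv0x, mv0y = mv
    scale = td1 / td0                        # symmetric MV scaled by temporal distance
    mv1x, mv1y = -mv0x * scale, -mv0y * scale
    b0 = block(ref0, int(x + mv0x), int(y + mv0y), size)
    b1 = block(ref1, int(x + mv1x), int(y + mv1y), size)
    return np.abs(b0 - b1).sum()             # evaluation value (lower is better)

ref0 = np.random.randint(0, 256, (64, 64))
ref1 = np.random.randint(0, 256, (64, 64))
candidates = [(2, 0), (1, 1), (0, 0)]
best_mv = min(candidates, key=lambda mv: bilateral_cost(ref0, ref1, 16, 16, 8, mv, td0=1, td1=1))
```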
  • In the second pattern matching, pattern matching is performed between a template in the current picture (a block adjacent to the current block in the current picture (for example, an upper and/or left adjacent block)) and a block in the reference picture. Therefore, in the second pattern matching, a block adjacent to the current block in the current picture is used as the predetermined area for calculating the above-described candidate evaluation value.
  • FIG. 7 is a diagram for explaining an example of pattern matching (template matching) between a template in a current picture and a block in a reference picture.
  • As shown in FIG. 7, in the second pattern matching, the motion vector of the current block is derived by searching the reference picture (Ref0) for the block that best matches the block adjacent to the current block (Cur block) in the current picture (Cur Pic).
  • Information indicating whether or not to apply such a FRUC mode (for example, called a FRUC flag) is signaled at the CU level. When the FRUC mode is applied (for example, when the FRUC flag is true), information indicating the pattern matching method (the first pattern matching or the second pattern matching) (for example, called a FRUC mode flag) is signaled at the CU level.
  • Next, a mode for deriving a motion vector based on a model assuming constant velocity linear motion will be described. This mode is sometimes called a BIO (bi-directional optical flow) mode.
  • FIG. 8 is a diagram for explaining a model assuming constant velocity linear motion.
  • Here, (vx, vy) indicates a velocity vector, and τ0 and τ1 indicate the temporal distances between the current picture (Cur Pic) and the two reference pictures (Ref0, Ref1), respectively.
  • (MVx0, MVy0) indicates the motion vector corresponding to the reference picture Ref0, and (MVx1, MVy1) indicates the motion vector corresponding to the reference picture Ref1.
  • This optical flow equation includes (i) the time derivative of the luminance value, (ii) the product of the horizontal velocity and the horizontal component of the spatial gradient of the reference image, and (iii) the vertical velocity and the spatial gradient of the reference image. Indicates that the sum of the product of the vertical components of and is equal to zero. Based on a combination of the optical flow equation and Hermite interpolation, a block-by-block motion vector obtained from a merge list or the like is corrected in pixel units.
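  • The optical flow equation itself is not reproduced in this text; consistent with the description above, and writing $I^{(k)}$ for the luminance value of the motion-compensated reference image $k$, it can be reconstructed as
  $$\frac{\partial I^{(k)}}{\partial t} + v_x \frac{\partial I^{(k)}}{\partial x} + v_y \frac{\partial I^{(k)}}{\partial y} = 0 .$$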
  • the motion vector may be derived on the decoding device side by a method different from the method for deriving the motion vector based on a model assuming uniform linear motion. For example, a motion vector may be derived for each sub-block based on the motion vectors of a plurality of adjacent blocks.
  • This mode may be referred to as an affine motion compensated prediction (affine ⁇ motion ⁇ compensation ⁇ prediction) mode.
  • FIG. 9A is a diagram for describing derivation of a motion vector in sub-block units based on motion vectors of a plurality of adjacent blocks.
  • Here, the current block includes sixteen 4×4 sub-blocks.
  • In this mode, the motion vector v0 of the upper-left corner control point of the current block is derived based on the motion vectors of adjacent blocks, and the motion vector v1 of the upper-right corner control point of the current block is derived based on the motion vectors of adjacent sub-blocks.
  • Then, the motion vector (vx, vy) of each sub-block in the current block is derived by the following equation (2).
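  • Equation (2) itself is not reproduced in this text. A standard form of the two-control-point affine model consistent with the surrounding description (a reconstruction, with $v_0=(v_{0x},v_{0y})$ and $v_1=(v_{1x},v_{1y})$ the control-point motion vectors and $w$ the weighting factor, commonly the horizontal distance between the two control points) is
  $$v_x = \frac{v_{1x}-v_{0x}}{w}\,x - \frac{v_{1y}-v_{0y}}{w}\,y + v_{0x}, \qquad v_y = \frac{v_{1y}-v_{0y}}{w}\,x + \frac{v_{1x}-v_{0x}}{w}\,y + v_{0y} .$$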
  • x and y indicate the horizontal position and vertical position of the sub-block, respectively, and w indicates a predetermined weighting factor.
  • the affine motion compensation prediction mode may include several modes in which the method of deriving the motion vector of the upper left and upper right control points is different.
  • Information indicating such an affine motion compensation prediction mode (for example, called an affine flag) is signalized at the CU level.
  • the signalization of the information indicating the affine motion compensation prediction mode does not need to be limited to the CU level, but may be performed at another level (for example, a sequence level, a picture level, a slice level, a tile level, a CTU level, or a sub-block level). ).
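  • Tying the above together with the uni-prediction restriction proposed in this disclosure, the following sketch derives per-sub-block motion vectors from the two control points and performs motion compensation from a single reference picture only (one reference list); the helper names, the nearest-integer sampling, and the block sizes are hypothetical simplifications, not the actual implementation.

```python
import numpy as np

# Illustrative uni-prediction affine motion compensation: per-sub-block MVs
# are derived from the control-point MVs v0 (upper-left) and v1 (upper-right)
# with the 4-parameter model, and each sub-block is predicted from one
# reference picture only (no second reference list).
def affine_subblock_mvs(v0, v1, block_w, block_h, sub=4):
    v0x, v0y = v0
    v1x, v1y = v1
    mvs = {}
    for y in range(sub // 2, block_h, sub):          # sub-block centers
        for x in range(sub // 2, block_w, sub):
            vx = (v1x - v0x) / block_w * x - (v1y - v0y) / block_w * y + v0x
            vy = (v1y - v0y) / block_w * x + (v1x - v0x) / block_w * y + v0y
            mvs[(x // sub, y // sub)] = (vx, vy)
    return mvs

def uni_pred_affine_mc(ref_pic, pos, v0, v1, block_w=16, block_h=16, sub=4):
    bx, by = pos
    pred = np.zeros((block_h, block_w))
    for (sx, sy), (vx, vy) in affine_subblock_mvs(v0, v1, block_w, block_h, sub).items():
        x0 = int(round(bx + sx * sub + vx))          # nearest-integer motion compensation
        y0 = int(round(by + sy * sub + vy))
        pred[sy * sub:(sy + 1) * sub, sx * sub:(sx + 1) * sub] = \
            ref_pic[y0:y0 + sub, x0:x0 + sub]
    return pred                                       # uni-predicted block

ref = np.random.randint(0, 256, (128, 128)).astype(float)
pred_block = uni_pred_affine_mc(ref, pos=(32, 32), v0=(1.0, 0.5), v1=(2.0, 0.5))
```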
  • the prediction control unit 128 selects one of the intra prediction signal and the inter prediction signal, and outputs the selected signal to the subtraction unit 104 and the addition unit 116 as a prediction signal.
  • FIG. 9B is a diagram for describing the outline of the motion vector derivation process in the merge mode.
  • a predicted MV list in which predicted MV candidates are registered is generated.
  • Examples of the prediction MV candidates include: a spatially adjacent prediction MV, which is the MV of one of a plurality of encoded blocks spatially located around the current block; a temporally adjacent prediction MV, which is the MV of a block near the position obtained by projecting the position of the current block onto an encoded reference picture; a combined prediction MV, which is an MV generated by combining the MV values of the spatially adjacent prediction MV and the temporally adjacent prediction MV; and a zero prediction MV, which is an MV having a value of zero.
  • one prediction MV is selected from a plurality of prediction MVs registered in the prediction MV list, and is determined as the MV of the encoding target block.
  • Then, the variable-length coding unit describes and encodes, in the stream, a signal merge_idx indicating which prediction MV has been selected.
  • Note that the prediction MVs registered in the prediction MV list described with reference to FIG. 9B are an example; the number of prediction MVs may be different from that in the figure, some types of the prediction MVs in the figure may not be included, or prediction MVs of types other than those in the figure may be added.
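  • The construction of the prediction MV candidate list and the selection by merge_idx can be sketched as follows; the candidate order, the combination rule, and the list size shown are hypothetical.

```python
# Illustrative merge-mode candidate list: spatially adjacent MVs, a temporally
# adjacent MV, a combined MV, and zero MVs, with one candidate selected by the
# signaled index merge_idx.
def build_merge_list(spatial_mvs, temporal_mv, max_size=6):
    cands = []
    for mv in spatial_mvs:                       # spatially adjacent prediction MVs
        if mv is not None and mv not in cands:
            cands.append(mv)
    if temporal_mv is not None and temporal_mv not in cands:
        cands.append(temporal_mv)                # temporally adjacent prediction MV
    if len(cands) >= 2:                          # combined prediction MV (simple average here)
        combined = ((cands[0][0] + cands[1][0]) / 2, (cands[0][1] + cands[1][1]) / 2)
        if combined not in cands:
            cands.append(combined)
    while len(cands) < max_size:                 # pad with zero prediction MVs
        cands.append((0.0, 0.0))
    return cands[:max_size]

merge_list = build_merge_list([(1, 0), (1, 0), (2, -1), None], (0, 2))
merge_idx = 2                                    # index signaled in the stream
mv_of_current_block = merge_list[merge_idx]
```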
  • the final MV may be determined by performing a DMVR process described later using the MV of the encoding target block derived in the merge mode.
  • FIG. 9C is a conceptual diagram for explaining the outline of the DMVR process.
  • First, the optimal MVP set for the processing target block is regarded as a candidate MV.
  • Then, according to the candidate MV, reference pixels are obtained from a first reference picture, which is a processed picture in the L0 direction, and from a second reference picture, which is a processed picture in the L1 direction, and a template is generated by averaging the reference pixels.
  • Next, using the template, the areas around the candidate MV in the first reference picture and the second reference picture are searched, and the MV with the lowest cost is determined as the final MV.
  • the cost value is calculated using a difference value between each pixel value of the template and each pixel value of the search area, an MV value, and the like.
  • Note that processing other than the processing described here may be used as long as it can search the area around the candidate MV and derive the final MV.
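  • The template-based refinement described above can be sketched as follows; the search range and the cost measure (SAD against the averaged template) are simplified assumptions.

```python
import numpy as np

# Illustrative DMVR refinement: a template is generated by averaging the two
# reference blocks pointed to by the candidate MV (L0 and L1), and the area
# around the candidate MV in each reference picture is searched for the
# position with the lowest template cost.
def get_block(pic, x, y, size):
    return pic[y:y + size, x:x + size].astype(float)

def dmvr_refine(ref0, ref1, x, y, size, mv0, mv1, search=1):
    template = 0.5 * (get_block(ref0, x + mv0[0], y + mv0[1], size) +
                      get_block(ref1, x + mv1[0], y + mv1[1], size))
    def refine(ref, mv):
        best, best_cost = mv, float('inf')
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = (mv[0] + dx, mv[1] + dy)
                cost = np.abs(get_block(ref, x + cand[0], y + cand[1], size) - template).sum()
                if cost < best_cost:
                    best, best_cost = cand, cost
        return best
    return refine(ref0, mv0), refine(ref1, mv1)   # final MVs for L0 and L1

ref0 = np.random.randint(0, 256, (64, 64))
ref1 = np.random.randint(0, 256, (64, 64))
final_mv0, final_mv1 = dmvr_refine(ref0, ref1, 16, 16, 8, (2, 0), (-2, 0))
```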
  • FIG. 9D is a diagram for describing an outline of a predicted image generation method using the luminance correction processing by the LIC processing.
  • an MV for obtaining a reference image corresponding to the current block from a reference picture which is a coded picture is derived.
  • the shape of the peripheral reference area in FIG. 9D is an example, and other shapes may be used.
  • Then, a predicted image is generated by performing the luminance correction processing, using the above-described method, on the reference image in the reference picture specified by the MV.
  • lic_flag is a signal indicating whether or not to apply the LIC processing.
  • In the encoding device, it is determined whether the encoding target block belongs to an area in which a luminance change has occurred. If it belongs to an area in which a luminance change has occurred, the value 1 is set as lic_flag and encoding is performed by applying the LIC processing; if it does not belong to an area in which a luminance change has occurred, the value 0 is set as lic_flag and encoding is performed without applying the LIC processing.
  • the decoding device decodes lic_flag described in the stream, and switches whether or not to apply LIC processing according to the value to perform decoding.
  • As another method of determining whether or not to apply the LIC processing, there is, for example, a method of determining according to whether the LIC processing has been applied to a peripheral block. For example, when the current block is in the merge mode, it is determined whether the peripheral encoded block selected at the time of MV derivation in the merge mode processing has been encoded by applying the LIC processing, and encoding is performed by switching whether or not to apply the LIC processing according to the result. In the case of this example, the processing in decoding is exactly the same.
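  • The luminance correction can be sketched as a linear model derived from the peripheral reference area, as below; the least-squares derivation is a simplification and not the exact procedure of this disclosure.

```python
import numpy as np

# Illustrative LIC-style luminance correction: a linear model (scale a,
# offset b) is derived from the luminance of the neighboring reference area
# of the current block and the corresponding area in the reference picture,
# and applied to the motion-compensated prediction.
def lic_correct(pred, neigh_cur, neigh_ref):
    x = neigh_ref.astype(float).ravel()
    y = neigh_cur.astype(float).ravel()
    a, b = np.polyfit(x, y, 1)             # luminance change between reference and current
    return a * pred + b                     # corrected predicted image

neigh_ref = np.array([90, 100, 110, 120])   # surrounding pixels in the reference picture
neigh_cur = np.array([99, 110, 121, 132])   # encoded/decoded surrounding pixels of the current block
pred = np.full((4, 4), 105.0)               # prediction obtained with the derived MV
pred_lic = lic_correct(pred, neigh_cur, neigh_ref)
```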
  • FIG. 10 is a block diagram showing a functional configuration of the decoding device 200 according to Embodiment 1.
  • the decoding device 200 is a moving image / image decoding device that decodes a moving image / image in block units.
  • the decoding device 200 includes an entropy decoding unit 202, an inverse quantization unit 204, an inverse transformation unit 206, an addition unit 208, a block memory 210, a loop filter unit 212, and a frame memory 214. , An intra prediction unit 216, an inter prediction unit 218, and a prediction control unit 220.
  • the decoding device 200 is realized by, for example, a general-purpose processor and a memory.
  • For example, when a software program stored in the memory is executed by the processor, the processor functions as the entropy decoding unit 202, the inverse quantization unit 204, the inverse transform unit 206, the addition unit 208, the loop filter unit 212, the intra prediction unit 216, the inter prediction unit 218, and the prediction control unit 220.
  • Alternatively, the decoding device 200 may be realized as one or more dedicated electronic circuits corresponding to the entropy decoding unit 202, the inverse quantization unit 204, the inverse transform unit 206, the addition unit 208, the loop filter unit 212, the intra prediction unit 216, the inter prediction unit 218, and the prediction control unit 220.
• the entropy decoding unit 202 performs entropy decoding on the encoded bit stream. Specifically, the entropy decoding unit 202 arithmetically decodes, for example, the encoded bit stream into a binary signal. Then, the entropy decoding unit 202 debinarizes (multi-values) the binary signal. As a result, the entropy decoding unit 202 outputs the quantized coefficients to the inverse quantization unit 204 in block units.
  • the inverse quantization unit 204 inversely quantizes a quantization coefficient of a decoding target block (hereinafter, referred to as a current block) input from the entropy decoding unit 202. Specifically, the inverse quantization unit 204 inversely quantizes each of the quantization coefficients of the current block based on the quantization parameter corresponding to the quantization coefficient. Then, the inverse quantization unit 204 outputs the inversely quantized quantized coefficients (that is, transform coefficients) of the current block to the inverse transform unit 206.
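• As an illustration only, the relationship between a quantized coefficient, the quantization parameter, and the restored transform coefficient can be sketched as follows; the step-size model used here is a common convention and an assumption, not the exact scaling of this embodiment.

    import numpy as np

    def inverse_quantize(quantized_coeffs, qp):
        # Toy dequantization: transform_coeff = quantized_coeff * step(qp).
        # A common convention doubles the step size every 6 QP values; the
        # exact scaling, matrices, and rounding of a real codec differ.
        step = 2.0 ** (qp / 6.0)               # assumed step-size model
        return np.asarray(quantized_coeffs, dtype=np.float64) * step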
  • the inverse transform unit 206 restores a prediction error by inversely transforming the transform coefficient input from the inverse quantization unit 204.
• For example, the inverse transform unit 206 inversely transforms the transform coefficients of the current block based on information indicating the read transform type.
  • the inverse transform unit 206 applies inverse retransformation to the transform coefficients.
• the addition unit 208 reconstructs the current block by adding the prediction error input from the inverse transform unit 206 and the prediction sample input from the prediction control unit 220. Then, the addition unit 208 outputs the reconstructed block to the block memory 210 and the loop filter unit 212.
  • the block memory 210 is a storage unit for storing a block that is referred to in intra prediction and is in a current picture to be decoded (hereinafter, referred to as a current picture). Specifically, the block memory 210 stores the reconstructed block output from the adder 208.
  • the loop filter unit 212 applies a loop filter to the block reconstructed by the adding unit 208, and outputs the filtered reconstructed block to the frame memory 214, a display device, and the like.
  • one filter is selected from the plurality of filters based on the local gradient direction and activity. The selected filter is applied to the reconstruction block.
  • the frame memory 214 is a storage unit for storing a reference picture used for inter prediction, and is sometimes called a frame buffer. Specifically, the frame memory 214 stores the reconstructed blocks filtered by the loop filter unit 212.
• the intra prediction unit 216 generates a prediction signal (intra prediction signal) by performing intra prediction with reference to a block in the current picture stored in the block memory 210, based on the intra prediction mode read from the encoded bit stream. Specifically, the intra prediction unit 216 generates an intra prediction signal by performing intra prediction with reference to samples (for example, luminance values and color difference values) of blocks adjacent to the current block, and outputs the intra prediction signal to the prediction control unit 220.
• the intra prediction unit 216 may predict the chrominance component of the current block based on the luminance component of the current block.
  • the intra prediction unit 216 corrects the pixel value after intra prediction based on the gradient of the reference pixel in the horizontal / vertical directions.
  • the inter prediction unit 218 predicts the current block with reference to the reference picture stored in the frame memory 214.
  • the prediction is performed in units of the current block or sub-blocks (for example, 4 ⁇ 4 blocks) in the current block.
• the inter prediction unit 218 generates an inter prediction signal of the current block or a sub-block by performing motion compensation using motion information (for example, a motion vector) read from the encoded bit stream, and outputs the inter prediction signal to the prediction control unit 220.
• the inter prediction unit 218 generates the inter prediction signal using not only the motion information of the current block obtained by motion search but also the motion information of the adjacent block.
• the inter prediction unit 218 derives motion information by performing a motion search using the pattern matching method (bilateral matching or template matching) read from the encoded stream. Then, the inter prediction unit 218 performs motion compensation using the derived motion information.
• the inter prediction unit 218 derives a motion vector based on a model assuming uniform linear motion. If the information read from the encoded bit stream indicates that the affine motion compensation prediction mode is to be applied, the inter prediction unit 218 derives a motion vector in sub-block units based on the motion vectors of a plurality of adjacent blocks.
  • the prediction control unit 220 selects one of the intra prediction signal and the inter prediction signal, and outputs the selected signal to the addition unit 208 as a prediction signal.
  • the inter prediction unit 126 of the encoding device 100 performs motion compensation in at least the affine motion compensation prediction mode among the inter prediction modes using only uni-prediction, and does not use bi-prediction.
• Uni-prediction is forward prediction or backward prediction, and is also referred to as one-way prediction.
• Bi-prediction is also referred to as bidirectional prediction.
  • the inter prediction unit 126 first derives each predicted motion vector of the control point of the current block. Next, the inter prediction unit 126 calculates the motion vector of each of the plurality of sub-blocks included in the current block as the affine motion vector using the derived predicted motion vector. Then, the inter prediction unit 126 performs motion compensation on the sub-block using the calculated affine motion vector and the encoded reference picture.
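• To make this concrete, the following sketch derives per-sub-block motion vectors from two control point motion vectors using the widely used 4-parameter affine model; the number of control points and the exact formula in this embodiment may differ, so the formula below is an illustrative assumption.

    def affine_subblock_mv(v0, v1, block_w, block_h, sub=4):
        # v0 = (v0x, v0y): MV of the top-left control point
        # v1 = (v1x, v1y): MV of the top-right control point
        # Evaluates the 4-parameter affine model at each sub-block center and
        # returns {(x, y): (mvx, mvy)} keyed by the sub-block's top-left corner.
        v0x, v0y = v0
        v1x, v1y = v1
        mvs = {}
        for y in range(0, block_h, sub):
            for x in range(0, block_w, sub):
                cx, cy = x + sub / 2.0, y + sub / 2.0      # sub-block center
                mvx = (v1x - v0x) / block_w * cx - (v1y - v0y) / block_w * cy + v0x
                mvy = (v1y - v0y) / block_w * cx + (v1x - v0x) / block_w * cy + v0y
                mvs[(x, y)] = (mvx, mvy)
        return mvs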
  • the affine motion compensation prediction mode includes two types of modes in which the method of determining a control point predicted motion vector is different, that is, a normal mode (affine inter mode) and a merge mode (affine merge mode).
• The normal mode is a mode for deriving a predicted motion vector of a control point by selecting a motion vector of one of the encoded blocks near each control point of the current block.
  • index information indicating a reference picture and motion vector information are encoded for each control point.
  • the index information may be common to all control points.
  • the motion vector information can include an MVP index indicating a motion vector prediction candidate and an MVD indicating a difference between the predicted motion vector and an actual motion vector. When only the MVP index is included, the motion vector information may be common to all control points.
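• For example, when the motion vector information contains an MVP index and an MVD, each control point motion vector can be reconstructed roughly as in the following sketch (hypothetical names, shown only to illustrate the relationship MV = predicted MV + difference).

    def reconstruct_control_point_mv(mvp_candidates, mvp_index, mvd):
        # MV = predicted MV (selected by the MVP index) + decoded difference (MVD).
        pred_x, pred_y = mvp_candidates[mvp_index]
        return (pred_x + mvd[0], pred_y + mvd[1])

    # e.g. candidates taken from encoded blocks near this control point
    candidates = [(4, -2), (3, 0)]
    mv = reconstruct_control_point_mv(candidates, mvp_index=1, mvd=(1, -1))  # (4, -1)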
• the merge mode is a mode for calculating a predicted motion vector of each control point based on a plurality of motion vectors corresponding to a block encoded in the affine mode among encoded blocks adjacent to the current block. That is, the merge mode is a mode in which the motion vector of the control point is determined based on the motion vector of a peripheral CU to which the affine motion compensation mode has been applied. Therefore, if uni-prediction is used in the normal mode, when the merge mode is subsequently performed, the predicted motion vector of the control point is determined based on the motion vector of an encoded block that uses uni-prediction. In other words, if uni-prediction is used in the normal mode, uni-prediction is also used in the merge mode. Therefore, at least the normal mode may be performed using uni-prediction.
• the reference picture may be selected based on one of the first reference picture list and the second reference picture list commonly used in the inter prediction mode. That is, the reference picture may be selected from one of the first list and the second list commonly used in the inter-picture prediction mode.
  • the first list and the second list are, for example, an L0 list and an L1 list.
  • the reference picture may be selected, for example, from the L0 list.
  • an encoded block for deriving a predicted motion vector of a control point may be determined using only uni-prediction from among a plurality of encoded blocks constituting the selected reference picture. Accordingly, motion compensation in the affine motion compensation prediction mode can be performed using only uni-prediction.
• the predicted motion vector of the control point may be determined based on the motion vector of a peripheral CU to which an inter-picture prediction mode other than the affine motion compensation prediction mode has been applied.
  • a reference picture may be selected from an L0 list or the like.
  • affine motion compensation prediction mode motion compensation is performed in units of sub-CUs, which are units obtained by dividing a CU.
• In this way, in the affine motion compensation prediction mode, the inter prediction unit 126 prohibits the use of bi-prediction, determines the predicted motion vector of the control point using only uni-prediction, and calculates the affine motion vector in sub-CU units.
  • the encoding device 100 may be able to reduce the processing amount while suppressing a decrease in encoding efficiency.
• FIG. 11 is a flowchart illustrating an operation example of the affine motion compensation prediction mode performed by the inter prediction unit 126 of the encoding device 100 according to the first example of Embodiment 1. Note that a flowchart showing an example of the operation of decoding a stream encoded in the affine motion compensation prediction mode by the inter prediction unit 218 of the decoding device 200 is the same; therefore, the operation example of the affine motion compensation prediction mode by the inter prediction unit 126 of the encoding device 100 will be described here as an example.
  • the inter prediction unit 126 checks whether the prediction mode is the affine motion compensation prediction mode (S101).
• In step S101, when the prediction mode is the affine motion compensation prediction mode (Yes in S101), the inter prediction unit 126 determines a predicted motion vector of a control point using uni-prediction (S102). That is, in step S102, the inter prediction unit 126 determines the predicted motion vector of the control point using only uni-prediction, and does not use bi-prediction.
  • the inter prediction unit 126 calculates a motion vector for each sub-CU of the current block based on the predicted motion vector of the control point determined in step S102 (S103).
  • the inter prediction unit 126 calculates a motion vector by a predetermined method according to the prediction mode (S104).
• As a prediction mode other than the affine motion compensation prediction mode, for example, there is a normal inter mode or a merge mode.
  • the inter prediction unit 126 may indicate only the index information of the reference picture for the first reference picture list.
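• The decision flow of FIG. 11 (steps S101 to S104) can be summarized by the sketch below, which reuses the affine_subblock_mv helper from the earlier sketch; the function and parameter names are placeholders, not the actual implementation.

    def affine_prediction_flow(is_affine, cp_candidates_l0, block_w, block_h,
                               fallback_mv):
        # cp_candidates_l0: control point MV candidates taken only from the first
        #                   reference picture list (uni-prediction, no bi-prediction)
        # fallback_mv:      the MV that S104 would derive for a non-affine mode
        if not is_affine:                                    # S101: No -> S104
            return {"whole_block": fallback_mv}
        v0 = cp_candidates_l0["top_left"][0]                 # S102: uni-prediction only
        v1 = cp_candidates_l0["top_right"][0]
        return affine_subblock_mv(v0, v1, block_w, block_h)  # S103: per-sub-CU MVs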
• In the above description, when the prediction mode is the affine motion compensation prediction mode, the inter prediction unit 126 always determines the predicted motion vector of the control point by limiting it to uni-prediction.
• However, the present disclosure is not limited thereto. Even if the prediction mode is the affine motion compensation prediction mode, the inter prediction unit 126 may adaptively switch between using only uni-prediction and also enabling bi-prediction according to the size of the sub-CU or the size of the CU.
• the processing amount is proportional to the total number of sub-CUs constituting the CUs to which the affine motion compensation prediction mode is applied in a slice or a picture. Therefore, when the size of the sub-CU is equal to or smaller than a threshold, the inter prediction unit 126 may determine the predicted motion vector of the control point using only uni-prediction. On the other hand, when the size of the sub-CU exceeds the threshold, the inter prediction unit 126 may determine the predicted motion vector of the control point with bi-prediction also enabled.
• the inter prediction unit 126 may prohibit bi-prediction and determine the predicted motion vector of the control point.
  • the inter prediction unit 126 may enable the bi-prediction and determine the predicted motion vector of the control point.
• the affine motion compensation prediction mode may have a higher effect when a region having a certain size is rotated, enlarged, or reduced. That is, the affine motion compensation prediction mode can be effective for a CU larger than a certain size.
• the inter prediction unit 126 may prohibit bi-prediction and determine the predicted motion vector of the control point.
  • the inter prediction unit 126 may determine the predicted motion vector of the control point with bi-prediction enabled. For these bi-prediction prohibition rules, identification information may be encoded as header information.
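• One way to express such an adaptive rule is sketched below; the threshold value and the exact condition are illustrative assumptions.

    def allow_bi_prediction(cu_size, sub_cu_size, threshold=8):
        # Allow bi-prediction for control point MVs only for larger units.
        # Small sub-CUs mean many sub-blocks per slice or picture, so the
        # worst-case processing amount is limited by forcing uni-prediction.
        return sub_cu_size > threshold and cu_size > threshold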
• the affine motion compensation prediction mode is not limited to being used in sub-CU units, and may be used in both sub-CU units and CU units. In this case, the increase in the processing amount becomes noticeable when sub-CU units are used. Therefore, when performing the affine motion compensation prediction mode in sub-CU units, the inter prediction unit 126 may determine the predicted motion vector of the control point using only uni-prediction. On the other hand, when performing the affine motion compensation prediction mode in CU units, the inter prediction unit 126 may determine the predicted motion vector of the control point with bi-prediction also enabled.
• Whether the affine motion compensation prediction mode is performed in sub-CU units or in CU units may be switched by encoding identification information for each CU or CTU, or may be switched in sequence, slice, or picture units by encoding the identification information as header information.
• Motion prediction in sub-CU units may be limited to uni-prediction. For example, in a prediction mode that uses a sub-CU unit motion vector of a block at the same position in a temporally different reference picture, or of a block shifted by an amount corresponding to a motion vector calculated based on a motion vector of a peripheral block, the application may likewise be limited to uni-prediction.
• FIG. 12 is a flowchart showing an operation example in the case where the motion vector of the control point uses only uni-prediction in the normal mode of the affine motion compensation prediction mode performed by the inter prediction unit 126 of the encoding device 100 according to the first example of Embodiment 1.
  • the inter prediction unit 126 checks whether the prediction mode is the affine motion compensation prediction mode (S201).
• In step S201, when the prediction mode is the affine motion compensation prediction mode (Yes in S201), the inter prediction unit 126 further checks whether the affine motion compensation prediction mode is the normal mode in which motion vector information is encoded (S202).
• In step S202, when the affine motion compensation prediction mode is the normal mode (Yes in S202), the inter prediction unit 126 determines a predicted motion vector of a control point using uni-prediction (S203). That is, in step S203, the inter prediction unit 126 determines the predicted motion vector of the control point using only uni-prediction, and does not use bi-prediction.
  • the inter prediction unit 126 calculates a motion vector for each sub-CU of the current block based on the predicted motion vector of the control point determined in step S203 (S205).
• In step S202, when the affine motion compensation prediction mode is not the normal mode (No in S202), the inter prediction unit 126 determines the predicted motion vector of the control point based on the motion vector (MV) of a peripheral block to which the affine motion compensation prediction mode has been applied (S204).
  • the case where the affine motion compensation prediction mode is not the normal mode is the case where the affine motion compensation prediction mode is the merge mode.
  • the inter prediction unit 126 calculates a motion vector by a predetermined method according to the prediction mode (S206).
• In this case, uni-prediction is also used in the merge mode. That is, when the normal mode of the affine motion compensation prediction mode is performed using only uni-prediction, the inter prediction unit 126 always performs the merge mode using uni-prediction.
• In step S204, that is, in the merge mode of the affine motion compensation prediction mode, there may be a case where a motion vector of a peripheral block to which an inter prediction mode other than the affine motion compensation prediction mode has been applied is used.
• In this case, the inter prediction unit 126 may determine the predicted motion vector of the control point using uni-prediction even in the merge mode. Then, as in the normal mode, the inter prediction unit 126 may select the motion vector used for uni-prediction from one of the first list and the second list commonly used in the inter-picture prediction mode. Further, the inter prediction unit 126 may, for example, select the motion vector to be used for uni-prediction from the same list and according to the same rule in both the normal mode and the merge mode.
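• The normal-mode/merge-mode branch of FIG. 12 (steps S201 to S206) can be sketched as follows, again reusing the affine_subblock_mv helper; the names are placeholders, and the merge-mode derivation shown simply takes over the peripheral block's control point MVs, as described above.

    def affine_prediction_flow_fig12(is_affine, is_normal_mode, cp_candidates_l0,
                                     neighbor_affine_cp_mvs, block_w, block_h,
                                     fallback_mv):
        # neighbor_affine_cp_mvs: control point MVs (v0, v1) taken over from a
        #                         peripheral block encoded in the affine mode
        if not is_affine:                                    # S201: No -> S206
            return {"whole_block": fallback_mv}
        if is_normal_mode:                                   # S202: Yes -> S203
            v0 = cp_candidates_l0["top_left"][0]             # uni-prediction only
            v1 = cp_candidates_l0["top_right"][0]
        else:                                                # merge mode -> S204
            v0, v1 = neighbor_affine_cp_mvs
        return affine_subblock_mv(v0, v1, block_w, block_h)  # S205: per-sub-CU MVs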
• As described above, the inter prediction unit 126 prohibits the use of bi-prediction, determines the predicted motion vector of the control point using only uni-prediction, and calculates an affine motion vector for each sub-block.
  • the encoding device 100 may be able to reduce the processing amount while suppressing a decrease in encoding efficiency.
  • FIG. 13 is a block diagram illustrating an implementation example of the encoding device 100 according to Embodiment 1.
  • the encoding device 100 includes a circuit 160 and a memory 162.
• a plurality of components of the encoding device 100 illustrated in FIG. 1 are implemented by the circuit 160 and the memory 162 illustrated in FIG. 13.
  • the circuit 160 is a circuit that performs information processing, and is a circuit that can access the memory 162.
  • the circuit 160 is a dedicated or general-purpose electronic circuit that encodes a moving image.
  • the circuit 160 may be a processor such as a CPU.
  • the circuit 160 may be an aggregate of a plurality of electronic circuits.
  • the circuit 160 may play the role of a plurality of components of the encoding device 100 illustrated in FIG. 1 and the like, excluding a component for storing information.
  • the memory 162 is a dedicated or general-purpose memory in which information for the circuit 160 to encode a moving image is stored.
  • the memory 162 may be an electronic circuit and may be connected to the circuit 160. Further, the memory 162 may be included in the circuit 160. Further, the memory 162 may be an aggregate of a plurality of electronic circuits. Further, the memory 162 may be a magnetic disk or an optical disk, or may be expressed as a storage or a recording medium. Further, the memory 162 may be a nonvolatile memory or a volatile memory.
  • the memory 162 may store a moving image to be coded, or may store a bit string corresponding to the coded moving image. Further, the memory 162 may store a program for the circuit 160 to encode a moving image.
  • the memory 162 may serve as a component for storing information among a plurality of components of the encoding device 100 illustrated in FIG. 1 and the like. Specifically, the memory 162 may serve as the block memory 118 and the frame memory 122 shown in FIG. More specifically, the memory 162 may store reconstructed blocks, reconstructed pictures, and the like.
• In the encoding device 100, not all of the plurality of components illustrated in FIG. 1 and the like need to be implemented, and not all of the plurality of processes described above need to be performed. Some of the components illustrated in FIG. 1 and the like may be included in another device, and some of the processes described above may be performed by another device. Then, by implementing some of the plurality of components illustrated in FIG. 1 and the like and performing some of the plurality of processes described above, the encoding device 100 efficiently performs the prediction processing in the inter prediction mode in which an affine motion vector is calculated.
  • FIG. 14 is a flowchart illustrating an operation example of the encoding device 100 illustrated in FIG.
  • the encoding device 100 illustrated in FIG. 13 performs the operation illustrated in FIG. 14 when encoding a moving image.
• In an inter prediction mode in which the circuit 160 of the encoding device 100 uses the memory 162 to calculate an affine motion vector in units of the sub-blocks constituting the current block, based on the motion vectors of a plurality of peripheral blocks of the current block of an image in the moving image, the following prediction processing is performed. That is, first, the circuit 160 calculates an affine motion vector in sub-block units using only uni-prediction of uni-prediction and bi-prediction (S311). Next, the circuit 160 performs motion compensation in sub-block units using the affine motion vector calculated in step S311 (S312).
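• Step S312 (motion compensation in sub-block units using the calculated affine motion vectors) can be illustrated, at integer-pel precision only, by the following sketch; real codecs additionally use sub-pel interpolation filters and picture-boundary clipping.

    import numpy as np

    def motion_compensate_subblocks(ref_picture, block_x, block_y, sub_mvs, sub=4):
        # ref_picture: 2-D array of reference samples
        # sub_mvs:     {(x, y): (mvx, mvy)} per sub-block, as computed above
        # Builds the block prediction at integer-pel precision; picture-boundary
        # clipping and sub-pel interpolation are omitted for brevity.
        h = max(y for _, y in sub_mvs) + sub
        w = max(x for x, _ in sub_mvs) + sub
        pred = np.zeros((h, w), dtype=ref_picture.dtype)
        for (x, y), (mvx, mvy) in sub_mvs.items():
            ry = int(round(block_y + y + mvy))
            rx = int(round(block_x + x + mvx))
            pred[y:y + sub, x:x + sub] = ref_picture[ry:ry + sub, rx:rx + sub]
        return pred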
• In this way, the encoding device 100 prohibits the use of bi-prediction, determines the predicted motion vector of the control point using only uni-prediction, and calculates the affine motion vector. Accordingly, there is a possibility that the processing amount can be reduced while suppressing a decrease in the coding efficiency, so that the encoding device 100 can improve the processing efficiency.
  • FIG. 15 is a block diagram illustrating an implementation example of the decoding device 200 according to Embodiment 1.
  • the decoding device 200 includes a circuit 260 and a memory 262.
• a plurality of components of the decoding device 200 illustrated in FIG. 10 are implemented by the circuit 260 and the memory 262 illustrated in FIG. 15.
  • the circuit 260 is a circuit that performs information processing, and is a circuit that can access the memory 262.
  • the circuit 260 is a dedicated or general-purpose electronic circuit for decoding a moving image.
  • Circuit 260 may be a processor such as a CPU.
  • the circuit 260 may be an aggregate of a plurality of electronic circuits.
  • the circuit 260 may play the role of a plurality of components of the decoding device 200 shown in FIG. 10 and the like, excluding a component for storing information.
  • the memory 262 is a dedicated or general-purpose memory in which information for the circuit 260 to decode a moving image is stored.
  • the memory 262 may be an electronic circuit and may be connected to the circuit 260. Further, the memory 262 may be included in the circuit 260. Further, the memory 262 may be an aggregate of a plurality of electronic circuits. Further, the memory 262 may be a magnetic disk or an optical disk, or may be expressed as a storage or a recording medium. Further, the memory 262 may be a nonvolatile memory or a volatile memory.
  • the memory 262 may store a bit sequence corresponding to an encoded moving image, or may store a moving image corresponding to a decoded bit sequence. Further, the memory 262 may store a program for the circuit 260 to decode a moving image.
  • the memory 262 may play a role of a component for storing information among a plurality of components of the decoding device 200 illustrated in FIG. 10 and the like.
• the memory 262 may serve as the block memory 210 and the frame memory 214 shown in FIG. 10. More specifically, the memory 262 may store reconstructed blocks, reconstructed pictures, and the like.
• In the decoding device 200, not all of the plurality of components illustrated in FIG. 10 and the like need to be implemented, and not all of the plurality of processes described above need to be performed. Some of the components illustrated in FIG. 10 and the like may be included in another device, and some of the processes described above may be performed by another device. Then, by implementing some of the plurality of components illustrated in FIG. 10 and the like and performing some of the plurality of processes described above, the decoding device 200 efficiently performs motion compensation.
  • FIG. 16 is a flowchart illustrating an operation example of the decoding device 200 illustrated in FIG.
• the decoding device 200 illustrated in FIG. 15 performs the operation illustrated in FIG. 16 when decoding a moving image.
• In an inter prediction mode in which the circuit 260 of the decoding device 200 uses the memory 262 to calculate an affine motion vector in units of the sub-blocks constituting the current block, based on the motion vectors of a plurality of peripheral blocks of the current block of an image in the moving image, the following prediction processing is performed. That is, first, the circuit 260 calculates an affine motion vector for each sub-block using only uni-prediction of uni-prediction and bi-prediction (S411). Next, the circuit 260 performs motion compensation in sub-block units using the affine motion vector calculated in step S411 (S412).
• In this way, the decoding device 200 prohibits the use of bi-prediction, determines the predicted motion vector of the control point using only uni-prediction, and calculates the affine motion vector. Accordingly, there is a possibility that the processing amount can be reduced while suppressing a decrease in the coding efficiency, so that the decoding device 200 can improve the processing efficiency.
• the encoding device 100 and the decoding device 200 in the present embodiment may be used as an image encoding device and an image decoding device, respectively, or may be used as a moving image encoding device and a moving image decoding device, respectively.
• the encoding device 100 and the decoding device 200 can each be used as an inter prediction (inter-picture prediction) device.
• the encoding device 100 and the decoding device 200 may correspond to only the inter prediction (inter-picture prediction) unit 126 and the inter prediction (inter-picture prediction) unit 218, respectively.
  • Other components such as the conversion unit 106 and the inverse conversion unit 206 may be included in another device.
  • each component may be configured by dedicated hardware, or may be realized by executing a software program suitable for each component.
  • Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
• each of the encoding device 100 and the decoding device 200 may include a processing circuit (Processing Circuitry) and a storage device (Storage) electrically connected to the processing circuit and accessible from the processing circuit.
• For example, the processing circuit corresponds to the circuit 160 or 260, and the storage device corresponds to the memory 162 or 262.
  • the processing circuit includes at least one of dedicated hardware and a program execution unit, and executes processing using a storage device.
• When the processing circuit includes a program execution unit, the storage device stores a software program executed by the program execution unit.
  • the software that implements the encoding device 100 or the decoding device 200 of the present embodiment is the following program.
• That is, this program causes a computer to execute an encoding method for encoding a moving image by performing motion compensation, in which, in an inter prediction mode in which an affine motion vector is calculated in units of the sub-blocks constituting the current block based on the motion vectors of a plurality of peripheral blocks of the current block of an image in the moving image, the affine motion vector is calculated in sub-block units using only uni-prediction of uni-prediction and bi-prediction, and the motion compensation is performed in sub-block units using the calculated affine motion vector.
• Alternatively, this program causes a computer to execute a decoding method for decoding a moving image by performing motion compensation, in which, in an inter prediction mode in which an affine motion vector is calculated in units of the sub-blocks constituting the current block based on the motion vectors of a plurality of peripheral blocks of the current block of an image in the moving image, the affine motion vector is calculated in sub-block units using only uni-prediction of uni-prediction and bi-prediction, and the motion compensation is performed in sub-block units using the calculated affine motion vector.
  • Each component may be a circuit as described above. These circuits may constitute one circuit as a whole, or may be separate circuits. Further, each component may be realized by a general-purpose processor, or may be realized by a dedicated processor.
  • a process performed by a specific component may be performed by another component.
  • the order in which the processes are performed may be changed, or a plurality of processes may be performed in parallel.
  • the encoding / decoding device may include the encoding device 100 and the decoding device 200.
  • the first and second ordinal numbers used in the description may be appropriately replaced.
  • An ordinal number may be newly given to a component or the like, or may be removed.
• The aspects of the encoding device 100 and the decoding device 200 have been described based on the embodiment, but the aspects of the encoding device 100 and the decoding device 200 are not limited to this embodiment. As long as they do not depart from the spirit of the present disclosure, forms obtained by applying various modifications conceived by those skilled in the art to the present embodiment, and forms constructed by combining components of different embodiments, may also be included within the scope of the aspects of the encoding device 100 and the decoding device 200.
  • This embodiment may be implemented in combination with at least a part of other embodiments in the present disclosure. Further, a part of the processing, a part of the configuration of the apparatus, a part of the syntax, and the like described in the flowchart of this embodiment may be implemented in combination with another embodiment.
  • each of the functional blocks can usually be realized by an MPU, a memory, and the like.
  • the processing by each of the functional blocks is generally realized by a program execution unit such as a processor reading and executing software (program) recorded on a recording medium such as a ROM.
  • the software may be distributed by download or the like, or may be recorded on a recording medium such as a semiconductor memory and distributed. Note that it is naturally possible to realize each functional block by hardware (dedicated circuit).
• each embodiment may be realized by centralized processing using a single device (system), or may be realized by distributed processing using a plurality of devices.
  • the number of processors that execute the program may be one or more. That is, centralized processing or distributed processing may be performed.
  • the system is characterized by having an image encoding device using an image encoding method, an image decoding device using an image decoding method, and an image encoding / decoding device including both.
  • Other configurations in the system can be appropriately changed as necessary.
  • FIG. 17 is a diagram illustrating an overall configuration of a content supply system ex100 that realizes a content distribution service.
  • a communication service providing area is divided into desired sizes, and base stations ex106, ex107, ex108, ex109, and ex110, which are fixed wireless stations, are installed in each cell.
• each device such as a computer ex111, a game machine ex112, a camera ex113, a home appliance ex114, and a smartphone ex115 is connected to the Internet ex101 via the Internet service provider ex102 or the communication network ex104 and the base stations ex106 to ex110.
  • the content supply system ex100 may be connected by combining any of the above elements.
  • the devices may be directly or indirectly connected to each other via a telephone network or short-range wireless communication without using the base stations ex106 to ex110 which are fixed wireless stations.
  • the streaming server ex103 is connected to each device such as a computer ex111, a game machine ex112, a camera ex113, a home appliance ex114, and a smartphone ex115 via the Internet ex101 and the like.
  • the streaming server ex103 is connected to a terminal or the like in a hot spot in the airplane ex117 via the satellite ex116.
  • a wireless access point or a hot spot may be used instead of the base stations ex106 to ex110.
  • the streaming server ex103 may be directly connected to the communication network ex104 without going through the Internet ex101 or the Internet service provider ex102, or may be directly connected to the airplane ex117 without going through the satellite ex116.
  • the camera ex113 is a device such as a digital camera capable of photographing still images and moving images.
• the smartphone ex115 is a smartphone, a mobile phone, a PHS (Personal Handyphone System), or the like compatible with mobile communication systems called 2G, 3G, 3.9G, or 4G, and, in the future, 5G.
  • the home appliance ex118 is a refrigerator or a device included in a home fuel cell cogeneration system.
  • a terminal having a photographing function is connected to the streaming server ex103 through the base station ex106 or the like, thereby enabling live distribution or the like.
• the terminal (computer ex111, game machine ex112, camera ex113, home appliance ex114, smartphone ex115, terminal in the airplane ex117, etc.) performs the encoding processing described in each of the above embodiments on the still image or moving image content shot by the user using the terminal, multiplexes the video data obtained by the encoding with audio data obtained by encoding the sound corresponding to the video, and transmits the obtained data to the streaming server ex103. That is, each terminal functions as an image encoding device according to an aspect of the present disclosure.
  • the streaming server ex103 stream-distributes the transmitted content data to the requested client.
  • the client is a computer ex111, a game machine ex112, a camera ex113, a household appliance ex114, a smartphone ex115, a terminal in an airplane ex117, or the like, which can decode the encoded data.
  • Each device that has received the distributed data decodes and reproduces the received data. That is, each device functions as an image decoding device according to an aspect of the present disclosure.
  • the streaming server ex103 may be a plurality of servers or a plurality of computers, and may process, record, or distribute data in a distributed manner.
  • the streaming server ex103 may be realized by a CDN (Contents Delivery Network), and the content distribution may be realized by a number of edge servers distributed around the world and a network connecting the edge servers.
  • physically close edge servers are dynamically allocated according to clients. Then, the delay can be reduced by caching and distributing the content to the edge server.
• processing can be distributed among multiple edge servers, the distribution entity can be switched to another edge server, and distribution can be continued by bypassing the part of the network where a failure has occurred, so high-speed and stable distribution can be realized.
  • the encoding processing of the captured data may be performed by each terminal, may be performed on the server side, or may be performed by sharing with each other.
• For example, an encoding processing loop is generally performed twice.
• In the first loop, the complexity or code amount of the image is detected in units of frames or scenes.
• In the second loop, processing for maintaining the image quality and improving the coding efficiency is performed.
• For example, by having the terminal perform the first encoding process and having the server that receives the content perform the second encoding process, the quality and efficiency of the content can be improved while reducing the processing load on each terminal.
• the first encoded data produced by the terminal can also be received and played back by another terminal, so more flexible real-time distribution becomes possible.
  • the camera ex113 or the like extracts a feature amount from an image, compresses data related to the feature amount as metadata, and transmits the metadata to the server.
  • the server performs compression according to the meaning of the image, such as switching the quantization precision by determining the importance of the object from the feature amount.
  • the feature amount data is particularly effective for improving the accuracy and efficiency of motion vector prediction at the time of recompression at the server.
  • the terminal may perform simple coding such as VLC (variable length coding), and the server may perform coding with a large processing load such as CABAC (context adaptive binary arithmetic coding).
• Furthermore, there may be a plurality of video data items in which a plurality of terminals capture substantially the same scene. In this case, distributed processing is performed by assigning the encoding processing in units of GOPs (Group of Pictures), in picture units, in units of tiles obtained by dividing a picture, or the like.
  • the server may perform management and / or instructions so that video data shot by each terminal can be referred to each other.
  • the encoded data from each terminal may be received by the server, and the reference relationship may be changed among a plurality of data, or the picture itself may be corrected or replaced to be re-encoded. As a result, it is possible to generate a stream in which the quality and efficiency of each data is improved.
• the server may distribute the video data after performing transcoding for changing the encoding method of the video data. For example, the server may convert an MPEG-based encoding method to a VP-based encoding method, or may convert H.264 to H.265.
• the encoding process can be performed by the terminal or by one or more servers. Therefore, in the following, descriptions such as "server" or "terminal" are used as the subject of processing, but part or all of the processing performed by the server may be performed by the terminal, and part or all of the processing performed by the terminal may be performed by the server. The same applies to the decoding process.
• the server may not only encode a two-dimensional moving image, but may also automatically, or at a time designated by the user, encode a still image based on scene analysis of the moving image and transmit the encoded still image to the receiving terminal. If the server can further acquire the relative positional relationship between the shooting terminals, the server can generate the three-dimensional shape of the scene based not only on a two-dimensional moving image but also on videos of the same scene shot from different angles. Note that the server may separately encode three-dimensional data generated by a point cloud or the like, or, based on the result of recognizing or tracking a person or an object using the three-dimensional data, may generate a video to be transmitted to the receiving terminal by selecting from, or reconstructing from, the videos shot by the plurality of terminals.
• the user can arbitrarily select each video corresponding to each shooting terminal to enjoy the scene, and can also enjoy content obtained by clipping a video of an arbitrary viewpoint generated from three-dimensional data reconstructed using a plurality of images or videos.
  • the sound is collected from a plurality of different angles, and the server may transmit the sound multiplexed with the video from a specific angle or space in accordance with the video.
• the server may create right-eye and left-eye viewpoint videos and perform encoding that allows reference between the viewpoint videos by Multi-View Coding (MVC) or the like, or may encode them as separate streams without mutual reference. At the time of decoding the separate streams, it is preferable to reproduce them in synchronization with each other so that a virtual three-dimensional space is reproduced according to the viewpoint of the user.
  • the server superimposes virtual object information in a virtual space on camera information in a real space based on a three-dimensional position or a movement of a user's viewpoint.
  • the decoding device may obtain or hold the virtual object information and the three-dimensional data, generate a two-dimensional image according to the movement of the user's viewpoint, and create superimposed data by connecting the two-dimensional images smoothly.
• Alternatively, the decoding device may transmit the viewpoint movement of the user to the server in addition to the request for the virtual object information, and the server may create superimposed data in accordance with the received viewpoint movement from the three-dimensional data held in the server, encode the superimposed data, and distribute it to the decoding device.
• Note that the superimposed data has an α value indicating transparency in addition to RGB, and the server sets the α value of portions other than the object created from the three-dimensional data to 0 or the like, and sets those portions to a transparent state.
  • the server may generate data in which a predetermined RGB value such as a chroma key is set as a background and a portion other than the object is set as a background color.
  • the decoding process of the distributed data may be performed by each terminal as a client, may be performed on the server side, or may be performed by sharing each other.
  • a certain terminal may once send a reception request to the server, receive the content corresponding to the request by another terminal, perform a decoding process, and transmit a decoded signal to a device having a display.
  • High-quality data can be reproduced by distributing processing and selecting appropriate content regardless of the performance of the terminal itself capable of communication.
  • a partial area such as a tile obtained by dividing a picture may be decoded and displayed on a personal terminal of a viewer. As a result, while sharing the whole image, it is possible to check at hand the field in which the user is in charge or the area to be checked in more detail.
• It is also possible to switch the bit rate of the received data based on the ease of access to the encoded data on the network, for example when the encoded data is cached on a server that can be accessed from the receiving terminal in a short time, or is copied to an edge server in a content delivery service.
• [Scalable encoding] Switching of content will be described using a scalable stream, shown in FIG., that is compression-encoded by applying the moving picture encoding method shown in each of the above embodiments.
• The server may have a plurality of streams having the same content and different qualities as individual streams, but a configuration may also be adopted in which the content is switched by utilizing the characteristics of a temporally/spatially scalable stream that is realized by performing encoding in layers as shown in the figure.
• That is, by having the decoding side determine which layer to decode according to internal factors such as performance and external factors such as the state of the communication band, the decoding side can freely switch between low-resolution content and high-resolution content for decoding.
• In that case, the device only has to decode the same stream up to a different layer, so the burden on the server side can be reduced.
  • the picture is encoded for each layer, and in addition to the configuration for achieving scalability in which the enhancement layer exists above the base layer, the enhancement layer includes meta information based on image statistical information and the like.
  • the decoding side may generate high-quality content by super-resolution of the base layer picture based on the meta information.
  • the super-resolution may be either improvement of the SN ratio at the same resolution or enlargement of the resolution.
• the meta information includes information for specifying a linear or non-linear filter coefficient used for super-resolution processing, or information for specifying a parameter value in filter processing, machine learning, or least-squares operation used for super-resolution processing.
  • the picture may be divided into tiles or the like according to the meaning of an object or the like in the image, and the decoding side may decode only a part of the area by selecting a tile to be decoded.
• the decoding side can determine the position of the desired object based on the meta information and determine the tile that contains the object.
  • the meta information is stored using a data storage structure different from the pixel data such as an SEI message in HEVC. This meta information indicates, for example, the position, size, color, or the like of the main object.
  • meta information may be stored in units composed of a plurality of pictures, such as a stream, a sequence, or a random access unit.
  • the decoding side can obtain the time at which the specific person appears in the video and the like, and can specify the picture in which the object exists and the position of the object in the picture by matching the information with the picture unit information.
  • FIG. 20 is a diagram illustrating an example of a display screen of a web page on the computer ex111 or the like.
  • FIG. 21 is a diagram illustrating an example of a display screen of a web page on the smartphone ex115 or the like.
  • a web page may include a plurality of link images which are links to image contents, and the appearance differs depending on a viewing device.
• Until the user explicitly selects a link image, or until the link image approaches the center of the screen or the entire link image enters the screen, the display device (decoding device) displays a still image or an I picture included in each content as a link image, displays a video such as a gif animation using a plurality of still images or I pictures, or receives only the base layer and decodes and displays the video.
• When a link image is selected by the user, the display device performs decoding while giving top priority to the base layer. Note that if there is information indicating that the content is scalable in the HTML constituting the web page, the display device may decode up to the enhancement layer. In addition, in order to ensure real-time performance, before selection or when the communication band is extremely limited, the display device can reduce the delay between the decoding time of the first picture and the display time (the delay from the start of decoding of the content to the start of display) by decoding and displaying only forward-reference pictures (I pictures, P pictures, and B pictures with forward reference only). In addition, the display device may intentionally ignore the reference relationship of pictures, perform coarse decoding with all B pictures and P pictures treated as forward-reference, and perform normal decoding as time passes and the number of received pictures increases.
• the receiving terminal may receive, in addition to image data belonging to one or more layers, meta information such as weather or construction information, and decode them in association with each other. Note that the meta information may belong to a layer or may simply be multiplexed with the image data.
• since a car, a drone, an airplane, or the like including the receiving terminal moves, the receiving terminal can realize seamless reception and decoding while switching between the base stations ex106 to ex110 by transmitting the location information of the receiving terminal at the time of the reception request.
  • the receiving terminal can dynamically switch how much the meta information is received or how much the map information is updated according to the user's selection, the user's situation, or the state of the communication band. become.
  • the client can receive, decode, and reproduce the encoded information transmitted by the user in real time.
  • the server may perform the encoding process after performing the editing process. This can be realized, for example, by the following configuration.
• the server performs recognition processing such as detection of shooting errors, scene search, semantic analysis, and object detection from the original image or the encoded data, either in real time at the time of shooting or after storing the data. Then, based on the recognition result, the server manually or automatically corrects out-of-focus images or camera shake, deletes scenes of low importance such as scenes whose brightness is low or that are out of focus compared to other pictures, emphasizes the edges of objects, changes colors, and performs other editing. The server encodes the edited data based on the editing result.
• so that the content falls within a specific time range according to the shooting time, the server may automatically clip, based on the image processing result, not only the scenes of low importance described above but also scenes with little motion. Alternatively, the server may generate and encode a digest based on the result of the semantic analysis of the scene.
• the server may intentionally change the image of a person's face in the periphery of the screen, the inside of a house, or the like into an out-of-focus image. Further, the server may recognize whether or not the face of a person different from a person registered in advance appears in the image to be encoded, and if so, may perform processing such as applying a mosaic to the face part.
• the user may specify, from the viewpoint of copyright or the like, a person or a background area in the image that the user wants to process, and the server may perform processing such as replacing the specified area with another video or blurring it. If it is a person, the video of the face part can be replaced while tracking the person in the moving image.
  • the decoding device first receives the base layer with the highest priority and decodes and reproduces it, depending on the bandwidth.
  • the decoding device may receive the enhancement layer during this time, and may reproduce high-quality video including the enhancement layer when the reproduction is performed twice or more, such as when the reproduction is looped.
• Since the stream is scalable-encoded in this way, the video is coarse when it is not selected or when viewing has just started, but it is possible to provide an experience in which the stream gradually becomes smoother and the image improves.
• a similar experience can be provided even when the coarse stream reproduced the first time and the second stream encoded with reference to the first moving image are configured as one stream.
  • these encoding or decoding processes are generally performed in the LSI ex500 included in each terminal.
  • the LSI ex500 may be a single chip or a configuration including a plurality of chips.
• the moving image encoding or decoding software may be incorporated into some recording medium (a CD-ROM, a flexible disk, a hard disk, or the like) readable by the computer ex111 or the like, and the encoding or decoding processing may be performed using that software.
  • moving image data acquired by the camera may be transmitted. The moving image data at this time is data that has been encoded by the LSI ex500 of the smartphone ex115.
  • the LSI ex500 may be configured to download and activate application software.
  • the terminal first determines whether the terminal supports the content encoding method or has the ability to execute the specific service. If the terminal does not support the content encoding method or does not have the ability to execute a specific service, the terminal downloads a codec or application software, and then acquires and reproduces the content.
• At least the moving picture encoding device (image encoding device) or the moving picture decoding device (image decoding device) of each of the above embodiments can also be incorporated into a digital broadcasting system.
• Since multiplexed data in which video and sound are multiplexed is transmitted and received on a broadcasting radio wave using a satellite or the like, there is a difference in that such a system is suited to multicast, in contrast to the configuration of the content supply system ex100, in which unicast is easy.
  • similar applications are possible for the encoding process and the decoding process.
  • FIG. 22 is a diagram illustrating the smartphone ex115.
  • FIG. 23 is a diagram illustrating a configuration example of the smartphone ex115.
• the smartphone ex115 includes an antenna ex450 for transmitting and receiving radio waves to and from the base station ex110, a camera unit ex465 capable of capturing video and still images, and a display unit ex458 for displaying data obtained by decoding the video captured by the camera unit ex465, the video received by the antenna ex450, and the like.
• the smartphone ex115 further includes an operation unit ex466 such as a touch panel, an audio output unit ex457 such as a speaker for outputting sound or audio, an audio input unit ex456 such as a microphone for inputting audio, a memory unit ex467 that can store captured video or still images, recorded audio, received video or still images, mail, and other encoded or decoded data, and a slot unit ex464 serving as an interface unit with the SIM ex468 for authenticating access to various data. Note that an external memory may be used instead of the memory unit ex467.
• a main control unit ex460 that comprehensively controls the display unit ex458, the operation unit ex466, and the like is connected, via a bus ex470, to a power supply circuit unit ex461, an operation input control unit ex462, a video signal processing unit ex455, a camera interface unit ex463, a display control unit ex459, a modulation/demodulation unit ex452, a multiplexing/demultiplexing unit ex453, an audio signal processing unit ex454, the slot unit ex464, and the memory unit ex467.
  • the power supply circuit unit ex461 activates the smartphone ex115 by supplying power to each unit from the battery pack.
  • the smartphone ex115 performs processing such as telephone communication and data communication based on the control of the main control unit ex460 having a CPU, a ROM, a RAM, and the like.
  • An audio signal collected by the audio input unit ex456 is converted into a digital audio signal by the audio signal processing unit ex454, subjected to spread-spectrum processing by the modulation/demodulation unit ex452 and to digital-to-analog conversion and frequency conversion by the transmission/reception unit ex451, and then transmitted via the antenna ex450 (a schematic sketch of this transmit chain follows below).
  • Received data is amplified, subjected to frequency conversion and analog-to-digital conversion, despread by the modulation/demodulation unit ex452, converted into an analog audio signal by the audio signal processing unit ex454, and then output from the audio output unit ex457.
  • In the data communication mode, text, still image, or video data is sent to the main control unit ex460 via the operation input control unit ex462 by an operation of the operation unit ex466 or the like of the main body, and transmission and reception processing is performed in the same manner.
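The transmit-side audio chain described above can be pictured as a short pipeline of stages. The sketch below is only illustrative: the stage functions are placeholders named after the units in the text (ex454, ex452, ex451), not the actual signal processing of the embodiment.

    def to_digital(audio_samples):      # audio signal processing unit ex454: A/D conversion
        return [round(s * 32767) for s in audio_samples]

    def spread(samples):                # modulation/demodulation unit ex452: spread-spectrum (placeholder)
        return samples

    def to_analog_rf(samples):          # transmission/reception unit ex451: D/A + frequency conversion (placeholder)
        return samples

    def transmit_voice(audio_samples, antenna):
        signal = audio_samples
        for stage in (to_digital, spread, to_analog_rf):
            signal = stage(signal)
        antenna(signal)                 # radiated from the antenna ex450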
  • The video signal processing unit ex455 compression-encodes the video signal stored in the memory unit ex467 or the video signal input from the camera unit ex465 by the moving image encoding method described in each of the above embodiments, and sends the encoded video data to the multiplexing/demultiplexing unit ex453.
  • The audio signal processing unit ex454 encodes the audio signal collected by the audio input unit ex456 while the camera unit ex465 captures video or still images, and sends the encoded audio data to the multiplexing/demultiplexing unit ex453.
  • The multiplexing/demultiplexing unit ex453 multiplexes the encoded video data and the encoded audio data by a predetermined method; the multiplexed data is subjected to modulation processing by the modulation/demodulation unit (modulation/demodulation circuit unit) ex452 and to conversion processing by the transmission/reception unit ex451, and is then transmitted via the antenna ex450.
  • When receiving a video attached to an e-mail or a chat, or a video linked to a web page or the like, the multiplexing/demultiplexing unit ex453 demultiplexes the multiplexed data received via the antenna ex450 into a bit stream of video data and a bit stream of audio data, supplies the encoded video data to the video signal processing unit ex455 via the synchronous bus ex470, and supplies the encoded audio data to the audio signal processing unit ex454, as sketched below.
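A minimal sketch of this demultiplexing step follows. The packet representation (a list of tagged payloads) is an assumption made only for illustration; the actual multiplexed format is the predetermined method mentioned above and is not specified here.

    def demultiplex(packets):
        # Split received multiplexed data into a video bit stream and an
        # audio bit stream; each is then routed to its decoder
        # (video signal processing unit / audio signal processing unit).
        video_bitstream, audio_bitstream = [], []
        for stream_tag, payload in packets:     # e.g. ("video", b"...") or ("audio", b"...")
            if stream_tag == "video":
                video_bitstream.append(payload)
            elif stream_tag == "audio":
                audio_bitstream.append(payload)
        return b"".join(video_bitstream), b"".join(audio_bitstream)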
  • The video signal processing unit ex455 decodes the video signal by a moving image decoding method corresponding to the moving image encoding method described in each of the above embodiments, and a video or still image included in the linked moving image file is displayed on the display unit ex458 via the display control unit ex459.
  • The audio signal processing unit ex454 decodes the audio signal, and the audio is output from the audio output unit ex457. Because real-time streaming has become widespread, there may be situations in which reproducing sound is socially inappropriate for the user. Therefore, as an initial setting, a configuration that reproduces only the video data without reproducing the audio signal is preferable; the audio may be reproduced in synchronization only when the user performs an operation such as clicking on the video data (see the sketch below).
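The playback policy just described can be summarized with the following sketch. Class and method names are invented for illustration only.

    class Player:
        def __init__(self):
            self.audio_enabled = False   # initial setting: video only, audio muted

        def on_user_click(self):
            self.audio_enabled = True    # the user explicitly opted in to sound

        def render(self, video_frame, audio_frame, display, speaker):
            display(video_frame)
            if self.audio_enabled:
                speaker(audio_frame)     # audio is kept in sync with the video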
  • Although the smartphone ex115 has been described here as an example, three terminal implementations are conceivable: a transmission/reception terminal having both an encoder and a decoder, a transmission terminal having only an encoder, and a reception terminal having only a decoder.
  • Reception and transmission of multiplexed data in which audio data and the like are multiplexed with video data have been described, but the multiplexed data may also contain character data related to the video in addition to the audio data, or the video data itself may be received or transmitted instead of the multiplexed data.
  • The terminal often includes a GPU. Therefore, a configuration may be used in which a wide area is processed collectively by exploiting the performance of the GPU, using a memory shared by the CPU and the GPU or a memory whose addresses are managed so that it can be used in common. This shortens the encoding time, ensures real-time performance, and realizes low delay. In particular, it is efficient to perform the motion search, deblocking filter, SAO (Sample Adaptive Offset), and transform/quantization processes collectively in units of pictures or the like on the GPU instead of the CPU, as sketched below.
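The following sketch only illustrates the scheduling idea: whole pictures are handed to the GPU and the heavy stages are run back-to-back over a shared buffer, instead of interleaving them per block on the CPU. The stage functions are stubs; real GPU dispatch (kernels, queues) is omitted.

    def motion_search(picture):           return picture   # stub
    def deblocking_filter(picture):       return picture   # stub
    def sample_adaptive_offset(picture):  return picture   # stub
    def transform_and_quantize(picture):  return picture   # stub

    GPU_STAGES = (motion_search, deblocking_filter,
                  sample_adaptive_offset, transform_and_quantize)

    def encode_pictures(pictures):
        # "pictures" would live in memory shared by (or address-mapped for)
        # both the CPU and the GPU, so no copies are needed between stages.
        for picture in pictures:
            for stage in GPU_STAGES:      # processed collectively, picture by picture
                picture = stage(picture)
            yield picture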
  • The present disclosure is applicable to, for example, a television receiver, a digital video recorder, a car navigation system, a mobile phone, a digital camera, a digital video camera, a video conferencing system, an electronic mirror, and the like.
  • REFERENCE SIGNS LIST: 100 encoding device; 102 divider; 104 subtractor; 106 transformer; 108 quantizer; 110 entropy encoder; 112, 204 inverse quantizer; 114, 206 inverse transformer; 116, 208 adder; 118, 210 block memory; 120, 212 loop filter unit; 122, 214 frame memory; 124, 216 intra prediction unit (intra-screen prediction unit); 126, 218 inter prediction unit (inter-screen prediction unit); 128, 220 prediction control unit; 160, 260 circuit; 162, 262 memory; 200 decoding device; 202 entropy decoding unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to an encoding device (100) that includes a circuit (160) and a memory (162). Using the memory (162), the circuit (160), in an inter prediction mode in which an affine motion vector is calculated in units of the sub-blocks constituting a current block on the basis of the motion vectors of a plurality of blocks neighboring the current block in a picture of a moving image, calculates the per-sub-block affine motion vector using only uni-prediction out of uni-prediction and bi-prediction, and performs motion compensation in units of sub-blocks using the calculated affine motion vector.
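To make the per-sub-block derivation concrete, the sketch below uses the common two-control-point (4-parameter) affine model; this model choice and all names are assumptions for illustration only, not the claimed derivation. Uni-prediction here simply means that a single motion vector (one reference picture list) is produced per sub-block.

    def affine_subblock_mvs(v0, v1, block_w, block_h, sub=4):
        """v0, v1: control-point motion vectors (x, y) at the top-left and
        top-right corners of the current block, e.g. derived from neighboring
        blocks.  Returns one (uni-prediction) motion vector per sub x sub
        sub-block, sampled at the sub-block centre."""
        dvx = (v1[0] - v0[0]) / block_w
        dvy = (v1[1] - v0[1]) / block_w
        mvs = []
        for y in range(0, block_h, sub):
            row = []
            for x in range(0, block_w, sub):
                cx, cy = x + sub / 2, y + sub / 2      # sub-block centre
                mv_x = v0[0] + dvx * cx - dvy * cy     # 4-parameter affine model
                mv_y = v0[1] + dvy * cx + dvx * cy
                row.append((mv_x, mv_y))               # single MV: uni-prediction only
            mvs.append(row)
        return mvs

    # Example: a 16x16 block whose top-left corner moves by (1, 0) and whose
    # top-right corner moves by (2, 1) yields a distinct MV for each 4x4
    # sub-block, which is then used for per-sub-block motion compensation.
    example = affine_subblock_mvs((1.0, 0.0), (2.0, 1.0), 16, 16)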
PCT/JP2019/023781 2018-06-21 2019-06-14 Encoding device, decoding device, encoding method, and decoding method Ceased WO2019244809A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862688096P 2018-06-21 2018-06-21
US62/688,096 2018-06-21

Publications (1)

Publication Number Publication Date
WO2019244809A1 (fr)

Family

Family ID: 68984030

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/023781 Ceased WO2019244809A1 (fr) Encoding device, decoding device, encoding method, and decoding method

Country Status (1)

Country Link
WO (1) WO2019244809A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111050168A (zh) * 2019-12-27 2020-04-21 Zhejiang Dahua Technology Co., Ltd. Affine prediction method and related devices
US12250396B2 (en) 2019-12-27 2025-03-11 Zhejiang Dahua Technology Co., Ltd. Affine prediction method and related devices

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017148345A1 * 2016-03-01 2017-09-08 Mediatek Inc. Method and apparatus of video coding with affine motion compensation
WO2017200771A1 * 2016-05-16 2017-11-23 Qualcomm Incorporated Affine motion prediction for video coding
WO2018061563A1 * 2016-09-27 2018-04-05 Sharp Kabushiki Kaisha Affine motion vector derivation device, prediction image generation device, moving image decoding device, and moving image encoding device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017148345A1 * 2016-03-01 2017-09-08 Mediatek Inc. Method and apparatus of video coding with affine motion compensation
WO2017200771A1 * 2016-05-16 2017-11-23 Qualcomm Incorporated Affine motion prediction for video coding
WO2018061563A1 * 2016-09-27 2018-04-05 Sharp Kabushiki Kaisha Affine motion vector derivation device, prediction image generation device, moving image decoding device, and moving image encoding device

Similar Documents

Publication Publication Date Title
JP7422811B2 (ja) Non-transitory storage medium
JP6946419B2 (ja) Decoding device, decoding method, and program
JP7331052B2 (ja) Decoding device and encoding device
JP7260685B2 (ja) Encoding device and encoding method
JP6669938B2 (ja) Encoding device, decoding device, encoding method, and decoding method
JP7339890B2 (ja) Encoding device and decoding device
JPWO2018212110A1 (ja) Encoding device, decoding device, encoding method, and decoding method
JP2019017066A (ja) Encoding device, decoding device, encoding method, and decoding method
JPWO2018199050A1 (ja) Encoding device, decoding device, encoding method, and decoding method
JP7026747B2 (ja) Decoding device and decoding method
JPWO2019013235A1 (ja) Encoding device, encoding method, decoding device, and decoding method
JPWO2018225594A1 (ja) Encoding device, decoding device, encoding method, and decoding method
JP2023016991A (ja) Encoding device and decoding device
JP2021180494A (ja) Decoding device, decoding method, and non-transitory storage medium
JP6767579B2 (ja) Encoding device, encoding method, decoding device, and decoding method
WO2019049912A1 (fr) Encoding device, decoding device, encoding method, and decoding method
WO2019031369A1 (fr) Encoding device, decoding device, encoding method, and decoding method
WO2019244809A1 (fr) Encoding device, decoding device, encoding method, and decoding method
WO2019146718A1 (fr) Encoding device and method, and decoding device and method
WO2019087978A1 (fr) Encoding device, decoding device, encoding method, and decoding method
WO2019021803A1 (fr) Encoding device, decoding device, encoding method, and decoding method
WO2020050281A1 (fr) Encoding device, decoding device, encoding method, and decoding method
WO2019203036A1 (fr) Encoding device, encoding method, decoding device, and decoding method
WO2019031136A1 (fr) Encoding device, decoding device, encoding method, and decoding method
WO2018225595A1 (fr) Encoding device, decoding device, encoding method, and decoding method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19822485

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19822485

Country of ref document: EP

Kind code of ref document: A1