
WO2025084770A1 - Image encoding/decoding method and device, and recording medium storing a bitstream

Image encoding/decoding method and device, and recording medium storing a bitstream

Info

Publication number
WO2025084770A1
Authority
WO
WIPO (PCT)
Prior art keywords
current block
pixels
filter
prediction
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/KR2024/015662
Other languages
English (en)
Korean (ko)
Inventor
허진
최정아
박승욱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hyundai Motor Co
Kia Corp
Original Assignee
Hyundai Motor Co
Kia Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hyundai Motor Co and Kia Corp
Priority claimed from KR1020240141037A (KR20250054742A)
Publication of WO2025084770A1
Legal status: Pending

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
            • H04N19/10 … using adaptive coding
              • H04N19/102 … characterised by the element, parameter or selection affected or controlled by the adaptive coding
                • H04N19/117 Filters, e.g. for pre-processing or post-processing
                • H04N19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
              • H04N19/169 … characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
                • H04N19/17 … the unit being an image region, e.g. an object
                  • H04N19/176 … the region being a block, e.g. a macroblock
                • H04N19/184 … the unit being bits, e.g. of the compressed video stream
            • H04N19/50 … using predictive coding
              • H04N19/59 … involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
              • H04N19/593 … involving spatial prediction techniques
            • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
              • H04N19/82 … involving filtering within a prediction loop

Definitions

  • the present invention relates to a video encoding/decoding method, a device, and a recording medium storing a bitstream. Specifically, the present invention relates to a video encoding/decoding method, a device, and a recording medium storing a bitstream based on an extrapolation filter-based intra prediction method.
  • An image decoding method comprises the steps of: determining that a current block is predicted by an extrapolation filter-based intra-screen prediction mode; deriving filter coefficients of an extrapolation filter for intra-screen prediction of the current block based on pixels of a peripheral area of the current block; and sequentially predicting pixels of the current block in a predetermined order using the extrapolation filter, wherein pixels of the current block are predicted based on a reference pixel and the filter coefficients of the extrapolation filter, and the reference pixels are pixels at predetermined positions adjacent to pixels of the current block among pixels of the current block and the peripheral area.
  • the pixel of the current block can be predicted by applying a weight to the product of the value of the reference pixel and the filter coefficient of the extrapolation filter.
  • the weight can be determined based on the position of the pixel of the current block within the current block.
  • the method further includes a step of dividing the current block into a predetermined number of regions, and the weight can be determined based on a region among the divided regions that includes pixels of the current block.
  • the pixel of the current block predicted using the extrapolation filter can be determined by performing sampling on the current block.
  • prediction according to the extrapolation filter can be performed on the first pixel of the current block determined by sampling the current block.
  • the first pixel of the current block may be a pixel at a predetermined position of a sub-block of a predetermined size within the current block.
  • the first pixel of the current block may be a pixel located at the upper left of a 2X2 sized sub-block within the current block.
  • a second pixel among the pixels included in the current block is predicted by interpolating a first pixel of the current block, and the second pixel may be a pixel other than the first pixel among the pixels included in the current block.
  • the prediction value of the second pixel can be derived based on horizontal interpolation and vertical interpolation of the first pixel.
  • the interpolation method for predicting the second pixel is determined as one of a linear interpolation method and a nonlinear interpolation method, and the nonlinear interpolation method may include a polynomial interpolation method, a spline interpolation method, and a cubic interpolation method.
  • the interpolation method can be determined based on the size of the current block.
  • the step of determining a sampling ratio of the current block is further included, and the first pixel of the current block can be determined according to sampling of the current block based on the sampling ratio.
  • the vertical sampling rate of the current block and the horizontal sampling rate of the current block can be independently determined.
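  • as an illustration of the sampling-and-interpolation scheme summarized above, the following Python sketch predicts only the upper-left pixel of each 2x2 sub-block and fills the remaining pixels by horizontal and vertical linear interpolation; the function name, the 2:1 sampling ratio in both directions, and the use of simple linear interpolation are assumptions made for illustration, not requirements of the disclosure.

```python
import numpy as np

def fill_block_by_interpolation(first_pixels, height, width):
    """Reconstruct a full block from 'first pixels' predicted only at the
    upper-left position of each 2x2 sub-block (hypothetical sketch).

    first_pixels: (height // 2, width // 2) array of sampled predictions.
    Returns a (height, width) array in which the remaining 'second pixels'
    are filled by horizontal, then vertical, linear interpolation.
    """
    block = np.zeros((height, width), dtype=np.float64)
    block[0::2, 0::2] = first_pixels          # place the sampled first pixels

    # Horizontal interpolation on the rows that contain first pixels.
    for y in range(0, height, 2):
        for x in range(1, width, 2):
            left = block[y, x - 1]
            right = block[y, x + 1] if x + 1 < width else left
            block[y, x] = (left + right) / 2.0

    # Vertical interpolation for the remaining rows.
    for y in range(1, height, 2):
        top = block[y - 1, :]
        bottom = block[y + 1, :] if y + 1 < height else top
        block[y, :] = (top + bottom) / 2.0
    return block
```

  Independent vertical and horizontal sampling ratios, as mentioned above, would simply change the strides used in this sketch.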
  • the step of deriving filter coefficients of an extrapolation filter for intra-screen prediction of the current block based on pixels in a surrounding area of the current block may include collecting input pixels and output pixels in units of predetermined sub-blocks in the surrounding area, and deriving the filter coefficients using a correlation between the input pixels and the output pixels.
  • the position of the input pixel in the predetermined sub-block unit is determined based on the sampling rate of the current block, and the size of the predetermined sub-block unit can be determined according to the size of an extrapolation filter.
  • the shape of the extrapolation filter can be determined based on the sampling ratio.
  • the shape of the extrapolation filter can be any one of a 4X4 shape, a 2X8 shape, and an 8X2 shape.
  • a video encoding method includes the steps of: determining that a current block is predicted by an extrapolation filter-based intra-screen prediction mode; deriving filter coefficients of an extrapolation filter for intra-screen prediction of the current block based on pixels of a peripheral area of the current block; and predicting pixels of the current block sequentially in a predetermined order using the extrapolation filter, wherein pixels of the current block are predicted based on reference pixels and the filter coefficients of the extrapolation filter, and the reference pixels are pixels at predetermined positions adjacent to pixels of the current block among pixels of the current block and the peripheral area.
  • a non-transitory computer-readable recording medium can store a bitstream generated by the image encoding method.
  • a bitstream transmission method can transmit a bitstream generated by the image encoding method.
  • a video encoding/decoding method and device with improved encoding/decoding efficiency can be provided.
  • the prediction efficiency of an intra-screen prediction mode based on an extrapolation filter can be improved.
  • Figure 1 is a block diagram showing the configuration according to one embodiment of an encoding device to which the present invention is applied.
  • FIG. 2 is a block diagram showing the configuration of one embodiment of a decoding device to which the present invention is applied.
  • FIG. 3 is a diagram schematically showing a video coding system to which the present invention can be applied.
  • FIG. 4 is a drawing for explaining an L-shaped peripheral area according to one embodiment of the present invention.
  • FIG. 5 is a drawing for explaining an upper peripheral area according to one embodiment of the present invention.
  • FIG. 6 is a drawing for explaining a left peripheral area according to one embodiment of the present invention.
  • FIG. 7 is a drawing for explaining the shape of an extrapolation filter according to one embodiment of the present invention.
  • FIG. 8 is a diagram for explaining an intra-screen prediction method based on an extrapolation filter according to one embodiment of the present invention.
  • FIG. 9 is a diagram for explaining an intra-screen prediction method based on an extrapolation filter applying weights according to one embodiment of the present invention.
  • FIG. 10 is a flowchart illustrating an extrapolation filter-based intra-screen prediction method in which sampling is performed, according to one embodiment of the present invention.
  • FIG. 11 is a diagram for explaining an extrapolation filter-based intra-screen prediction method in which sampling is performed, according to one embodiment of the present invention.
  • FIG. 12 is a diagram for explaining an extended-size extrapolation filter in an extrapolation filter-based intra-screen prediction method in which sampling is performed, according to one embodiment of the present invention.
  • FIGS. 13 and 14 are diagrams for explaining interpolation in extrapolation filter-based intra-screen prediction in which sampling is performed, according to one embodiment of the present invention.
  • Figure 15 is a flowchart illustrating an image decoding method according to one embodiment of the present invention.
  • FIG. 16 is a drawing exemplarily showing a content streaming system to which an embodiment according to the present invention can be applied.
  • An image decoding method comprises the steps of: determining that a current block is predicted by an extrapolation filter-based intra-screen prediction mode; deriving filter coefficients of an extrapolation filter for intra-screen prediction of the current block based on pixels of a peripheral area of the current block; and sequentially predicting pixels of the current block in a predetermined order using the extrapolation filter, wherein pixels of the current block are predicted based on a reference pixel and the filter coefficients of the extrapolation filter, and the reference pixels are pixels at predetermined positions adjacent to pixels of the current block among pixels of the current block and the peripheral area.
  • first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms are only used for the purpose of distinguishing one component from another.
  • the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
  • the term and/or includes a combination of a plurality of related described items or any item among a plurality of related described items.
  • the components shown in the embodiments of the present invention are depicted independently to indicate different characteristic functions, which does not mean that each component is formed as a separate hardware or software unit. That is, each component is listed and included as a separate component for convenience of explanation; at least two of the components may be combined to form a single component, or one component may be divided into multiple components that perform its function, and such integrated embodiments and separated embodiments of each component are also included in the scope of the present invention as long as they do not deviate from the essence of the present invention.
  • the terminology used in the present invention is only used to describe specific embodiments and is not intended to limit the present invention.
  • the singular expression includes the plural expression unless the context clearly indicates otherwise.
  • some components of the present invention are not essential components that perform essential functions in the present invention and may be optional components that merely enhance performance.
  • the present invention may be implemented by including only essential components for implementing the essence of the present invention excluding components used only for enhancing performance, and a structure including only essential components excluding optional components used only for enhancing performance is also included in the scope of the present invention.
  • the term "at least one” can mean one of a number greater than or equal to 1, such as 1, 2, 3, and 4.
  • the term "a plurality of” can mean one of a number greater than or equal to 2, such as 2, 3, and 4.
  • an image may mean one picture constituting a video, and may also represent the video itself.
  • “encoding and/or decoding of an image” may mean “encoding and/or decoding of a video,” and may also mean “encoding and/or decoding of one of the images constituting the video.”
  • the target image may be an encoding target image that is a target of encoding and/or a decoding target image that is a target of decoding.
  • the target image may be an input image input to an encoding device and may be an input image input to a decoding device.
  • the target image may have the same meaning as the current image.
  • encoder and image encoding device may be used interchangeably and have the same meaning.
  • decoder and image decoding device may be used interchangeably and have the same meaning.
  • image and picture may be used with the same meaning and may be used interchangeably.
  • target block may be an encoding target block that is a target of encoding and/or a decoding target block that is a target of decoding.
  • target block may be a current block that is a target of current encoding and/or decoding.
  • target block and current block may be used with the same meaning and may be used interchangeably.
  • a coding tree unit may be composed of one luma component (Y) coding tree block (CTB) and two chroma component (Cb, Cr) coding tree blocks related to it.
  • sample may represent a basic unit constituting a block.
  • Figure 1 is a block diagram showing the configuration according to one embodiment of an encoding device to which the present invention is applied.
  • the encoding device (100) may be an encoder, a video encoding device, or an image encoding device.
  • the video may include one or more images.
  • the encoding device (100) may sequentially encode one or more images.
  • an encoding device (100) may include an image segmentation unit (110), an intra prediction unit (120), a motion prediction unit (121), a motion compensation unit (122), a switch (115), a subtractor (113), a transformation unit (130), a quantization unit (140), an entropy encoding unit (150), an inverse quantization unit (160), an inverse transformation unit (170), an adder (117), a filter unit (180), and a reference picture buffer (190).
  • the encoding device (100) can generate a bitstream including encoded information through encoding an input image, and output the generated bitstream.
  • the generated bitstream can be stored in a computer-readable recording medium, or can be streamed through a wired/wireless transmission medium.
  • the image segmentation unit (110) can segment the input image into various forms to increase the efficiency of video encoding/decoding. That is, the input video is composed of multiple pictures, and one picture can be hierarchically segmented and processed for compression efficiency, parallel processing, etc. For example, one picture can be segmented into one or multiple tiles or slices, and then segmented again into multiple CTUs (Coding Tree Units). Alternatively, one picture can be segmented into multiple sub-pictures defined as groups of rectangular slices, and each sub-picture can be segmented into the tiles/slices. Here, the sub-pictures can be utilized to support the function of partially and independently encoding/decoding and transmitting the picture.
  • since multiple sub-pictures can be individually restored, they have the advantage of being easy to edit in applications that compose multi-channel inputs into one picture.
  • tiles can be segmented horizontally to generate bricks.
  • a brick can be utilized as a basic unit of intra-picture parallel processing.
  • one CTU can be recursively split into a quad tree (QT: Quadtree), and the terminal node of the split can be defined as a CU (Coding Unit).
  • the CU can be split into a prediction unit (PU) and a transformation unit (TU) to perform prediction and transformation. Meanwhile, the CU can be utilized as a prediction unit and/or a transformation unit itself.
  • each CTU can be recursively split into not only a quad tree (QT) but also a multi-type tree (MTT: Multi-Type Tree).
  • Splitting of a CTU into a multi-type tree can start from the terminal node of a QT, and the MTT can be composed of a BT (Binary Tree) and a TT (Triple Tree).
  • the MTT structure can be distinguished into vertical binary split mode (SPLIT_BT_VER), horizontal binary split mode (SPLIT_BT_HOR), vertical ternary split mode (SPLIT_TT_VER), and horizontal ternary split mode (SPLIT_TT_HOR).
  • the minimum block size (MinQTSize) of the quad tree of the luma block during splitting can be set to 16x16, the maximum block size (MaxBtSize) of the binary tree to 128x128, the maximum block size (MaxTtSize) of the triple tree to 64x64, the minimum block sizes (MinBtSize, MinTtSize) of the binary tree and the triple tree to 4x4, and the maximum depth (MaxMttDepth) of the multi-type tree to 4.
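  • as a concrete illustration of how these example limits interact, the sketch below (a hypothetical helper, not part of the disclosure) reports which further splits are available for a square luma block; real partitioning decisions also depend on block shape, slice type, and other constraints.

```python
# Example partitioning limits quoted above (luma component).
MIN_QT_SIZE = 16       # MinQTSize: minimum quad-tree leaf size
MAX_BT_SIZE = 128      # MaxBtSize: maximum size for a binary split
MAX_TT_SIZE = 64       # MaxTtSize: maximum size for a ternary split
MIN_BT_SIZE = 4        # MinBtSize: minimum size for a binary split
MIN_TT_SIZE = 4        # MinTtSize: minimum size for a ternary split
MAX_MTT_DEPTH = 4      # MaxMttDepth: maximum multi-type tree depth

def allowed_splits(block_size, mtt_depth):
    """Return the split types still allowed for a square block (sketch)."""
    splits = []
    # Quad-tree splits are only considered before any multi-type tree split.
    if mtt_depth == 0 and block_size > MIN_QT_SIZE:
        splits.append("QT")
    if mtt_depth < MAX_MTT_DEPTH:
        if MIN_BT_SIZE * 2 <= block_size <= MAX_BT_SIZE:
            splits.append("BT")    # SPLIT_BT_VER / SPLIT_BT_HOR
        if MIN_TT_SIZE * 4 <= block_size <= MAX_TT_SIZE:
            splits.append("TT")    # SPLIT_TT_VER / SPLIT_TT_HOR
    return splits

print(allowed_splits(64, 0))   # ['QT', 'BT', 'TT']
print(allowed_splits(128, 1))  # ['BT'] : too large for TT, QT no longer allowed
```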
  • a dual tree that uses different CTU split structures for luma and chrominance components can be applied to improve the encoding efficiency of the I slice.
  • the luminance and chrominance CTBs (Coding Tree Blocks) within the CTU can be split into a single tree sharing the coding tree structure.
  • the encoding device (100) may perform encoding on the input image in the intra mode and/or the inter mode.
  • the encoding device (100) may perform encoding on the input image in a third mode (e.g., IBC mode, Palette mode, etc.) other than the intra mode and the inter mode.
  • the third mode may be classified as the intra mode or the inter mode for convenience of explanation. In the present invention, the third mode will be classified and described separately only when a specific explanation is required.
  • when the intra mode is used as the prediction mode, the switch (115) can be switched to intra, and when the inter mode is used as the prediction mode, the switch (115) can be switched to inter.
  • the intra mode can mean an intra-screen prediction mode
  • the inter mode can mean an inter-screen prediction mode.
  • the encoding device (100) can generate a prediction block for an input block of an input image.
  • the encoding device (100) can encode a residual block using a residual of the input block and the prediction block.
  • the input image can be referred to as a current image which is a current encoding target.
  • the input block can be referred to as a current block which is a current encoding target or an encoding target block.
  • the intra prediction unit (120) can use samples of blocks already encoded/decoded around the current block as reference samples.
  • the intra prediction unit (120) can perform spatial prediction on the current block using the reference sample, and can generate prediction samples for the input block through spatial prediction.
  • intra prediction can mean prediction within the screen.
  • non-directional prediction modes such as DC mode and Planar mode and directional prediction modes (e.g., 65 directions) can be applied.
  • the intra prediction method can be expressed as an intra prediction mode or an intra-screen prediction mode.
  • the motion prediction unit (121) can search for an area that best matches the input block from the reference image during the motion prediction process, and can derive a motion vector using the searched area. At this time, a search region can be used as the area in which the search is performed.
  • the reference image can be stored in the reference picture buffer (190).
  • when encoding/decoding of the reference image has been processed, the reference image can be stored in the reference picture buffer (190).
  • the motion compensation unit (122) can generate a prediction block for the current block by performing motion compensation using a motion vector.
  • inter prediction can mean inter-screen prediction or motion compensation.
  • the above motion prediction unit (121) and motion compensation unit (122) can generate a prediction block by applying an interpolation filter to a portion of an area within a reference image when the value of a motion vector does not have an integer value.
  • the AFFINE mode and the SbTMVP (Subblock-based Temporal Motion Vector Prediction) mode of sub-PU based prediction, and the MMVD (Merge with MVD) mode and the GPM (Geometric Partitioning Mode) mode of PU based prediction can be applied.
  • in addition, inter prediction techniques such as HMVP (History based MVP), PAMVP (Pairwise Average MVP), CIIP (Combined Intra/Inter Prediction), AMVR (Adaptive Motion Vector Resolution), BDOF (Bi-Directional Optical Flow), BCW (Bi-prediction with CU Weights), LIC (Local Illumination Compensation), TM (Template Matching), and OBMC (Overlapped Block Motion Compensation) can be applied.
  • AFFINE mode is a technology that is used in both AMVP and MERGE modes and also has high encoding efficiency. Since the conventional video coding standard performs MC (Motion Compensation) by considering only the parallel translation of the block, there was a disadvantage in that it could not properly compensate for motions that occur in reality, such as zoom in/out and rotation. To supplement this, a four-parameter affine motion model using two control point motion vectors (CPMV) and a six-parameter affine motion model using three control point motion vectors can be applied to inter prediction.
  • CPMV is a vector representing the affine motion model at one of the upper-left, upper-right, and lower-left corners of the current block.
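  • one common formulation of the four-parameter affine model, shown below as a sketch, derives a per-sample (or per-sub-block) motion vector from the two CPMVs at the upper-left and upper-right corners; the formulation reflects general knowledge of affine motion compensation and is not quoted from this disclosure.

```python
def affine_mv_4param(cpmv0, cpmv1, block_width, x, y):
    """Sketch of a 4-parameter affine motion field (zoom + rotation + translation).

    cpmv0: (mv0x, mv0y) control-point MV at the upper-left corner.
    cpmv1: (mv1x, mv1y) control-point MV at the upper-right corner.
    (x, y): position relative to the upper-left corner of the block.
    """
    a = (cpmv1[0] - cpmv0[0]) / block_width    # horizontal gradient of the MV field
    b = (cpmv1[1] - cpmv0[1]) / block_width    # vertical gradient of the MV field
    mv_x = a * x - b * y + cpmv0[0]
    mv_y = b * x + a * y + cpmv0[1]
    return mv_x, mv_y

# Zoom-like example: samples farther from the upper-left corner get larger MVs.
print(affine_mv_4param((0.0, 0.0), (2.0, 0.0), 16, 8, 8))   # (1.0, 1.0)
```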
  • the subtractor (113) can generate a residual block using the difference between the input block and the predicted block.
  • the residual block may also be referred to as a residual signal.
  • the residual signal may mean the difference between the original signal and the predicted signal.
  • the residual signal may be a signal generated by transforming, quantizing, or transforming and quantizing the difference between the original signal and the predicted signal.
  • the residual block may be a residual signal in block units.
  • the transform unit (130) can perform a transform on the residual block to generate a transform coefficient and output the generated transform coefficient.
  • the transform coefficient can be a coefficient value generated by performing a transform on the residual block.
  • the transform unit (130) can also skip the transform on the residual block.
  • a quantized level can be generated by applying quantization to a transform coefficient or a residual signal.
  • a quantized level may also be referred to as a transform coefficient.
  • a 4x4 luminance residual block generated through intra-screen prediction can be transformed using a basis vector based on DST (Discrete Sine Transform), and a basis vector based on DCT (Discrete Cosine Transform) can be used to transform the remaining residual blocks.
  • a transform block can be divided into a quad tree shape for one block using RQT (Residual Quad Tree) technology, and after performing transformation and quantization on each transform block divided through RQT, a coded block flag (cbf) can be transmitted to increase encoding efficiency when all coefficients become 0.
  • the Multiple Transform Selection (MTS) technique can be applied to perform transformation by selectively using multiple transformation bases. That is, instead of dividing the CU into TUs through the RQT, a function similar to TU division can be performed through the Sub-block Transform (SBT) technique.
  • the SBT is applied only to inter-screen prediction blocks, and unlike the RQT, the current block can be divided into 1 ⁇ 2 or 1 ⁇ 4 sizes in the vertical or horizontal direction, and then the transformation can be performed on only one of the blocks. For example, if it is divided vertically, the transformation can be performed on the leftmost or rightmost block, and if it is divided horizontally, the transformation can be performed on the topmost or bottommost block.
  • LFNST (Low Frequency Non-Separable Transform), a secondary transform technique that additionally transforms the residual signal converted to the frequency domain through DCT or DST, can be applied.
  • LFNST additionally performs a transform on the low-frequency region of 4x4 or 8x8 in the upper left, so that the residual coefficients can be concentrated in the upper left.
  • the quantization unit (140) can generate a quantized level by quantizing a transform coefficient or a residual signal according to a quantization parameter (QP), and can output the generated quantized level. At this time, the quantization unit (140) can quantize the transform coefficient using a quantization matrix.
  • a quantizer using QP values of 0 to 51 can be used.
  • QP values of 0 to 63 can also be used.
  • DQ (Dependent Quantization) performs quantization using two quantizers (e.g., Q0 and Q1), and even without signaling information about the use of a specific quantizer, the quantizer to be used for the next transform coefficient can be selected based on the current state through a state transition model.
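  • the sketch below gives a rough feel for this dependent-quantization idea: two reconstruction grids are alternated according to a small state machine driven by the parity of the previously decoded level, so no extra signaling is needed; the 4-state transition table and the reconstruction rule used here are illustrative assumptions, not values taken from the disclosure.

```python
# Hypothetical 4-state transition table: NEXT_STATE[state][level & 1].
NEXT_STATE = {0: (0, 2), 1: (2, 0), 2: (1, 3), 3: (3, 1)}

def dequantize_dq(levels, step):
    """Reconstruct coefficients with two quantizers Q0/Q1 selected by state (sketch)."""
    state = 0
    recon = []
    for level in levels:
        # States 0,1 -> Q0 (grid aligned to multiples of step);
        # states 2,3 -> Q1 (grid shifted by half a step) -- an illustrative choice.
        offset = 0.0 if state < 2 else 0.5
        sign = (level > 0) - (level < 0)
        recon.append((level + sign * offset) * step)
        state = NEXT_STATE[state][level & 1]   # decoder tracks the same state
    return recon

print(dequantize_dq([2, -1, 0, 3], step=1.0))
```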
  • the entropy encoding unit (150) can generate a bitstream by performing entropy encoding according to a probability distribution on values produced by the quantization unit (140) or coding parameter values produced in the encoding process, and can output the bitstream.
  • the entropy encoding unit (150) can perform entropy encoding on information about image samples and information for decoding the image. For example, information for decoding the image can include syntax elements, etc.
  • the entropy encoding unit (150) can use an encoding method such as exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), or Context-Adaptive Binary Arithmetic Coding (CABAC) for entropy encoding.
  • the entropy encoding unit (150) can perform entropy encoding using a Variable Length Coding/Code (VLC) table.
  • the entropy encoding unit (150) may derive a binarization method of a target symbol and a probability model of a target symbol/bin, and then perform arithmetic encoding using the derived binarization method, probability model, and context model.
  • when applying CABAC, in order to reduce the size of the probability table stored in the decoding device, the table-based probability update method can be changed to a probability update method using a simple formula and applied.
  • two different probability models can be used to obtain more accurate symbol probability values.
  • the entropy encoding unit (150) can change a two-dimensional block form coefficient into a one-dimensional vector form through a transform coefficient scanning method to encode a transform coefficient level (quantized level).
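  • the coefficient-scanning step can be pictured with a small example; the up-right diagonal order used below is one common scan pattern and is assumed here only for illustration.

```python
def diagonal_scan(block):
    """Flatten a 2D coefficient block into a 1D list along up-right diagonals
    (one common scan order, assumed for illustration)."""
    h, w = len(block), len(block[0])
    order = []
    for d in range(h + w - 1):        # anti-diagonal index d = x + y
        y = min(d, h - 1)
        x = d - y
        while y >= 0 and x < w:
            order.append(block[y][x])
            y -= 1
            x += 1
    return order

# [[a, b],
#  [c, d]]  ->  [a, c, b, d]
print(diagonal_scan([[1, 2], [3, 4]]))   # [1, 3, 2, 4]
```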
  • Coding parameters may include information (flags, indexes, etc.) encoded in an encoding device (100) and signaled to a decoding device (200), such as syntax elements, as well as information derived during an encoding process or a decoding process, and may mean information necessary when encoding or decoding an image.
  • signaling a flag or index may mean that the encoder entropy encodes the flag or index and includes it in the bitstream, and that the decoder entropy decodes the flag or index from the bitstream.
  • the encoded current image can be used as a reference image for other images to be processed later. Therefore, the encoding device (100) can restore or decode the encoded current image again, and store the restored or decoded image as a reference image in the reference picture buffer (190).
  • the quantized level can be dequantized in the dequantization unit (160) and inverse transformed in the inverse transform unit (170).
  • the dequantized and/or inverse transformed coefficients can be combined with a prediction block through an adder (117), and a reconstructed block can be generated by combining the dequantized and/or inverse transformed coefficients and the prediction block.
  • the dequantized and/or inverse transformed coefficients mean coefficients on which at least one of dequantization and inverse transformation has been performed, and may mean a reconstructed residual block.
  • the dequantization unit (160) and the inverse transform unit (170) can be performed in the reverse process of the quantization unit (140) and the transform unit (130).
  • the restoration block may pass through a filter unit (180).
  • the filter unit (180) may apply a deblocking filter, a sample adaptive offset (SAO), an adaptive loop filter (ALF), a bilateral filter (BIF), LMCS (Luma Mapping with Chroma Scaling), etc. as a filtering technique, in whole or in part, to the restoration sample, restoration block, or restoration image.
  • the filter unit (180) may also be called an in-loop filter. In this case, the term in-loop filter may also be used as a name that excludes LMCS.
  • the deblocking filter can remove block distortion that occurs at the boundary between blocks.
  • different filters can be applied depending on the required deblocking filtering strength.
  • a sample adaptive offset can be used to add an appropriate offset value to the sample value to compensate for the encoding error.
  • the sample adaptive offset can correct the offset from the original image on a sample basis for the image on which deblocking has been performed.
  • a method can be used in which the samples included in the image are divided into a certain number of regions, and then the region to be offset is determined and the offset is applied to the region, or a method can be used in which the offset is applied by considering the edge information of each sample.
  • Bilateral filter can also compensate for the offset from the original image on a sample-by-sample basis for the deblocked image.
  • An adaptive loop filter can perform filtering based on a comparison value between a restored image and an original image. After dividing samples included in an image into a predetermined group, a filter to be applied to each group can be determined, and filtering can be performed differentially for each group. Information related to whether to apply an adaptive loop filter can be signaled for each coding unit (CU), and the shape and filter coefficients of the adaptive loop filter to be applied can vary for each block.
  • LMCS (Luma Mapping with Chroma Scaling) can be composed of luma mapping (LM) and chroma scaling (CS).
  • LMCS can be utilized as an HDR correction technique that reflects the characteristics of HDR (High Dynamic Range) images.
  • the restored block or restored image that has passed through the filter unit (180) may be stored in the reference picture buffer (190).
  • the restored block that has passed through the filter unit (180) may be a part of the reference image.
  • the reference image may be a restored image composed of restored blocks that have passed through the filter unit (180).
  • the stored reference image may be used for inter-screen prediction or motion compensation thereafter.
  • FIG. 2 is a block diagram showing the configuration of one embodiment of a decoding device to which the present invention is applied.
  • the decoding device (200) may be a decoder, a video decoding device, or an image decoding device.
  • the decoding device (200) may include an entropy decoding unit (210), an inverse quantization unit (220), an inverse transformation unit (230), an intra prediction unit (240), a motion compensation unit (250), an adder (201), a switch (203), a filter unit (260), and a reference picture buffer (270).
  • the decoding device (200) can receive a bitstream output from the encoding device (100).
  • the decoding device (200) can receive a bitstream stored in a computer-readable recording medium, or can receive a bitstream streamed through a wired/wireless transmission medium.
  • the decoding device (200) can perform decoding on the bitstream in an intra mode or an inter mode.
  • the decoding device (200) can generate a restored image or a decoded image through decoding, and can output the restored image or the decoded image.
  • when the prediction mode used for decoding is the intra mode, the switch (203) can be switched to intra. If the prediction mode used for decoding is the inter mode, the switch (203) can be switched to inter.
  • the decoding device (200) can obtain a reconstructed residual block by decoding the input bitstream and can generate a prediction block. When the reconstructed residual block and the prediction block are obtained, the decoding device (200) can generate a reconstructed block to be decoded by adding the reconstructed residual block and the prediction block.
  • the decoding target block can be referred to as a current block.
  • the entropy decoding unit (210) can generate symbols by performing entropy decoding according to a probability distribution for the bitstream.
  • the generated symbols can include symbols in the form of quantized levels.
  • the entropy decoding method can be the reverse process of the entropy encoding method described above.
  • the entropy decoding unit (210) can change a one-dimensional vector-shaped coefficient into a two-dimensional block-shaped coefficient through a transform coefficient scanning method to decode a transform coefficient level (quantized level).
  • the quantized level can be dequantized in the dequantization unit (220) and inverse transformed in the inverse transform unit (230).
  • a restored residual block can be generated as a result of the dequantization and/or inverse transformation of the quantized level.
  • the dequantization unit (220) can apply a quantization matrix to the quantized level.
  • the dequantization unit (220) and the inverse transform unit (230) applied to the decoding device can apply the same technology as the dequantization unit (160) and the inverse transform unit (170) applied to the encoding device described above.
  • the intra prediction unit (240) can generate a prediction block by performing spatial prediction on the current block using sample values of already decoded blocks surrounding the block to be decoded.
  • the intra prediction unit (240) applied to the decoding device can apply the same technology as the intra prediction unit (120) applied to the encoding device described above.
  • the motion compensation unit (250) can perform motion compensation using a motion vector and a reference image stored in the reference picture buffer (270) for the current block to generate a prediction block.
  • the motion compensation unit (250) can apply an interpolation filter to a part of the reference image to generate a prediction block when the value of the motion vector does not have an integer value.
  • the motion compensation unit (250) applied to the decoding device can apply the same technology as the motion compensation unit (122) applied to the encoding device described above.
  • the adder (201) can add the restored residual block and the prediction block to generate a restored block.
  • the filter unit (260) can apply at least one of an Inverse-LMCS, a deblocking filter, a sample adaptive offset, and an adaptive loop filter to the restored block or the restored image.
  • the filter unit (260) applied to the decoding device can apply the same filtering technology as that applied to the filter unit (180) applied to the encoding device described above.
  • the filter unit (260) can output a restored image.
  • the restored block or restored image can be stored in the reference picture buffer (270) and used for inter prediction.
  • the restored block that has passed through the filter unit (260) can be a part of the reference image.
  • the reference image can be a restored image composed of restored blocks that have passed through the filter unit (260).
  • the stored reference image can be used for inter-screen prediction or motion compensation thereafter.
  • FIG. 3 is a diagram schematically showing a video coding system to which the present invention can be applied.
  • a video coding system may include an encoding device (10) and a decoding device (20).
  • the encoding device (10) may transmit encoded video and/or image information or data to the decoding device (20) in the form of a file or streaming through a digital storage medium or a network.
  • An encoding device (10) may include a video source generating unit (11), an encoding unit (12), and a transmitting unit (13).
  • a decoding device (20) may include a receiving unit (21), a decoding unit (22), and a rendering unit (23).
  • the encoding unit (12) may be called a video/image encoding unit, and the decoding unit (22) may be called a video/image decoding unit.
  • the transmitting unit (13) may be included in the encoding unit (12).
  • the receiving unit (21) may be included in the decoding unit (22).
  • the rendering unit (23) may include a display unit, and the display unit may be configured as a separate device or an external component.
  • the video source generation unit (11) can obtain a video/image through a process of capturing, synthesizing, or generating a video/image.
  • the video source generation unit (11) can include a video/image capture device and/or a video/image generation device.
  • the video/image capture device can include, for example, one or more cameras, a video/image archive including previously captured video/image, etc.
  • the video/image generation device can include, for example, a computer, a tablet, a smartphone, etc., and can (electronically) generate a video/image.
  • a virtual video/image can be generated through a computer, etc., and in this case, the video/image capture process can be replaced with a process of generating related data.
  • the encoding unit (12) can encode the input video/image.
  • the encoding unit (12) can perform a series of procedures such as prediction, transformation, and quantization for compression and encoding efficiency.
  • the encoding unit (12) can output encoded data (encoded video/image information) in the form of a bitstream.
  • the detailed configuration of the encoding unit (12) can also be configured in the same manner as the encoding device (100) of FIG. 1 described above.
  • the transmission unit (13) can transmit encoded video/image information or data output in the form of a bitstream to the reception unit (21) of the decoding device (20) through a digital storage medium or a network in the form of a file or streaming.
  • the digital storage medium can include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, SSD, etc.
  • the transmission unit (13) can include an element for generating a media file through a predetermined file format and can include an element for transmission through a broadcasting/communication network.
  • the reception unit (21) can extract/receive the bitstream from the storage medium or network and transmit it to the decoding unit (22).
  • the decoding unit (22) can decode video/image by performing a series of procedures such as inverse quantization, inverse transformation, and prediction corresponding to the operation of the encoding unit (12).
  • the detailed configuration of the decoding unit (22) can also be configured in the same manner as the decoding device (200) of FIG. 2 described above.
  • the rendering unit (23) can render the decoded video/image.
  • the rendered video/image can be displayed through the display unit.
  • an improved extrapolation filter-based intra prediction method is proposed. Unlike the conventional extrapolation filter-based intra prediction, the improved extrapolation filter-based intra prediction method can improve prediction accuracy by applying weights in the prediction block generation process, and can reduce prediction complexity by performing sampling in the extrapolation filter coefficient derivation process.
  • An intra-screen prediction method based on an extrapolation filter may refer to an intra-screen prediction method in which a prediction block is generated using an extrapolation filter.
  • the extrapolation filter may refer to a filter in which filter coefficients are derived based on pixels in a surrounding area of a current block.
  • the filter coefficient of the extrapolation filter is derived based on the pixels of the surrounding area of the current block, and the pixel of the current block can be predicted based on the reference pixel and the filter coefficient of the extrapolation filter.
  • the reference pixel can be a pixel adjacent to the pixel of the current block on which prediction is performed among the pixels of the current block and the surrounding area of the current block.
  • Figures 4 to 6 are diagrams for explaining a peripheral area that is the basis for deriving filter coefficients according to one embodiment of the present invention.
  • the filter coefficients of the extrapolation filter are derived based on pixels in the peripheral area of the current block.
  • the peripheral area of the current block may mean a reconstructed area adjacent to the current block.
  • FIG. 4 is a drawing for explaining an L-shaped peripheral area according to one embodiment of the present invention.
  • the surrounding area that serves as the basis for deriving filter coefficients may be an L-shaped surrounding area (403) that includes the lower left, left, upper left, upper and upper right areas of the current block (400).
  • the size of the surrounding area can be variably determined based on the shape of the current block (400) and the shape of the extrapolation filter.
  • the left size (leftSize, 404), the top size (aboveSize, 405), the height (Rec_height, 406), and the width (Rec_width, 407) of the surrounding area can be determined based on the shape of the current block and the shape of the extrapolation filter. For example, as the width (Cur_width, 401) of the current block increases, the height (406) and the top size (405) of the surrounding area can increase, and as the height (Cur_height, 402) of the current block increases, the width (407) and the left size (404) of the surrounding area can increase.
  • as the width (fWidth, 409) of the extrapolation filter increases, the width (407) and the left size (404) of the surrounding area may increase, and as the height (fHeight, 408) of the extrapolation filter increases, the height (406) and the top size (405) of the surrounding area may increase.
  • FIG. 5 is a drawing for explaining an upper peripheral area according to one embodiment of the present invention.
  • the surrounding area that serves as the basis for deriving filter coefficients may be an irregular surrounding area (503) that includes the upper left, upper, and upper right areas of the current block (500).
  • the size of the surrounding area can be variably determined based on the shape of the current block (500) and the shape of the extrapolation filter.
  • the left size (leftSize, 504), the top size (aboveSize, 505), and the width (Rec_width, 506) of the surrounding area can be determined based on the shape of the current block and the shape of the extrapolation filter. For example, as the width (Cur_width, 501) of the current block increases, the top size (505) of the surrounding area can increase, and as the height (Cur_height, 502) of the current block increases, the width (506) and the left size (504) of the surrounding area can increase.
  • as the width (fWidth, 507) of the extrapolation filter increases, the width (506) and the left size (504) of the surrounding area can increase, and as the height (fHeight, 508) of the extrapolation filter increases, the top size (505) of the surrounding area can increase.
  • FIG. 6 is a drawing for explaining a left peripheral area according to one embodiment of the present invention.
  • the surrounding area that serves as the basis for deriving filter coefficients may be an irregular surrounding area (603) that includes the upper left, left, and lower left areas of the current block (600).
  • the size of the surrounding area can be variably determined based on the shape of the current block (600) and the shape of the extrapolation filter.
  • the left size (leftSize, 604), the top size (aboveSize, 605), and the height (Rec_height, 606) of the surrounding area can be determined based on the shape of the current block and the shape of the extrapolation filter. For example, as the width (Cur_width, 601) of the current block increases, the top size (605) and the height (606) of the surrounding area can increase, and as the height (Cur_height, 602) of the current block increases, the left size (604) of the surrounding area can increase.
  • as the width (fWidth) of the extrapolation filter increases, the left size (604) of the surrounding area can increase, and as the height (fHeight, 608) of the extrapolation filter increases, the top size (605) of the surrounding area can increase.
  • information about the size and shape of the surrounding area, which is the basis for deriving filter coefficients, can be determined by the encoder and transmitted to the decoder.
  • FIGS. 4 to 6 illustrate three shapes of surrounding areas, but this is only one example, and the shape of the surrounding area, which is the basis for deriving the filter coefficients of the extrapolation filter, can be any shape.
  • the filter coefficients of the extrapolation filter for prediction within the screen of the current block can be derived based on the pixels of the determined surrounding area.
  • FIG. 7 is a drawing for explaining the shape of an extrapolation filter according to one embodiment of the present invention.
  • the shape of the extrapolation filter can be a 4X4 shape (700), an 8X2 shape (701), or a 2X8 shape (702).
  • the black area of each type of extrapolation filter may represent an input pixel of the extrapolation filter-based intra prediction mode (input of EIP), and the white area may represent an output pixel of the extrapolation filter-based intra prediction mode (output of EIP).
  • the extrapolation filter can move one pixel unit in the surrounding area, which is the basis for deriving the filter coefficients, to collect input pixels and output pixels of the extrapolation filter-based intra-screen prediction mode.
  • the extrapolation filter coefficients can be derived based on the collected input pixels and output pixels.
  • the extrapolation filter coefficients can be derived by calculating an auto-correlation matrix and a cross-correlation vector based on the collected input pixels and output pixels.
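  • written out, this derivation amounts to a least-squares (Wiener-style) solve; the formulation below is a plausible reconstruction consistent with the description above, not an equation reproduced from the disclosure. With each collected input-pixel vector stacked as a row of X and the corresponding output pixels stacked in the vector y, the filter coefficients c satisfy:

$$
R = X^{\top} X \ \text{(auto-correlation matrix)}, \qquad
p = X^{\top} y \ \text{(cross-correlation vector)}, \qquad
c = R^{-1} p .
$$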
  • information about the shape of the extrapolation filter can be determined by the encoder and transmitted to the decoder.
  • the shape of the extrapolation filter can be predefined in the encoder/decoder.
  • the shape of the extrapolation filter in Fig. 7 is only one example, and the extrapolation filter can have any shape and size.
  • the extrapolation filter moves in units of 1 pixel in the surrounding area, which is the basis for deriving the filter coefficients.
  • the extrapolation filter can move in units of K pixels to collect input pixels and output pixels of the prediction mode within the screen based on the extrapolation filter.
  • K is a positive integer greater than or equal to 2.
  • K can be determined based on the sampling ratio of the current block. For example, if sampling is performed in a predetermined pixel unit according to the sampling ratio of the current block, K can be equal to the predetermined pixel unit on which sampling is performed.
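  • a minimal Python sketch of the collection-and-solve procedure follows; the choice of the bottom-right pixel of each window as the output pixel, the rectangular peripheral buffer, and the regularization term are assumptions made only to keep the sketch self-contained.

```python
import numpy as np

def derive_eip_coefficients(recon_area, f_h=4, f_w=4, step=1):
    """Derive extrapolation-filter coefficients from a reconstructed area (sketch).

    recon_area: 2D numpy array of reconstructed pixels (the peripheral area).
    f_h, f_w:   extrapolation filter height and width (e.g. 4x4, 2x8, 8x2).
    step:       sliding step K in pixels (1 by default; larger when sampling is used).
    The bottom-right pixel of each f_h x f_w window is taken as the output pixel
    and the remaining f_h*f_w - 1 pixels as the input pixels (assumption).
    """
    inputs, outputs = [], []
    h, w = recon_area.shape
    for y in range(0, h - f_h + 1, step):
        for x in range(0, w - f_w + 1, step):
            window = recon_area[y:y + f_h, x:x + f_w].astype(np.float64).ravel()
            inputs.append(window[:-1])
            outputs.append(window[-1])
    X = np.asarray(inputs)
    t = np.asarray(outputs)
    R = X.T @ X                                  # auto-correlation matrix
    p = X.T @ t                                  # cross-correlation vector
    # Lightly regularized solve of the normal equations R c = p.
    c = np.linalg.solve(R + 1e-4 * np.eye(R.shape[0]), p)
    return c
```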
  • FIG. 8 is a diagram for explaining an intra-screen prediction method based on an extrapolation filter according to one embodiment of the present invention.
  • in the intra-screen prediction method based on the extrapolation filter, prediction can be performed sequentially in a diagonal order from the pixel located at the upper left to the pixel located at the lower right of the current block (Current block, 800).
  • the pixel in the white area means the pixel of the current block for which prediction is performed, i.e., the current pixel (Current pixel, 802), and the pixel in the black area means the reference pixel (Reference pixel, 803).
  • the reference pixel is a pixel at a predetermined position adjacent to the pixel of the current block for which prediction is performed among the pixels included in the current block and the pixels included in the neighboring reconstructed area (Neighboring reconstructed area, 801).
  • the pixels of the current block can be predicted based on the reference pixels (803) and the filter coefficients of the extrapolation filter.
  • the predicted value of the current block pixel can be derived according to mathematical expression 1.
  • pred (x, y) refers to the pixel of the current block where prediction is performed.
  • (x, y) refers to the position of the pixel where prediction is performed within the current block.
  • t(x - offsetX_i, y - offsetY_i) means the value of the reference pixel that is the basis of the prediction.
  • (x - offsetX_i, y - offsetY_i) means the position of the reference pixel, and offsetX_i and offsetY_i mean the position offsets along the x direction and the y direction, respectively, with respect to the pixel of the current block where the prediction is performed.
  • c_i denotes the filter coefficient of the extrapolation filter. Specifically, c_i denotes the (i+1)th filter coefficient derived from the extrapolation filter.
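  • putting these symbols together, mathematical expression 1 presumably takes the following form (a reconstruction from the definitions above, since the expression itself is not reproduced in this text):

$$
\mathrm{pred}(x, y) = \sum_{i} c_i \cdot t\!\left(x - \mathrm{offsetX}_i,\; y - \mathrm{offsetY}_i\right)
$$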
  • the filter coefficients of the extrapolation filter can be derived as described in Figs. 4 to 7.
  • the reference pixels that serve as the basis for prediction within the screen are pixels included in a 4X4 sized area adjacent to the pixels of the current block, but this is an example, and the reference pixels may be pixels included in any area adjacent to the current block.
  • the size and shape of the area including the reference pixels may be the same as the size and shape of the extrapolation filter.
  • prediction is performed sequentially in a diagonal order from the pixel located at the upper left to the pixel located at the lower right, but this is only an example, and prediction can be performed sequentially in a predetermined order.
  • prediction can be performed sequentially in a vertical order from the pixel located at the lower left to the pixel located at the upper right.
  • prediction can be performed sequentially in a horizontal order from the pixel located at the upper right to the pixel located at the lower left.
  • the current pixel is predicted based on 15 reference pixels, but this is only one example, and prediction can be performed based on N reference pixels, where N is any positive integer.
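  • the sequential prediction loop can be sketched as follows (illustrative Python; a simple raster-order traversal is used instead of the diagonal order of FIG. 8, which the text above notes is only one example of a predetermined order, and the 4x4 reference window with the current pixel at its bottom-right corner is likewise an assumption):

```python
import numpy as np

def eip_predict_block(recon, coeffs, top, left, bh, bw, f_h=4, f_w=4):
    """Predict a bh x bw block at (top, left) inside the buffer `recon` (sketch).

    For each current pixel, the f_h x f_w window whose bottom-right corner is the
    current pixel supplies the reference pixels (15 of them for a 4x4 window).
    Already-predicted pixels of the current block are written back into `recon`
    so that later pixels can use them as references, as described above.
    `recon` must already contain the reconstructed peripheral area, so
    top >= f_h - 1 and left >= f_w - 1 are assumed.
    """
    for y in range(top, top + bh):
        for x in range(left, left + bw):
            window = recon[y - f_h + 1:y + 1, x - f_w + 1:x + 1].ravel()
            refs = window[:-1]                          # exclude the current pixel
            recon[y, x] = float(np.dot(coeffs, refs))   # mathematical expression 1
    return recon[top:top + bh, left:left + bw].copy()
```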
  • FIG. 9 is a diagram for explaining an intra-screen prediction method based on an extrapolation filter applying weights according to one embodiment of the present invention.
  • the prediction accuracy of intra-screen prediction can be improved by applying weights during the prediction block generation process.
  • the weight can be determined based on the position, within the current block, of the pixel for which prediction is performed. For example, as the distance between that pixel and the neighboring reconstructed area increases, its prediction value is derived from reference pixels that are themselves predicted values of lower accuracy; to compensate for this, the weight can be determined based on the position of the pixel within the current block.
  • in the weighted extrapolation filter-based intra prediction, the current block can be divided into three areas. Specifically, in Fig. 9, the current block can be divided into a first area (First area, 910), a second area (Second area, 920), and a third area (Third area, 930). In addition, the weight can be determined depending on which of these areas contains the pixel of the current block for which prediction is performed.
  • weights applied to extrapolation filter coefficients may be determined differently for each area. Specifically, weights may be determined for pixels included in the first area using weighted values in the first area (Weighted values in the first area, 911), weights may be determined for pixels included in the second area using weighted values in the second area (Weighted values in the second area, 921), and weights may be determined for pixels included in the third area using weighted values in the third area (Weighted values in the third area, 931).
  • the numbers included in the weighted values of each area represent respective weights applied to each extrapolation filter coefficient. For example, the upper left number 3 of the weighted values (921) of the second area represents the weight applied to the first extrapolation filter coefficient.
  • the predicted value of a pixel of the current block can be derived as in mathematical expression 2 (see the reconstruction after the symbol definitions below).
  • pred(x, y) refers to the predicted pixel of the current block.
  • (x, y) refers to the position, within the current block, of the pixel for which prediction is performed.
  • t(x - offsetX_i, y - offsetY_i) means the value of the reference pixel on which the prediction is based.
  • (x - offsetX_i, y - offsetY_i) represents the position of that reference pixel.
  • offsetX_i and offsetY_i represent the position offsets in the x and y directions, respectively, relative to the pixel of the current block for which prediction is performed.
  • c_i means the filter coefficient of the extrapolation filter; that is, c_i means the (i+1)-th filter coefficient derived from the extrapolation filter.
  • w_i means the determined weight; that is, w_i means the weight applied to the filter coefficient c_i.
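  • mathematical expression 2 itself is not reproduced in this text; based on the symbol definitions above, it can plausibly be reconstructed as the weighted counterpart of expression 1:

```latex
\mathrm{pred}(x, y) = \sum_{i=0}^{N-1} w_i \cdot c_i \cdot t\left(x - \mathrm{offsetX}_i,\; y - \mathrm{offsetY}_i\right)
```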
  • for example, when prediction is performed based on the weighted values (911) of the first area, the weight w_0 applied to the first filter coefficient (c_0) derived from the extrapolation filter can be 3, and the weight w_7 applied to the eighth filter coefficient (c_7) can be 2.
  • the current block is divided into three areas, but this is only one example, and the current block can be divided into M areas, where M is a positive integer greater than or equal to 2.
  • the weight value of the extrapolation filter coefficient applied to the pixels of each region in Fig. 9 is an example, and an arbitrary weight value can be applied to each region.
  • information about the weight value of each region can be determined by the encoder and transmitted to the decoder.
  • the weight value of each region can be determined as a value preset in the encoder/decoder.
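  • purely as a sketch of how such per-region weights might be applied, the snippet below selects a weight table by region and evaluates the weighted sum of expression 2; the weight values, the region test, and the absence of any weight normalization are assumptions for illustration, not the specification's rule.

```python
import numpy as np

# Hypothetical per-region weight tables (one weight per filter coefficient),
# echoing Fig. 9; the actual values would be signalled or preset in the codec.
REGION_WEIGHTS = {
    "first":  np.array([3, 3, 3, 3, 2, 2, 2, 2], dtype=np.float64),
    "second": np.array([3, 2, 2, 2, 2, 1, 1, 1], dtype=np.float64),
    "third":  np.array([2, 2, 1, 1, 1, 1, 1, 1], dtype=np.float64),
}

def region_of(x, y, block_w, block_h):
    """Toy rule: split the block into three diagonal bands by normalized
    distance from the reconstructed area (illustrative only)."""
    d = x / block_w + y / block_h
    if d < 2 / 3:
        return "first"
    if d < 4 / 3:
        return "second"
    return "third"

def weighted_prediction(ref_values, coeffs, x, y, block_w, block_h):
    """pred(x, y) = sum_i w_i * c_i * t_i, cf. mathematical expression 2.
    Assumes at most eight filter coefficients, matching the Fig. 9 example."""
    c = np.asarray(coeffs, dtype=np.float64)
    t = np.asarray(ref_values, dtype=np.float64)
    w = REGION_WEIGHTS[region_of(x, y, block_w, block_h)][: len(c)]
    return float(np.sum(w * c * t))
```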
  • FIGS. 10 to 14 are diagrams for explaining an extrapolation filter-based intra prediction method in which sampling is performed, according to one embodiment of the present invention.
  • extrapolation filter-based intra prediction with sampling can reduce prediction complexity by performing prediction only for the pixels determined by sampling, instead of performing prediction for all pixels included in the current block.
  • FIG. 10 is a flowchart illustrating an extrapolation filter-based intra prediction method in which sampling is performed, according to one embodiment of the present invention.
  • a sampling rate for the current block can be determined. Specifically, sampling rates in the horizontal direction and vertical direction can be determined (S1000).
  • sampling ratios in the horizontal and vertical directions can be determined independently.
  • the sampling rate can be determined by the encoder and transmitted to the decoder.
  • the sampling rate can be determined as a value preset in the encoder/decoder.
  • the sampling rate can be determined by the decoder.
  • the filter coefficient of the extrapolation filter can be derived according to the determined sampling ratio (S1010).
  • prediction of pixels of the current block can be performed based on the derived filter coefficients (S1020).
  • the prediction block of the current block can be generated based on the interpolation of the pixels of the current block for which prediction was performed (S1030).
  • FIG. 11 is a diagram for explaining an intra-screen prediction method based on an extrapolation filter based on sampling performance according to one embodiment of the present invention.
  • sampling can be performed on the current block.
  • prediction according to the extrapolation filter can be performed only on the pixels of the current block selected through sampling. That is, the pixels of the current block for which prediction is performed according to the extrapolation filter can be determined by performing sampling on the current block.
  • in extrapolation filter-based intra prediction with sampling, the first pixel means a pixel of the current block, determined by sampling, for which prediction is performed according to the extrapolation filter.
  • sampling is performed on the current block (1100) based on the determined vertical direction sampling ratio and horizontal direction sampling ratio.
  • the vertical direction sampling ratio and the horizontal direction sampling ratio are both 2. That is, sampling is performed in units of 2 pixels in both the vertical direction and the horizontal direction.
  • black pixels among the pixels included in the current block indicate pixels (Selected sample, 1101) for which prediction is performed according to the extrapolation filter selected by performing sampling.
  • White pixels among the pixels included in the current block indicate pixels for which prediction is not performed according to the extrapolation filter.
  • pixels for which prediction is performed according to an extrapolation filter can be determined based on the sampling ratio of the current block.
  • since the sampling ratio of the current block is 2 in both the vertical and horizontal directions, prediction according to the extrapolation filter is performed only on the selected pixels.
  • the size of the current block is 8X8, and it can include 16 sub-blocks of 2X2 size. Therefore, when sampling is performed on the current block, the pixels of the current block for which prediction is performed according to the extrapolation filter can be determined as pixels at predetermined positions of each sub-block included in the current block. Referring to Fig. 11, prediction according to the extrapolation filter can be performed on pixels located at the upper left of each sub-block.
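  • the selection of these pixels can be sketched as follows; the function name and the convention that sub-blocks are ratio_h x ratio_v with the upper-left pixel selected follow the example of Fig. 11 and are otherwise assumptions.

```python
def sampled_positions(block_w, block_h, ratio_h, ratio_v):
    """Positions (x, y) of the pixels predicted directly by the extrapolation
    filter: the upper-left pixel of every ratio_h x ratio_v sub-block."""
    return [(x, y)
            for y in range(0, block_h, ratio_v)
            for x in range(0, block_w, ratio_h)]

# Example from Fig. 11: an 8x8 block with both sampling ratios equal to 2
# yields 16 selected pixels, one per 2x2 sub-block.
assert len(sampled_positions(8, 8, 2, 2)) == 16
```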
  • the pixels on which the prediction according to the extrapolation filter is performed are determined based on the sampling, so the sampling rate can be considered in deriving the filter coefficients of the extrapolation filter.
  • the shape of the extrapolation filter can be a 4X4 shape (1102), an 8X2 shape (1103), or a 2X8 shape (1104).
  • the hatched area of each shape of the extrapolation filter can mean an input pixel (input of EIP, 1105) of the extrapolation filter-based intra prediction mode,
  • and the grid-patterned area can mean an output pixel (output of EIP, 1106) of the extrapolation filter-based intra prediction mode.
  • the extrapolation filter can move by 1 pixel in the neighboring reconstructed area (1107), which is the basis for deriving the filter coefficients, to collect input pixels and output pixels of the extrapolation filter-based intra prediction mode.
  • the positions of the input pixels and output pixels can be determined based on the sampling ratio of the current block. For example, referring to Fig. 11, if the shape of the extrapolation filter is determined as the 4X4 shape (1102), the extrapolation filter moves in units of 1 pixel, and only the pixels in the hatched area, not all pixels covered by the extrapolation filter, are collected as input pixels of the extrapolation filter-based intra prediction mode, while the pixels in the grid-patterned area are collected as output pixels of the extrapolation filter-based intra prediction mode.
  • the extrapolation filter coefficients can be derived based on the collected input pixels and output pixels.
  • the extrapolation filter coefficients can be derived by calculating the auto-correlation matrix and the cross-correlation vector based on the collected input pixels and output pixels.
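  • a minimal sketch of this derivation, assuming an ordinary least-squares formulation (the text names only the auto-correlation matrix and cross-correlation vector; the optional regularization term is an added assumption for numerical stability):

```python
import numpy as np

def derive_eip_coefficients(inputs, outputs, reg=0.0):
    """Derive extrapolation filter coefficients from collected samples.

    inputs  : (M, N) matrix, one row of N input pixels per collected position
    outputs : (M,) vector of the corresponding output pixels
    Solves R c = p with R = A^T A (auto-correlation matrix) and
    p = A^T b (cross-correlation vector); 'reg' adds a small diagonal term.
    """
    a = np.asarray(inputs, dtype=np.float64)
    b = np.asarray(outputs, dtype=np.float64)
    autocorr = a.T @ a + reg * np.eye(a.shape[1])    # auto-correlation matrix
    crosscorr = a.T @ b                              # cross-correlation vector
    return np.linalg.solve(autocorr, crosscorr)
```

  • with this formulation, the solution of R c = p minimizes the squared error between the filter output and the collected output pixels over the neighboring reconstructed area.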
  • the pixel of the current block determined by performing sampling can be predicted based on the filter coefficient of the extrapolation filter and the reference pixel.
  • the reference pixel can mean a pixel at a predetermined position adjacent to the pixel of the current block for which prediction is performed according to the extrapolation filter among the pixels of the current block and the surrounding area.
  • the predicted value of a pixel of the current block can be derived according to mathematical expression 3 (see the reconstruction after the symbol definitions below).
  • t(x - offsetX_i, y - offsetY_i) means the value of the reference pixel on which the prediction is based.
  • (x - offsetX_i, y - offsetY_i) means the position of that reference pixel.
  • offsetX_i and offsetY_i mean the position offsets along the x direction and the y direction, respectively, relative to the pixel of the current block for which prediction is performed.
  • c_i denotes the filter coefficient of the extrapolation filter; specifically, c_i denotes the (i+1)-th filter coefficient derived from the extrapolation filter.
  • K represents the number of input pixels.
  • K is determined based on the sampling rate.
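  • mathematical expression 3 itself is not reproduced in this text; based on the definitions above, it can plausibly be reconstructed as the sum over the K input pixels:

```latex
\mathrm{pred}(x, y) = \sum_{i=0}^{K-1} c_i \cdot t\left(x - \mathrm{offsetX}_i,\; y - \mathrm{offsetY}_i\right)
```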
  • prediction is performed according to an extrapolation filter on a pixel located at the upper left of a 2X2 sized sub-block included in the current block.
  • this is only an example, and prediction according to an extrapolation filter can be performed on a pixel at a predetermined location in a sub-block of a predetermined size.
  • the vertical sampling ratio and the horizontal sampling ratio are both 2, but this is an example, and the vertical sampling ratio can be S and the horizontal sampling ratio can be T.
  • S and T are positive integers greater than or equal to 1.
  • as another example, the pixels on which prediction according to the extrapolation filter is performed can be determined in units of 4 pixels; that is, sampling can start from the upper left pixel of the first 4X4 sized sub-block included in the current block.
  • the sampling ratio in the vertical direction and the sampling ratio in the horizontal direction can be determined independently. For example, if the sampling ratio in the vertical direction is determined, the sampling ratio in the horizontal direction can be determined as any positive integer greater than or equal to 1 regardless of the sampling ratio in the vertical direction. As another example, if the sampling ratio in the horizontal direction is determined, the sampling ratio in the vertical direction can be determined as any positive integer greater than or equal to 1 regardless of the sampling ratio in the horizontal direction.
  • sampling starts from the upper left pixel of the first sub-block, but this is only an example, and sampling can start from any pixel of the first sub-block.
  • the positions of input pixels and output pixels in the screen prediction mode based on the extrapolation filter in Fig. 11 are one example, and the input pixels and output pixels can be pixels at arbitrary positions.
  • the size and shape of the surrounding area in Fig. 11 are just one example, and the surrounding area that serves as the basis for deriving the filter coefficients of the extrapolation filter can be an area of any size and shape.
  • the shape of the extrapolation filter in Fig. 11 is determined as one of the 4X4 shape (1102), the 8X2 shape (1103), and the 2X8 shape (1104).
  • the number of input pixels collected based on the sampling rate is 3.
  • when the shape of the extrapolation filter is one of the 4X4 shape, the 8X2 shape, and the 2X8 shape, the number of input pixels is small, and the prediction accuracy may be low. Therefore, in order to increase the number of input pixels of the extrapolation filter, the size of the extrapolation filter may be expanded as in Fig. 12.
  • FIG. 12 is a diagram for explaining an extrapolation filter of extended size in an extrapolation filter-based intra prediction method in which sampling is performed, according to one embodiment of the present invention.
  • when sampling is performed for the current block in extrapolation filter-based intra prediction, the shape of the extrapolation filter can be any one of the 5X5 shape (1200), the 8X3 shape (1201), and the 3X8 shape (1202).
  • the black area of each shape of the extrapolation filter indicates the input pixel (input of EIP, 1203) of the intra-screen prediction mode based on the extrapolation filter, and the hatched area indicates the output pixel (output of EIP, 1204) of the intra-screen prediction mode based on the extrapolation filter. Therefore, the number of input pixels can be 8, and the prediction accuracy can be increased.
  • the shape of the extrapolation filter of the expanded size in Fig. 12 is of three types, but this is an example, and the shape of the extrapolation filter can be determined in any shape based on the sampling rate of the current block. For example, if the sampling rate of the current block increases, the size of the extrapolation filter can increase, and if the sampling rate of the current block decreases, the size of the extrapolation filter can decrease.
  • the predicted values of the pixels of the current block that are not selected through sampling are derived through interpolation.
  • the prediction value of a pixel included in the current block that is not predicted by the extrapolation filter can be derived by interpolating the pixels on which prediction was performed.
  • FIGS. 13 and 14 are diagrams for explaining interpolation in extrapolation filter-based intra prediction in which sampling is performed, according to one embodiment of the present invention.
  • referring to Fig. 13, sampling is performed on the current block (1300) in extrapolation filter-based intra prediction with sampling.
  • the black area included in the current block (1300) indicates a down sampled prediction block (1301) generated by prediction according to an extrapolation filter.
  • the white area indicates pixels of the current block on which prediction is not performed according to an extrapolation filter.
  • vertical interpolation (1302) can be performed on the pixels of the current block where sampling is performed. Specifically, in the current block where sampling is performed, the prediction values of the pixels for which prediction according to the extrapolation filter is not performed can be derived based on the prediction values of the vertical pixels for which prediction according to the extrapolation filter is performed. Referring to Fig. 13, the prediction values of the pixels for which prediction according to the extrapolation filter is not performed are derived through vertical interpolation.
  • horizontal interpolation (Horizontal interpolation, 1303) can be performed. Specifically, prediction values of pixels for which prediction was not performed can be derived based on prediction values of horizontal direction pixels for which prediction was performed according to an extrapolation filter and prediction values of horizontal direction pixels derived through vertical interpolation. Referring to Fig. 13, prediction values of pixels for which prediction was not performed according to an extrapolation filter and pixels for which prediction values were not derived through vertical interpolation are derived through horizontal interpolation, thereby generating a prediction block (1304) of the current block.
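  • a minimal sketch of this vertical-then-horizontal upsampling, assuming simple linear interpolation and constant extension beyond the last sampled row/column (the text does not specify the boundary rule):

```python
import numpy as np

def upsample_prediction(sparse, ratio_v, ratio_h):
    """Fill the non-sampled positions of a down-sampled prediction block by
    vertical interpolation followed by horizontal interpolation (cf. Fig. 13).

    sparse : (H, W) array with valid predicted values at rows/columns that are
             multiples of ratio_v / ratio_h; other entries are ignored.
    """
    pred = np.array(sparse, dtype=np.float64)
    h, w = pred.shape
    ys = np.arange(0, h, ratio_v)            # sampled rows
    xs = np.arange(0, w, ratio_h)            # sampled columns

    # Vertical interpolation within the sampled columns.
    for x in xs:
        pred[:, x] = np.interp(np.arange(h), ys, pred[ys, x])

    # Horizontal interpolation for every row, using the completed columns.
    for y in range(h):
        pred[y, :] = np.interp(np.arange(w), xs, pred[y, xs])

    return pred
```

  • swapping the two interpolation loops gives the horizontal-first variant described for Fig. 14.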
  • referring to Fig. 14, sampling is performed on the current block (1400) in extrapolation filter-based intra prediction with sampling.
  • the black area included in the current block (1400) means the down sampled prediction block (1401) generated by prediction according to the extrapolation filter.
  • the white area is the pixels of the current block where prediction is not performed according to the extrapolation filter.
  • horizontal interpolation can be performed on the pixels of the current block where sampling is performed.
  • the prediction values of the pixels for which prediction according to the extrapolation filter is not performed can be derived based on the prediction values of the horizontal pixels for which prediction according to the extrapolation filter is performed in the current block where sampling is performed.
  • the prediction values of the pixels for which prediction according to the extrapolation filter is not performed are derived through horizontal interpolation.
  • vertical interpolation can be performed (Vertical interpolation, 1403).
  • prediction values of pixels for which prediction was not performed can be derived based on prediction values of vertical direction pixels for which prediction was performed according to an extrapolation filter and prediction values of vertical direction pixels derived through horizontal interpolation.
  • prediction values of pixels for which prediction was not performed according to an extrapolation filter and pixels for which prediction values were not derived through horizontal interpolation are derived through vertical interpolation, thereby generating a prediction block (1404) of the current block.
  • interpolation is performed using a linear interpolation method, but this is only an example, and interpolation may be performed using any interpolation method.
  • pixels of the current block for which prediction is performed according to an extrapolation filter may be interpolated using a nonlinear interpolation method.
  • the nonlinear interpolation method may include a polynomial interpolation method, a spline interpolation method, a cubic interpolation method, etc.
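  • for illustration only, a nonlinear variant of the row/column filling step could use a cubic spline, e.g. via SciPy; the library choice and function names are not part of the specification.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def upsample_row_cubic(sample_values, sample_xs, width):
    """Rebuild one row of the prediction block from its sampled pixels with a
    cubic spline instead of linear interpolation."""
    spline = CubicSpline(sample_xs, sample_values)
    return spline(np.arange(width))

# Example: four sampled values at columns 0, 2, 4 and 6 of an 8-pixel row.
print(upsample_row_cubic(np.array([10.0, 12.0, 11.0, 15.0]),
                         np.array([0, 2, 4, 6]), 8))
```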
  • pixels of the current block for which prediction is performed according to an extrapolation filter may be interpolated using a smoothing filter.
  • the filter coefficients of the smoothing filter may be derived based on a Gaussian function, a mean filter, a median filter, etc.
  • the interpolation method can be determined based on the size of the current block; that is, the interpolation method can be determined based on the result of comparing the width and height of the current block with a predetermined threshold value.
  • depending on the result of this comparison, the interpolation method may be determined as a nonlinear interpolation method or as a linear interpolation method.
  • the pixels of the current block for which prediction was performed according to the extrapolation filter may be interpolated using a smoothing filter.
  • a predetermined threshold value that serves as the basis for determining an interpolation method can be determined by the encoder and transmitted to the decoder.
  • the predetermined threshold value can be predefined in the encoder/decoder.
  • both the vertical and horizontal sampling ratios are 2, but this is only one example, and the vertical sampling ratio may be S and the horizontal sampling ratio may be T.
  • S and T are positive integers greater than or equal to 1.
  • the vertical sampling ratio and the horizontal sampling ratio may be determined independently.
  • Fig. 15 is a flowchart illustrating an image decoding method according to one embodiment of the present invention. Specifically, Fig. 15 is a flowchart for explaining an intra-screen prediction method based on an extrapolation filter in an image decoding method. The intra-screen prediction method of Fig. 15 can be performed by a decoding device.
  • the image decoding device can determine that the current block is predicted by an extrapolation filter-based intra prediction mode (S1500).
  • the filter coefficients of the extrapolation filter for the intra prediction of the current block can be derived (S1510).
  • the step of deriving the filter coefficients of the extrapolation filter for the intra prediction of the current block based on pixels in the surrounding area of the current block may include collecting input pixels and output pixels in units of predetermined sub-blocks in the surrounding area, and deriving the filter coefficients using the correlation between the input pixels and the output pixels.
  • the position of the input pixel in the predetermined sub-block unit is determined based on the sampling rate of the current block, and the size of the predetermined sub-block unit can be determined according to the size of the extrapolation filter.
  • the image decoding device can sequentially predict the pixels of the current block in a predetermined order using an extrapolation filter (S1520).
  • the pixel of the current block can be predicted by applying a weight to the product of the value of the reference pixel and the filter coefficient of the extrapolation filter.
  • the weight may be determined based on the location of the pixel of the current block within the current block.
  • the image decoding device can divide the current block into a predetermined number of regions, and the weight can be determined based on a region among the divided regions that contains pixels of the current block.
  • prediction according to the extrapolation filter can be performed on the first pixel of the current block determined by sampling of the current block.
  • the first pixel of the current block may be a pixel at a predetermined position of a sub-block of a predetermined size within the current block.
  • the first pixel of the current block may be a pixel located at the upper left of a 2X2 sized sub-block within the current block.
  • a second pixel among the pixels included in the current block can be predicted by interpolating a first pixel of the current block, and the second pixel can be a pixel other than the first pixel among the pixels included in the current block.
  • the prediction value of the second pixel can be derived based on the horizontal interpolation and vertical interpolation of the first pixel.
  • the interpolation method for the second pixel prediction is determined as one of a linear interpolation method and a nonlinear interpolation method, and the nonlinear interpolation method may include a polynomial interpolation method, a spline interpolation method, and a cubic interpolation method.
  • the interpolation method can be determined based on the size of the current block.
  • the image decoding device can determine a sampling rate of the current block, and the first pixel of the current block can be determined according to sampling of the current block based on the sampling rate.
  • pixels of the current block can be predicted based on reference pixels and filter coefficients of the extrapolation filter.
  • the reference pixels may be pixels at predetermined positions adjacent to pixels of the current block among pixels of the current block and the surrounding area.
  • the pixel of the current block can be predicted by applying a weight to the product of the value of the reference pixel and the filter coefficient of the extrapolation filter.
  • the weight may be determined based on the location of the pixel of the current block within the current block.
  • the method further includes a step of dividing the current block into a predetermined number of regions, and the weight may be determined based on a region among the divided regions that contains pixels of the current block.
  • the pixels of the current block predicted using the extrapolation filter can be determined by performing sampling on the current block.
  • the method further includes a step of determining a sampling ratio of the current block, and the pixels of the current block predicted using the extrapolation filter can be determined by performing sampling on the current block based on the sampling ratio.
  • the vertical sampling ratio of the current block and the horizontal sampling ratio of the current block can be independently determined.
  • the pixel of the current block predicted using the extrapolation filter may be a pixel of a predetermined position of a sub-block of a predetermined size within the current block.
  • the pixel of the current block predicted using the extrapolation filter can be located at the upper left of a 2X2 sized sub-block within the current block.
  • the prediction value of a pixel among the pixels included in the current block that is not predicted according to the extrapolation filter can be derived by interpolating the pixels of the current block on which the prediction was performed.
  • the predicted values of pixels not predicted by the extrapolation filter can be derived based on horizontal interpolation and vertical interpolation.
  • the interpolation method of the pixels of the current block for which the above prediction is performed can be determined as one of a linear interpolation method, a nonlinear interpolation method, a spline interpolation method, and a cubic interpolation method.
  • the interpolation method can be determined based on the size of the current block.
  • the interpolation method may be determined as a nonlinear interpolation method.
  • the shape of the extrapolation filter can be determined based on the sampling ratio.
  • the shape of the extrapolation filter can be any one of a 4X4 shape, a 2X8 shape, and an 8X2 shape.
  • a bitstream can be generated by an image encoding method including these steps.
  • the bitstream can be stored in a non-transitory computer-readable recording medium, and can also be transmitted (or streamed).
  • FIG. 16 is a drawing exemplarily showing a content streaming system to which an embodiment according to the present invention can be applied.
  • a content streaming system to which an embodiment of the present invention is applied may largely include an encoding server, a streaming server, a web server, a media storage, a user device, and a multimedia input device.
  • the encoding server compresses content input from multimedia input devices such as smartphones, cameras, CCTVs, etc. into digital data to generate a bitstream and transmits it to the streaming server.
  • when multimedia input devices such as smartphones, cameras, CCTVs, etc. directly generate a bitstream, the encoding server may be omitted.
  • the above bitstream can be generated by an image encoding method and/or an image encoding device to which an embodiment of the present invention is applied, and the streaming server can temporarily store the bitstream during the process of transmitting or receiving the bitstream.
  • the above streaming server transmits multimedia data to a user device based on a user request via a web server, and the web server can act as an intermediary that informs the user of any available services.
  • when a user requests a desired service from the web server, the web server forwards the request to the streaming server, and the streaming server can transmit multimedia data to the user.
  • the content streaming system may include a separate control server, and in this case, the control server may perform a role of controlling commands/responses between each device within the content streaming system.
  • the above streaming server can receive content from a media storage and/or an encoding server. For example, when receiving content from the encoding server, the content can be received in real time. In this case, in order to provide a smooth streaming service, the streaming server can store the bitstream for a certain period of time.
  • Examples of the user devices may include mobile phones, smart phones, laptop computers, digital broadcasting terminals, personal digital assistants (PDAs), portable multimedia players (PMPs), navigation devices, slate PCs, tablet PCs, ultrabooks, wearable devices (e.g., smartwatches, smart glasses, HMDs (head mounted displays)), digital TVs, desktop computers, digital signage, etc.
  • Each server within the above content streaming system can be operated as a distributed server, in which case data received from each server can be distributedly processed.
  • an image can be encoded/decoded using at least one of the above embodiments or a combination thereof.
  • the order in which the above embodiments are applied may be different in the encoding device and the decoding device. Alternatively, the order in which the above embodiments are applied may be the same in the encoding device and the decoding device.
  • the above embodiments can be performed for each of the luminance and chrominance signals, or the above embodiments can be performed identically for the luminance and chrominance signals.
  • the methods are described based on the flowchart as a series of steps or units, but the present invention is not limited to the order of the steps, and some steps may occur in a different order or simultaneously with other steps described above.
  • the steps shown in the flowchart are not exclusive, and other steps may be included, or one or more steps in the flowchart may be deleted without affecting the scope of the present invention.
  • the above embodiments may be implemented in the form of program commands that can be executed through various computer components and recorded on a computer-readable recording medium.
  • the computer-readable recording medium may include program commands, data files, data structures, etc., alone or in combination.
  • the program commands recorded on the computer-readable recording medium may be those specifically designed and configured for the present invention or may be those known to and available to those skilled in the art of computer software.
  • a bitstream generated by an encoding method according to the above embodiment can be stored in a non-transitory computer-readable recording medium.
  • the bitstream stored in the non-transitory computer-readable recording medium can be decoded by a decoding method according to the above embodiment.
  • examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tapes, optical recording media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices specifically configured to store and execute program instructions such as ROMs, RAMs, and flash memories.
  • Examples of program instructions include not only machine language codes generated by a compiler, but also high-level language codes that can be executed by a computer using an interpreter, etc.
  • the hardware devices may be configured to operate as one or more software modules to perform the processing according to the present invention, and vice versa.
  • the present invention can be used in a device for encoding/decoding an image and a recording medium storing a bitstream.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present disclosure relates to an image encoding/decoding method and device, a recording medium storing a bitstream, and a transmission method. The image decoding method comprises the steps of: determining that a current block is predicted by an extrapolation filter-based intra prediction mode; deriving coefficients of an extrapolation filter for intra prediction of the current block on the basis of pixels in a neighboring area of the current block; and sequentially predicting pixels of the current block in a predetermined order using the extrapolation filter, wherein the pixels of the current block are predicted on the basis of reference pixels and the coefficients of the extrapolation filter, and the reference pixels may be pixels at predetermined positions adjacent to the pixels of the current block among the pixels of the current block and the neighboring area.
PCT/KR2024/015662 2023-10-16 2024-10-16 Procédé et dispositif de codage/décodage d'image, et support d'enregistrement stockant un flux binaire Pending WO2025084770A1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
KR20230137437 2023-10-16
KR10-2023-0137437 2023-10-16
KR20240082096 2024-06-24
KR10-2024-0082096 2024-06-24
KR10-2024-0141037 2024-10-16
KR1020240141037A KR20250054742A (ko) 2023-10-16 2024-10-16 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체

Publications (1)

Publication Number Publication Date
WO2025084770A1 true WO2025084770A1 (fr) 2025-04-24

Family

ID=95448209

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2024/015662 Pending WO2025084770A1 (fr) 2023-10-16 2024-10-16 Procédé et dispositif de codage/décodage d'image, et support d'enregistrement stockant un flux binaire

Country Status (1)

Country Link
WO (1) WO2025084770A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101675538B1 (ko) * 2011-06-28 2016-11-11 Samsung Electronics Co., Ltd. Method and apparatus for interpolating an image using an asymmetric interpolation filter
KR101840025B1 (ko) * 2012-04-26 2018-03-20 Sony Corporation Filtering of prediction units according to intra prediction direction
KR20200117031A (ko) * 2018-02-14 2020-10-13 Huawei Technologies Co., Ltd. Adaptive interpolation filter
KR20210006305A (ko) * 2019-07-08 2021-01-18 Hyundai Motor Company Method and apparatus for intra prediction coding of video data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
L. XU (OPPO), Y. YU (OPPO), H. YU (OPPO), J. GAN (OPPO), D. WANG (OPPO): "EE2-2.7: An extrapolation filter-based intra prediction mode", 32. JVET MEETING; 20231013 - 20231020; HANNOVER; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 13 October 2023 (2023-10-13), XP030312107 *

Similar Documents

Publication Publication Date Title
WO2019235822A1 (fr) Procédé et dispositif de traitement de signal vidéo à l'aide de prédiction de mouvement affine
WO2020180119A1 (fr) Procédé de décodage d'image fondé sur une prédiction de cclm et dispositif associé
WO2019194463A1 (fr) Procédé de traitement d'image et appareil associé
WO2024043745A1 (fr) Procédé et appareil de codage/décodage d'image basé sur un mode d'intra-prédiction utilisant une ligne de référence multiple (mrl), et support d'enregistrement pour stocker un flux binaire
WO2023204624A1 (fr) Procédé et dispositif de codage/décodage d'image sur la base d'une prédiction de modèle inter-composantes convolutif (cccm), et support d'enregistrement pour stocker un flux binaire
WO2025084770A1 (fr) Procédé et dispositif de codage/décodage d'image, et support d'enregistrement stockant un flux binaire
WO2021137577A1 (fr) Procédé et appareil de codage/décodage d'image en vue de la réalisation d'une prédiction sur la base d'un type de mode de prédiction reconfiguré d'un nœud terminal, et procédé de transmission de flux binaire
WO2025135643A1 (fr) Procédé et appareil de codage/décodage d'image et support d'enregistrement dans lequel est stocké un flux binaire
WO2024191196A1 (fr) Procédé et appareil de codage/décodage d'image et support d'enregistrement dans lequel est stocké un flux binaire
WO2025018679A1 (fr) Procédé et dispositif de codage/décodage d'image, et support d'enregistrement stockant un flux binaire
WO2025178381A1 (fr) Procédé et dispositif de codage/décodage d'image, et support d'enregistrement sur lequel un flux binaire est stocké
WO2025029091A1 (fr) Procédé et dispositif de codage/décodage d'image, et support d'enregistrement sur lequel un flux binaire est stocké
WO2025042121A1 (fr) Procédé et dispositif de codage/décodage d'image, et support d'enregistrement sur lequel un flux binaire est stocké
WO2019199152A1 (fr) Procédé et dispositif de traitement de signal vidéo par prédiction affine
WO2025048441A1 (fr) Procédé et dispositif de codage/décodage d'image, et support d'enregistrement stockant un flux binaire
WO2025105846A1 (fr) Procédé et dispositif de codage et de décodage d'image, et support d'enregistrement stockant un flux binaire
WO2025178438A1 (fr) Procédé et dispositif de codage et de décodage d'image et support d'enregistrement dans lequel est stocké un flux binaire
WO2024248598A1 (fr) Procédé et dispositif de codage/décodage d'image, et support d'enregistrement stockant un flux binaire
WO2023171988A1 (fr) Procédé et appareil de codage/décodage d'image, et support d'enregistrement stockant un train de bits
WO2025206743A1 (fr) Procédé et dispositif de codage/décodage d'image, et support d'enregistrement sur lequel un flux binaire est enregistré
WO2024191221A1 (fr) Procédé et dispositif de codage/décodage d'image, et support d'enregistrement stockant un flux binaire
WO2024025370A1 (fr) Procédé de codage/décodage d'image, dispositif, et support d'enregistrement dans lequel est stocké un flux binaire
WO2024205266A1 (fr) Procédé et dispositif de codage/décodage d'image et support d'enregistrement stockant un flux binaire
WO2024080849A1 (fr) Procédé et appareil de codage/décodage d'images, et support d'enregistrement sur lequel a été stocké un flux binaire
WO2024144118A1 (fr) Procédé et dispositif d'encodage/de décodage d'images, support d'enregistrement stockant des flux binaires

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24880107

Country of ref document: EP

Kind code of ref document: A1