
WO2024016983A1 - Method and apparatus for adaptive loop filter with geometric transform for video coding - Google Patents


Info

Publication number
WO2024016983A1
WO2024016983A1 (PCT application PCT/CN2023/103572)
Authority
WO
WIPO (PCT)
Prior art keywords
sample
difference
current
alf
filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2023/103572
Other languages
French (fr)
Inventor
Yu-Cheng Lin
Yu-Ling Hsiao
Shih-Chun Chiu
Chih-Wei Hsu
Tzu-Der Chuang
Ching-Yeh Chen
Yu-Wen Huang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MediaTek Inc
Original Assignee
MediaTek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MediaTek Inc filed Critical MediaTek Inc
Priority to TW112126396A priority Critical patent/TW202406337A/en
Publication of WO2024016983A1 publication Critical patent/WO2024016983A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: ... using adaptive coding
    • H04N 19/102: ... characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/117: Filters, e.g. for pre-processing or post-processing
    • H04N 19/134: ... characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/154: Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H04N 19/169: ... characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17: ... the unit being an image region, e.g. an object
    • H04N 19/176: ... the region being a block, e.g. a macroblock
    • H04N 19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N 19/82: ... involving filtering within a prediction loop

Definitions

  • The present invention is a non-provisional application of, and claims priority to, U.S. Provisional Patent Application No. 63/368,903, filed on July 20, 2022.
  • The U.S. Provisional Patent Application is hereby incorporated by reference in its entirety.
  • The present invention relates to video coding systems using ALF (Adaptive Loop Filter).
  • In particular, the present invention relates to ALF using a new geometric transform and the signalling thereof.
  • Versatile Video Coding (VVC) is a video coding standard developed by the Joint Video Experts Team (JVET) of the ITU-T VCEG and the ISO/IEC Moving Picture Experts Group (MPEG). The standard has been published as ISO/IEC 23090-3:2021, Information technology - Coded representation of immersive media - Part 3: Versatile video coding, published Feb. 2021.
  • VVC is developed based on its predecessor HEVC (High Efficiency Video Coding) by adding more coding tools to improve coding efficiency and also to handle various types of video sources, including 3-dimensional (3D) video signals.
  • Fig. 1A illustrates an exemplary adaptive Inter/Intra video coding system incorporating loop processing.
  • For Intra Prediction, the prediction data is derived based on previously coded video data in the current picture.
  • For Inter Prediction, Motion Estimation (ME) is performed at the encoder side and Motion Compensation (MC) is performed based on the result of ME to provide prediction data derived from other picture(s) and motion data.
  • Switch 114 selects Intra Prediction 110 or Inter-Prediction 112 and the selected prediction data is supplied to Adder 116 to form prediction errors, also called residues.
  • the prediction error is then processed by Transform (T) 118 followed by Quantization (Q) 120.
  • the transformed and quantized residues are then coded by Entropy Encoder 122 to be included in a video bitstream corresponding to the compressed video data.
  • the bitstream associated with the transform coefficients is then packed with side information such as motion and coding modes associated with Intra prediction and Inter prediction, and other information such as parameters associated with loop filters applied to underlying image area.
  • the side information associated with Intra Prediction 110, Inter prediction 112 and in-loop filter 130, are provided to Entropy Encoder 122 as shown in Fig. 1A. When an Inter-prediction mode is used, a reference picture or pictures have to be reconstructed at the encoder end as well.
  • the transformed and quantized residues are processed by Inverse Quantization (IQ) 124 and Inverse Transformation (IT) 126 to recover the residues.
  • the residues are then added back to prediction data 136 at Reconstruction (REC) 128 to reconstruct video data.
  • the reconstructed video data may be stored in Reference Picture Buffer 134 and used for prediction of other frames.
  • incoming video data undergoes a series of processing in the encoding system.
  • the reconstructed video data from REC 128 may be subject to various impairments due to a series of processing.
  • in-loop filter 130 is often applied to the reconstructed video data before the reconstructed video data are stored in the Reference Picture Buffer 134 in order to improve video quality.
  • For example, a deblocking filter (DF), Sample Adaptive Offset (SAO) and Adaptive Loop Filter (ALF) may be used.
  • The loop filter information may need to be incorporated in the bitstream so that a decoder can properly recover the required information. Therefore, loop filter information is also provided to Entropy Encoder 122 for incorporation into the bitstream.
  • Loop filter 130 is applied to the reconstructed video before the reconstructed samples are stored in the reference picture buffer 134.
  • The system in Fig. 1A is intended to illustrate an exemplary structure of a typical video encoder. It may correspond to the High Efficiency Video Coding (HEVC) system, VP8, VP9, H.264 or VVC.
  • The decoder can use similar or the same functional blocks as the encoder, except for Transform 118 and Quantization 120, since the decoder only needs Inverse Quantization 124 and Inverse Transform 126.
  • the decoder uses an Entropy Decoder 140 to decode the video bitstream into quantized transform coefficients and needed coding information (e.g. ILPF information, Intra prediction information and Inter prediction information) .
  • the Intra prediction 150 at the decoder side does not need to perform the mode search. Instead, the decoder only needs to generate Intra prediction according to Intra prediction information received from the Entropy Decoder 140.
  • the decoder only needs to perform motion compensation (MC 152) according to Inter prediction information received from the Entropy Decoder 140 without the need for motion estimation.
  • An input picture is partitioned into non-overlapped square block regions referred to as CTUs (Coding Tree Units), similar to HEVC.
  • Each CTU can be partitioned into one or multiple smaller size coding units (CUs) .
  • the resulting CU partitions can be in square or rectangular shapes.
  • VVC divides a CTU into prediction units (PUs) as a unit to apply prediction process, such as Inter prediction, Intra prediction, etc.
  • An Adaptive Loop Filter with a new geometric transform, and signalling associated with the new geometric transform, are disclosed for the emerging video coding development beyond VVC.
  • a method and apparatus for video coding using ALF are disclosed.
  • reconstructed pixels are received, wherein the reconstructed pixels comprise current reconstructed pixels in a current block.
  • a current to-be-filtered sample among the reconstructed pixels in the current block and a set of filter samples surrounding the current to-be-filtered sample are determined.
  • Difference measures for at least a partial set of filter samples are derived, wherein each of the difference measures is related to sample differences between a pair of respective filter samples and the current to-be-filtered sample, and the pair of respective filter samples are located symmetrically with respect to the current to-be-filtered sample.
  • At least a partial set of ALF coefficients are assigned to at least the partial set of filter samples according to the difference measures.
  • a filtered output sample is derived by applying an ALF with at least the partial set of ALF coefficients to the current to-be-filtered sample.
  • At least the partial set of ALF coefficients are assigned to at least the partial set of filter samples according to an ascending order of the difference measures. In another embodiment, at least the partial set of ALF coefficients are assigned to at least the partial set of filter samples according to a descending order of the difference measures.
  • In one embodiment, each of the difference measures comprises a first term related to a first sample difference between a first one of the pair of respective filter samples and the current to-be-filtered sample, and a second term related to a second sample difference between a second one of the pair of respective filter samples and the current to-be-filtered sample.
  • said each of the difference measures corresponds to a sum of the first sample difference and the second sample difference.
  • said each of the difference measures corresponds to a sum of absolute value of the first sample difference and absolute value of the second sample difference.
  • said each of the difference measures corresponds to an absolute value of a sum of the first sample difference and the second sample difference.
  • In yet another embodiment, said each of the difference measures corresponds to a sum of a clipped first sample difference and a clipped second sample difference. In yet another embodiment, said each of the difference measures corresponds to a sum of an absolute value of a clipped first sample difference and an absolute value of a clipped second sample difference. In yet another embodiment, said each of the difference measures corresponds to an absolute value of a sum of a clipped first sample difference and a clipped second sample difference.
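The difference-measure variants listed above can be sketched as follows. This is an illustrative sketch only: `clip3` and the `bound` parameter stand in for the clipping with decoded clip values c(k, l), and the function and key names are not from the disclosure.

```python
def clip3(lo, hi, x):
    # Clip3(lo, hi, x) as commonly defined in video coding specs
    return min(hi, max(lo, x))

def diff_measures(p0, p1, c, bound):
    """All difference-measure variants for one symmetric pair (p0, p1)
    around the current to-be-filtered sample c."""
    d0, d1 = p0 - c, p1 - c                 # first / second sample differences
    cd0 = clip3(-bound, bound, d0)          # clipped differences
    cd1 = clip3(-bound, bound, d1)
    return {
        "sum":             d0 + d1,
        "sum_of_abs":      abs(d0) + abs(d1),
        "abs_of_sum":      abs(d0 + d1),
        "clipped_sum":     cd0 + cd1,
        "clipped_sum_abs": abs(cd0) + abs(cd1),
        "abs_clipped_sum": abs(cd0 + cd1),
    }
```

Any one of these values could serve as the sorting key for the coefficient reordering described later.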
  • said at least the partial set of filter samples is the same as the set of filter samples. In another embodiment, said at least the partial set of filter samples is less than the set of filter samples.
  • reconstructed pixels are received, wherein the reconstructed pixels comprise current reconstructed pixels in a current block.
  • A target geometric transformation is determined from a set of geometric transformations according to an index.
  • a filtered block is derived by applying the target geometric transformation to the current reconstructed pixels. The filtered block is provided.
  • In one embodiment, the index is signalled or parsed per APS (Adaptation Parameter Set), filter set or class.
  • the index is inferred based on a target ALF classifier selected.
  • the target geometric transformation may correspond to a geometric transformation by gradient in response to the target ALF classifier selected being a gradient classifier.
  • the target geometric transformation may correspond to a geometric transformation by neighbouring reordering in response to the target ALF classifier selected being a band classifier.
  • Fig. 1A illustrates an exemplary adaptive Inter/Intra video coding system incorporating loop processing.
  • Fig. 1B illustrates a corresponding decoder for the encoder in Fig. 1A.
  • Fig. 2 illustrates the ALF filter shapes for the chroma (left) and luma (right) components.
  • Figs. 3A-D illustrate the subsampled Laplacian calculations for g_v (3A), g_h (3B), g_d1 (3C) and g_d2 (3D).
  • Fig. 4A illustrates the placement of CC-ALF with respect to other loop filters.
  • Fig. 4B illustrates a diamond shaped filter for the chroma samples.
  • Fig. 5 illustrates an example of neighbouring sample positions used to derive the neighbouring differences for ALF filter coefficient reordering according to an embodiment of the present invention.
  • Fig. 6A illustrates an example of rotating the ALF filter in Fig. 5 by 45 degrees clockwise.
  • Fig. 6B illustrates another example of rotating the ALF filter in Fig. 5 by 45 degrees clockwise.
  • Fig. 7 illustrates an example of rotating only part of ALF coefficients of the filter in Fig. 5.
  • Fig. 8 illustrates a flowchart of an exemplary video coding system that reorders ALF coefficients based on neighbouring differences according to an embodiment of the present invention.
  • Fig. 9 illustrates a flowchart of an exemplary video coding system that uses an index to select a target geometric transformation from a set of geometric transformations according to an embodiment of the present invention.
  • In VVC, an Adaptive Loop Filter (ALF) with block-based filter adaptation is applied.
  • The 7x7 diamond shape 220 is applied for the luma component and the 5x5 diamond shape 210 is applied for the chroma components.
  • For filter adaptation, each 4x4 block is categorized into one out of 25 classes.
  • The classification index C is derived based on its directionality D and a quantized value of activity Â, as C = 5D + Â.
  • Indices i and j refer to the coordinates of the upper-left sample within the 4x4 block, and R(i, j) indicates a reconstructed sample at coordinate (i, j).
  • The subsampled 1-D Laplacian calculation is applied to the vertical direction (Fig. 3A) and the horizontal direction (Fig. 3B).
  • The same subsampled positions are used for the gradient calculation of all directions (g_d1 in Fig. 3C and g_d2 in Fig. 3D).
  • To derive the directionality D, the maximum and minimum values of the gradients of the horizontal and vertical directions, and of the two diagonal directions, are set as: g_hv_max = max(g_h, g_v), g_hv_min = min(g_h, g_v), g_d_max = max(g_d1, g_d2), g_d_min = min(g_d1, g_d2).
  • Step 1: If both g_hv_max <= t1 * g_hv_min and g_d_max <= t1 * g_d_min are true, D is set to 0.
  • Step 2: If g_hv_max / g_hv_min > g_d_max / g_d_min, continue from Step 3; otherwise continue from Step 4.
  • Step 3: If g_hv_max > t2 * g_hv_min, D is set to 2; otherwise D is set to 1.
  • Step 4: If g_d_max > t2 * g_d_min, D is set to 4; otherwise D is set to 3.
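The directionality decision can be sketched as follows, assuming the thresholds t1 = 2 and t2 = 4.5 (the two threshold values the text gives later for the VVC-style derivation). The ratio comparison is done by cross-multiplication to avoid division by zero:

```python
def directionality(g_h, g_v, g_d1, g_d2, t1=2.0, t2=4.5):
    """Sketch of the block directionality D derived from the four
    gradient sums (horizontal, vertical, two diagonals)."""
    hv_max, hv_min = max(g_h, g_v), min(g_h, g_v)
    d_max, d_min = max(g_d1, g_d2), min(g_d1, g_d2)
    # Step 1: no dominant direction
    if hv_max <= t1 * hv_min and d_max <= t1 * d_min:
        return 0
    # Step 2: compare ratios hv_max/hv_min vs d_max/d_min (cross-multiplied)
    if hv_max * d_min > d_max * hv_min:
        # Step 3: horizontal/vertical dominant
        return 2 if hv_max > t2 * hv_min else 1
    # Step 4: diagonal dominant
    return 4 if d_max > t2 * d_min else 3
```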
  • The activity value A is calculated as the sum of the vertical and horizontal Laplacian values over the same window as the gradient calculation: A = Σ_k Σ_l (V_{k,l} + H_{k,l}).
  • A is further quantized to the range of 0 to 4, inclusively, and the quantized value is denoted as Â.
  • K is the size of the filter and 0 <= k, l <= K-1 are coefficient coordinates, such that location (0, 0) is at the upper-left corner and location (K-1, K-1) is at the lower-right corner.
  • the transformations are applied to the filter coefficients f (k, l) and to the clipping values c (k, l) depending on gradient values calculated for that block. The relationship between the transformation and the four gradients of the four directions are summarized in the following table.
  • Each sample R(i, j) within the CU is filtered, resulting in sample value R′(i, j) as shown below: R′(i, j) = R(i, j) + ((Σ_{(k,l)≠(0,0)} f(k, l) × K(R(i+k, j+l) − R(i, j), c(k, l)) + 64) >> 7)
  • f (k, l) denotes the decoded filter coefficients
  • K (x, y) is the clipping function
  • c (k, l) denotes the decoded clipping parameters.
  • The variables k and l vary between −L/2 and L/2, where L denotes the filter length.
  • The clipping function is K(x, y) = min(y, max(−y, x)), which corresponds to the function Clip3(−y, y, x).
  • The clipping operation introduces non-linearity to make ALF more efficient by reducing the impact of neighbour sample values that are too different from the current sample value.
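A minimal sketch of this clipped filtering of one sample: the footprint is represented as aligned lists of neighbour values, coefficients and clip values rather than the normative diamond shapes of Fig. 2, and the 7-bit right shift reflects the coefficient norm of 128 stated later in the text.

```python
def alf_clip(x, y):
    # K(x, y) = min(y, max(-y, x)), i.e. Clip3(-y, y, x)
    return min(y, max(-y, x))

def alf_filter_sample(center, neighbours, coeffs, clips):
    """Filter one reconstructed sample with clipped neighbour differences.
    neighbours, coeffs and clips are aligned over the filter footprint."""
    acc = 0
    for n, f, c in zip(neighbours, coeffs, clips):
        acc += f * alf_clip(n - center, c)
    # coefficients are normalized to 128, hence the rounding add and >> 7
    return center + ((acc + 64) >> 7)
```

Note how a large neighbour deviation is limited by the clip value before it can influence the filtered output.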
  • CC-ALF uses luma sample values to refine each chroma component by applying an adaptive, linear filter to the luma channel and then using the output of this filtering operation for chroma refinement.
  • Fig. 4A provides a system level diagram of the CC-ALF process with respect to the SAO, luma ALF and chroma ALF processes. As shown in Fig. 4A, each colour component (i.e., Y, Cb and Cr) is processed by its respective SAO (i.e., SAO Luma 410, SAO Cb 412 and SAO Cr 414) .
  • ALF Luma 420 is applied to the SAO-processed luma, and ALF Chroma 430 is applied to the SAO-processed Cb and Cr.
  • In addition, there is a cross-component term from luma to each chroma component (i.e., CC-ALF Cb 422 and CC-ALF Cr 424).
  • The outputs from the cross-component ALF are added (using adders 432 and 434, respectively) to the outputs from ALF Chroma 430.
  • Filtering in CC-ALF is accomplished by applying a linear, diamond-shaped filter (e.g. filters 440 and 442 in Fig. 4B) to the luma channel.
  • In Fig. 4B, a blank circle indicates a luma sample and a dot-filled circle indicates a chroma sample.
  • One filter is used for each chroma channel, and the operation is expressed as: ΔI_i(x, y) = Σ_{(x_0, y_0) ∈ S_i} I_Y(x_Y + x_0, y_Y + y_0) × c_i(x_0, y_0), where (x, y) is the location of chroma component i being refined, (x_Y, y_Y) is the luma location based on (x, y), S_i is the filter support area in the luma component, and c_i(x_0, y_0) represents the filter coefficients.
  • The chroma component i in the above equation may correspond to Cb or Cr.
  • the luma filter support is the region collocated with the current chroma sample after accounting for the spatial scaling factor between the luma and chroma planes.
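The per-sample CC-ALF refinement can be sketched as a weighted sum of luma taps around the collocated luma position. The tap offsets below are illustrative only, not the normative 3x4 diamond shape:

```python
def ccalf_delta(luma, yx, yy, taps):
    """Refinement term for one chroma sample.
    luma: 2-D list of SAO-filtered luma samples.
    (yx, yy): collocated luma position for the chroma sample.
    taps: list of ((dy, dx), coeff) pairs over the luma support area."""
    return sum(c * luma[yy + dy][yx + dx] for (dy, dx), c in taps)
```

Because the coefficients sum to zero (see below), the refinement vanishes on flat luma regions and responds only to local luma structure.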
  • CC-ALF filter coefficients are computed by minimizing the mean square error of each chroma channel with respect to the original chroma content.
  • the VTM (VVC Test Model) algorithm uses a coefficient derivation process similar to the one used for chroma ALF. Specifically, a correlation matrix is derived, and the coefficients are computed using a Cholesky decomposition solver in an attempt to minimize a mean square error metric.
  • a maximum of 8 CC-ALF filters can be designed and transmitted per picture. The resulting filters are then indicated for each of the two chroma channels on a CTU basis.
  • Additional characteristics of CC-ALF include:
  • The design uses a 3x4 diamond shape with 8 taps.
  • Each of the transmitted coefficients has a 6-bit dynamic range and is restricted to power-of-2 values.
  • The eighth filter coefficient is derived at the decoder such that the sum of the filter coefficients is equal to 0.
  • An APS may be referenced in the slice header.
  • CC-ALF filter selection is controlled at the CTU level for each chroma component.
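The derivation of the non-signalled eighth coefficient follows directly from the zero-sum constraint. A sketch (the function name is illustrative):

```python
def derive_last_coeff(signalled):
    """Given the 7 signalled CC-ALF coefficients, derive the eighth so
    that all 8 coefficients sum to zero."""
    return -sum(signalled)
```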
  • the reference encoder can be configured to enable some basic subjective tuning through the configuration file.
  • the VTM attenuates the application of CC-ALF in regions that are coded with high QP and are either near mid-grey or contain a large amount of luma high frequencies. Algorithmically, this is accomplished by disabling the application of CC-ALF in CTUs where any of the following conditions are true:
  • the slice QP value minus 1 is less than or equal to the base QP value.
  • ALF filter parameters are signalled in Adaptation Parameter Set (APS) .
  • up to 25 sets of luma filter coefficients and clipping value indexes, and up to eight sets of chroma filter coefficients and clipping value indexes could be signalled.
  • filter coefficients of different classification for luma component can be merged.
  • slice header the indices of the APSs used for the current slice are signalled.
  • The clipping values are derived as AlfClip = { round(2^(B − α×n)) : n ∈ [0, N−1] }, where B is the internal bit depth, α is a pre-defined constant value equal to 2.35, and N equal to 4 is the number of allowed clipping values in VVC.
  • The AlfClip is then rounded to the nearest value with the format of power of 2.
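This derivation can be sketched as follows, assuming the formula AlfClip_n = 2^(B − α·n) suggested by the constants in the text; the resulting values are illustrative, not quoted from the specification:

```python
import math

def alf_clip_values(bit_depth, alpha=2.35, n_values=4):
    """Clip values 2^(B - alpha*n), each rounded to the nearest power of 2."""
    vals = []
    for n in range(n_values):
        v = 2.0 ** (bit_depth - alpha * n)
        vals.append(2 ** round(math.log2(v)))   # round exponent -> power of 2
    return vals
```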
  • APS indices can be signalled to specify the luma filter sets that are used for the current slice.
  • the filtering process can be further controlled at CTB level.
  • a flag is always signalled to indicate whether ALF is applied to a luma CTB.
  • a luma CTB can choose a filter set among 16 fixed filter sets and the filter sets from APSs.
  • a filter set index is signaled for a luma CTB to indicate which filter set is applied.
  • the 16 fixed filter sets are pre-defined and hard-coded in both the encoder and the decoder.
  • an APS index is signalled in slice header to indicate the chroma filter sets being used for the current slice.
  • a filter index is signalled for each chroma CTB if there is more than one chroma filter set in the APS.
  • the filter coefficients are quantized with norm equal to 128.
  • A bitstream conformance is applied so that the coefficient value of the non-central position shall be in the range of −2^7 to 2^7 − 1, inclusive.
  • the central position coefficient is not signalled in the bitstream and is considered as equal to 128.
  • Classification in ALF is extended with an additional alternative band classifier.
  • a flag is signalled to indicate whether the alternative classifier is applied.
  • Geometric transformation is not applied to the alternative band classifier.
  • The band classifier derives the class index as class_index = (sum × 25) >> (bit depth + 2), where sum is the sum of the sample values in the block.
  • Block size for classification is reduced from 4x4 to 2x2.
  • Filter size for both luma and chroma, for which ALF coefficients are signalled, is increased to 9x9.
  • First, two 13x13 diamond-shaped fixed filters F_0 and F_1 are applied to derive two intermediate samples R_0(x, y) and R_1(x, y).
  • Then, F_2 is applied to R_0(x, y), R_1(x, y), and neighbouring samples to derive a filtered sample.
  • In the F_2 filtering, f_{i,j} is the clipped difference between a neighbouring sample and the current sample R(x, y), and g_i is the clipped difference between R_{i−20}(x, y) and the current sample.
  • M_{D,i} represents the total number of directionalities D_i.
  • Values of the horizontal, vertical, and two diagonal gradients are calculated for each sample using the 1-D Laplacian.
  • The sum of the sample gradients within a 4x4 window that covers the target 2x2 block is used for classifier C_0, and the sum of sample gradients within a 12x12 window is used for classifiers C_1 and C_2.
  • The sums of the horizontal, vertical and two diagonal gradients are denoted, respectively, as g_h, g_v, g_d1 and g_d2. The directionality D_i is determined by comparing these gradient sums against a set of thresholds.
  • The directionality D_2 is derived as in VVC using thresholds 2 and 4.5.
  • For D_0 and D_1, the horizontal/vertical edge strength and the diagonal edge strength are calculated first.
  • Thresholds Th = [1.25, 1.5, 2, 3, 4.5, 8] are used.
  • each set may have up to 25 filters.
  • In VVC, different ALFs can be derived through geometric transformation (e.g. diagonal, vertical or horizontal rotation) of an original ALF according to the gradient information derived for the block.
  • In other words, multiple ALFs derived through geometric transformation are available for the geometric transformation by gradient.
  • The present invention discloses multiple geometric transformations, including the conventional geometric transformation by gradient.
  • In one embodiment, an ALF filter has a footprint (a 7x7 diamond shape in this example) to cover selected neighbouring samples for the filtering process.
  • The neighbouring samples in the footprint are referred to as filter samples in this disclosure.
  • For the position with the smallest neighbouring difference, the filter coefficient is set to C0; for the position with the next smallest difference, the filter coefficient is set to C1; and so on for the remaining coefficients.
  • For the position with the largest difference, the filter coefficient is set to C11.
  • The neighbouring differences could be calculated and sorted according to one of the values described earlier (e.g. the sum of the two sample differences, the sum of their absolute values, or the absolute value of their sum, with or without clipping).
  • The sorting order can be ascending (i.e., C0 for the smallest one) or descending (i.e., C0 for the largest one). Therefore, for a given set of ALF coefficients (e.g. C0, C1, ..., C11), multiple ALFs can be derived according to neighbouring difference reordering. In other words, this neighbouring difference reordering generates a new type of geometric transformation according to the present invention.
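The reordering can be sketched as follows: symmetric sample pairs are ranked by a difference measure, and the coefficients C0, C1, ... are assigned in that order. The measure used here (sum of absolute differences) is one of the variants described above; any of the others could be substituted.

```python
def reorder_coefficients(center, pairs, coeffs, descending=False):
    """Assign ALF coefficients to symmetric sample pairs by difference order.
    pairs: list of (p0, p1) symmetric filter-sample values around center.
    coeffs: coefficients ordered C0, C1, ... (C0 gets the extreme measure).
    Returns the coefficient assigned to each pair, in original pair order."""
    measures = [abs(p0 - center) + abs(p1 - center) for p0, p1 in pairs]
    order = sorted(range(len(pairs)), key=lambda i: measures[i],
                   reverse=descending)
    assigned = [None] * len(pairs)
    for rank, i in enumerate(order):
        assigned[i] = coeffs[rank]       # rank 0 -> C0, rank 1 -> C1, ...
    return assigned
```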
  • In some embodiments, only part of the coefficients in the filter shape are reordered or transformed.
  • For example, only the centre 3x3 coefficients in the filter shape (e.g., C5, C6, C7, C11) are reordered or transformed.
  • In another example, the centre 5x5 coefficients in the filter shape (e.g., C5, C6, C7, C11, C1, C4, C10, C8) are reordered or transformed.
  • In yet another example, only the coefficients located in the outside region are rotated (e.g., C0, C1, C3, C4, C8, C9).
  • In one embodiment, rotation by one pre-defined angle is supported in the geometric transform. For example, if the sum of the vertical gradient is the largest one among all directions, the filter coefficients are rotated 90 degrees in the clockwise direction. If the sum of the 45-degree gradient is the largest one among all directions, the filter coefficients are rotated 45 degrees in the clockwise direction, as shown in Fig. 6A and Fig. 6B.
  • Similarly, if the sum of the 135-degree gradient is the largest one among all directions, the filter coefficients are rotated 135 degrees in the clockwise direction.
  • In another embodiment, more directions are calculated according to the horizontal and vertical gradients (e.g., 8 directions), and the coefficients are rotated accordingly.
  • In these cases, the rotated filter shapes can be different from each other.
  • In one embodiment, only a part of the coefficients is rotated in order to keep the same filter shape among different rotation cases. An example is shown in Fig. 7, where the filter is rotated by 90 degrees counter-clockwise except for the 4 positions at the top, bottom, leftmost and rightmost, while maintaining the ALF footprint the same as the original one.
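A sketch of this footprint-preserving partial rotation, under the assumption that coefficients are stored per (dy, dx) offset from the centre; the exact assignment in Fig. 7 may differ. A 90-degree counter-clockwise rotation maps (dy, dx) to (-dx, dy), and the four extreme positions are excluded from the rotation (they rotate only among themselves, so keeping them fixed causes no collisions):

```python
def rotate90_ccw_partial(coeffs, keep):
    """coeffs: dict {(dy, dx): value} over the filter footprint.
    keep: set of offsets left in place (e.g. top/bottom/left/right extremes).
    Rotates all other coefficients 90 degrees counter-clockwise."""
    out = {}
    for (dy, dx), v in coeffs.items():
        if (dy, dx) in keep:
            out[(dy, dx)] = v            # extreme positions stay put
        else:
            out[(-dx, dy)] = v           # 90-degree CCW rotation
    return out
```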
  • the 7x7 filter shape is used as an example to illustrate the geometric transformation according to embodiments of the present invention.
  • the present invention is not limited to this specific filter shape. Instead, the present invention can be applied to any ALF filter shape.
  • an index to indicate which geometric transformation to use among multiple geometric transformations is signalled (at the encoder side) or parsed (at the decoder side) per APS.
  • each filter set has one index signalled or parsed to indicate which geometric transformation to use.
  • each class has one index signalled or parsed to indicate which geometric transformation to use.
  • the index to indicate which geometric transformation to use among multiple geometric transformations can be inferred from classifier selection. For example, for gradient classifier, geometric transformation by gradient is always used, or for band classifier, geometric transformation by neighbouring reordering is always used.
  • any of the ALF methods described above can be implemented in encoders and/or decoders.
  • any of the proposed methods can be implemented in the in-loop filter module (e.g. ILPF 130 in Fig. 1A and Fig. 1B) of an encoder or a decoder.
  • Alternatively, any of the proposed methods can be implemented as a circuit coupled to the inter coding module of an encoder, and/or to the motion compensation module or merge candidate derivation module of a decoder.
  • the ALF methods may also be implemented using executable software or firmware codes stored on a media, such as hard disk or flash memory, for a CPU (Central Processing Unit) or programmable devices (e.g. DSP (Digital Signal Processor) or FPGA (Field Programmable Gate Array) ) .
  • Fig. 8 illustrates a flowchart of an exemplary video coding system that reorders ALF coefficients based on neighbouring differences according to an embodiment of the present invention.
  • the steps shown in the flowchart may be implemented as program codes executable on one or more processors (e.g., one or more CPUs) at the encoder side.
  • The steps shown in the flowchart may also be implemented based on hardware, such as one or more electronic devices or processors arranged to perform the steps in the flowchart.
  • reconstructed pixels are received in step 810, wherein the reconstructed pixels comprise current reconstructed pixels in a current block.
  • a current to-be-filtered sample among the reconstructed pixels in the current block and a set of filter samples surrounding the current to-be-filtered sample are determined in step 820.
  • Difference measures are derived for at least a partial set of filter samples in step 830, wherein each of the difference measures is related to sample differences between a pair of respective filter samples and the current to-be-filtered sample, and the pair of respective filter samples are located symmetrically with respect to the current to-be-filtered sample.
  • At least a partial set of ALF coefficients are assigned to at least the partial set of filter samples according to the difference measures in step 840.
  • a filtered output sample is derived by applying an ALF with at least the partial set of ALF coefficients to the current to-be-filtered sample in step 850.
  • The filtered output sample is provided in step 860.
  • the ALF processed samples can be stored in the reference picture buffer to form prediction for subsequent video data.
  • the ALF processed samples can be readily provided to form decoded video.
  • Fig. 9 illustrates a flowchart of an exemplary video coding system that uses an index to select a target geometric transformation from a set of geometric transformations according to an embodiment of the present invention.
  • Reconstructed pixels are received in step 910, wherein the reconstructed pixels comprise current reconstructed pixels in a current block.
  • A target geometric transformation is determined from a set of geometric transformations according to an index in step 920.
  • A filtered block is derived by applying the target geometric transformation to the current reconstructed pixels in step 930.
  • The filtered block is provided in step 940.
  • The ALF-processed samples can be stored in the reference picture buffer to form prediction for subsequent video data.
  • The ALF-processed samples can also be readily provided to form decoded video.
  • Embodiments of the present invention as described above may be implemented in various hardware, software codes, or a combination of both.
  • For example, an embodiment of the present invention can be one or more circuits integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein.
  • An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein.
  • The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA).
  • These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
  • The software code or firmware code may be developed in different programming languages and different formats or styles.
  • The software code may also be compiled for different target platforms.
  • However, different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.


Abstract

Methods and apparatus for geometric transformation by neighbouring difference reordering. According to one method, a current sample among the reconstructed pixels in the current block and a set of filter samples surrounding the current sample are determined. Difference measures are derived for at least a partial set of filter samples, where each of the difference measures is related to sample differences between a pair of respective filter samples and the current sample, and the pair of respective filter samples are located symmetrically with respect to the current sample. At least a partial set of ALF coefficients are assigned to at least the partial set of filter samples according to the difference measures. According to another method, a target geometric transformation is determined from a set of geometric transformations according to an index. A filtered block is then derived by applying the target geometric transformation to the current reconstructed pixels.

Description

METHOD AND APPARATUS FOR ADAPTIVE LOOP FILTER WITH GEOMETRIC TRANSFORM FOR VIDEO CODING
CROSS REFERENCE TO RELATED APPLICATIONS
The present invention is a non-Provisional Application of and claims priority to U.S. Provisional Patent Application No. 63/368,903, filed on July 20, 2022. The U.S. Provisional Patent Application is hereby incorporated by reference in its entirety.
FIELD OF THE INVENTION
The present invention relates to video coding system using ALF (Adaptive Loop Filter) . In particular, the present invention relates to the ALF using new geometric transform and signalling thereof.
BACKGROUND
Versatile video coding (VVC) is the latest international video coding standard developed by the Joint Video Experts Team (JVET) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). The standard has been published as an ISO standard: ISO/IEC 23090-3:2021, Information technology - Coded representation of immersive media - Part 3: Versatile video coding, published Feb. 2021. VVC is developed based on its predecessor HEVC (High Efficiency Video Coding) by adding more coding tools to improve coding efficiency and also to handle various types of video sources including 3-dimensional (3D) video signals.
Fig. 1A illustrates an exemplary adaptive Inter/Intra video coding system incorporating loop processing. For Intra Prediction 110, the prediction data is derived based on previously coded video data in the current picture. For Inter Prediction 112, Motion Estimation (ME) is performed at the encoder side and Motion Compensation (MC) is performed based on the result of ME to provide prediction data derived from other picture(s) and motion data. Switch 114 selects Intra Prediction 110 or Inter Prediction 112 and the selected prediction data is supplied to Adder 116 to form prediction errors, also called residues. The prediction error is then processed by Transform (T) 118 followed by Quantization (Q) 120. The transformed and quantized residues are then coded by Entropy Encoder 122 to be included in a video bitstream corresponding to the compressed video data. The bitstream associated with the transform coefficients is then packed with side information such as motion and coding modes associated with Intra prediction and Inter prediction, and other information such as parameters associated with loop filters applied to the underlying image area. The side information associated with Intra Prediction 110, Inter Prediction 112 and In-loop Filter 130 is provided to Entropy Encoder 122 as shown in Fig. 1A. When an Inter-prediction mode is used, a reference picture or pictures have to be reconstructed at the encoder end as well. Consequently, the transformed and quantized residues are processed by Inverse Quantization (IQ) 124 and Inverse Transformation (IT) 126 to recover the residues. The residues are then added back to prediction data 136 at Reconstruction (REC) 128 to reconstruct video data. The reconstructed video data may be stored in Reference Picture Buffer 134 and used for prediction of other frames.
As shown in Fig. 1A, incoming video data undergoes a series of processing in the encoding system. The reconstructed video data from REC 128 may be subject to various impairments due to a series of processing. Accordingly, in-loop filter 130 is often applied to the reconstructed video data before the reconstructed video data are stored in the Reference Picture Buffer 134 in order to improve video quality. For example, deblocking filter (DF) , Sample Adaptive Offset (SAO) and Adaptive Loop Filter (ALF) may be used. The loop filter information may need to be incorporated in the bitstream so that a decoder can properly recover the required information. Therefore, loop filter information is also provided to Entropy Encoder 122 for incorporation into the bitstream. In Fig. 1A, Loop filter 130 is applied to the reconstructed video before the reconstructed samples are stored in the reference picture buffer 134. The system in Fig. 1A is intended to illustrate an exemplary structure of a typical video encoder. It may correspond to the High Efficiency Video Coding (HEVC) system, VP8, VP9, H. 264 or VVC.
The decoder, as shown in Fig. 1B, can use similar or a portion of the same functional blocks as the encoder except for Transform 118 and Quantization 120 since the decoder only needs Inverse Quantization 124 and Inverse Transform 126. Instead of Entropy Encoder 122, the decoder uses an Entropy Decoder 140 to decode the video bitstream into quantized transform coefficients and needed coding information (e.g. ILPF information, Intra prediction information and Inter prediction information). The Intra Prediction 150 at the decoder side does not need to perform the mode search. Instead, the decoder only needs to generate Intra prediction according to Intra prediction information received from the Entropy Decoder 140. Furthermore, for Inter prediction, the decoder only needs to perform motion compensation (MC 152) according to Inter prediction information received from the Entropy Decoder 140 without the need for motion estimation.
According to VVC, an input picture is partitioned into non-overlapped square block regions referred to as CTUs (Coding Tree Units), similar to HEVC. Each CTU can be partitioned into one or multiple smaller size coding units (CUs). The resulting CU partitions can be in square or rectangular shapes. Also, VVC divides a CTU into prediction units (PUs) as a unit to apply a prediction process, such as Inter prediction, Intra prediction, etc.
In the present invention, Adaptive Loop Filter (ALF) with new geometric transform and signalling associated with the new geometric transform are disclosed for the emerging video coding development beyond the VVC.
BRIEF SUMMARY OF THE INVENTION
A method and apparatus for video coding using ALF (Adaptive Loop Filter) are disclosed. According to the method, reconstructed pixels are received, wherein the reconstructed pixels comprise current reconstructed pixels in a current block. A current to-be-filtered sample among the reconstructed pixels in the current block and a set of filter samples surrounding the current to-be-filtered sample are determined. Difference measures for at least a partial set of filter samples are derived, wherein each of the difference measures is related to sample differences between a pair of respective filter samples and the current to-be-filtered sample, and the pair of respective filter samples are located symmetrically with respect to the current to-be-filtered sample. At least a partial set of ALF coefficients are assigned to at least the partial set of filter samples according to the difference measures. A filtered output sample is derived by applying an ALF with at least the partial set of ALF coefficients to the current to-be-filtered sample.
In one embodiment, at least the partial set of ALF coefficients are assigned to at least the partial set of filter samples according to an ascending order of the difference measures. In another embodiment, at least the partial set of ALF coefficients are assigned to at least the partial set of filter samples according to a descending order of the difference measures.
In one embodiment, said each of the difference measures comprises a first term related to a first difference corresponding to a first sample difference between a first one of the pair of respective filter samples and the current to-be-filtered sample, and a second difference corresponding to a second sample difference between a second one of the pair of respective filter samples and the current to-be-filtered sample. In one embodiment, said each of the difference measures corresponds to a sum of the first sample difference and the second sample difference. In another embodiment, said each of the difference measures corresponds to a sum of absolute value of the first sample difference and absolute value of the second sample difference. In yet another embodiment, said each of the difference measures corresponds to an absolute value of a sum of the first sample difference and the second sample difference. In yet another embodiment, said each of the difference measures corresponds to a sum of a clipped first sample difference and a clipped second sample difference. In yet another embodiment, said each of the difference measures corresponds to a sum of absolute value of a clipped first sample difference and absolute value of a clipped second sample difference. In yet another embodiment, said each of the difference measures corresponds to an absolute value of a sum of a clipped first sample difference and a clipped sample second difference.
In one embodiment, said at least the partial set of filter samples is the same as the set of filter samples. In another embodiment, said at least the partial set of filter samples is less than the set of filter samples.
According to another method, reconstructed pixels are received, wherein the reconstructed pixels comprise current reconstructed pixels in a current block. A target geometric transformation is determined from a set of geometric transformations according to an index. A filtered block is derived by applying the target geometric transformation to the current reconstructed pixels. The filtered block is provided.
In one embodiment, the index is signalled or parsed per APS (Adaptation Parameter Set), filter set or class. In one embodiment, the index is inferred based on a target ALF classifier selected. For example, the target geometric transformation may correspond to a geometric transformation by gradient in response to the target ALF classifier selected being a gradient classifier. For another example, the target geometric transformation may correspond to a geometric transformation by neighbouring reordering in response to the target ALF classifier selected being a band classifier.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1A illustrates an exemplary adaptive Inter/Intra video coding system incorporating loop processing.
Fig. 1B illustrates a corresponding decoder for the encoder in Fig. 1A.
Fig. 2 illustrates the ALF filter shapes for the chroma (left) and luma (right) components.
Figs. 3A-D illustrate the subsampled Laplacian calculations for gv (3A), gh (3B), gd1 (3C) and gd2 (3D).
Fig. 4A illustrates the placement of CC-ALF with respect to other loop filters.
Fig. 4B illustrates a diamond shaped filter for the chroma samples.
Fig. 5 illustrates an example of neighbouring sample positions used to derive the neighbouring differences for ALF filter coefficient reordering according to an embodiment of the present invention.
Fig. 6A illustrates an example of rotating the ALF filter in Fig. 5 by 45 degrees clockwise.
Fig. 6B illustrates another example of rotating the ALF filter in Fig. 5 by 45 degrees clockwise.
Fig. 7 illustrates an example of rotating only part of ALF coefficients of the filter in Fig. 5.
Fig. 8 illustrates a flowchart of an exemplary video coding system that reorders ALF coefficients based on neighbouring differences according to an embodiment of the present invention.
Fig. 9 illustrates a flowchart of an exemplary video coding system that uses an index to select a target geometric transformation from a set of geometric transformations according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the systems and methods of the present invention, as represented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention.  References throughout this specification to “one embodiment, ” “an embodiment, ” or similar language mean that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, etc. In other instances, well-known structures, or operations are not shown or described in detail to avoid obscuring aspects of the invention. The illustrated embodiments of the invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of apparatus and methods that are consistent with the invention as claimed herein.
Adaptive Loop Filter in VVC
In VVC, an Adaptive Loop Filter (ALF) with block-based filter adaption is applied. For the luma component, one filter is selected among 25 filters for each 4×4 block, based on the direction and activity of local gradients.
Filter shape
Two diamond filter shapes (as shown in Fig. 2) are used. The 7×7 diamond shape 220 is applied for the luma component and the 5×5 diamond shape 210 is applied for the chroma components.
Block classification
For the luma component, each 4×4 block is categorized into one out of 25 classes. The classification index C is derived based on its directionality D and a quantized value of activity Â as follows:

C = 5D + Â.
To calculate D and Â, gradients of the horizontal, vertical and two diagonal directions are first calculated using 1-D Laplacian:

g_v = Σ_{k=i-2..i+3} Σ_{l=j-2..j+3} V_{k,l},  V_{k,l} = |2R(k,l) - R(k,l-1) - R(k,l+1)|,

g_h = Σ_{k=i-2..i+3} Σ_{l=j-2..j+3} H_{k,l},  H_{k,l} = |2R(k,l) - R(k-1,l) - R(k+1,l)|,

g_d1 = Σ_{k=i-2..i+3} Σ_{l=j-2..j+3} D1_{k,l},  D1_{k,l} = |2R(k,l) - R(k-1,l-1) - R(k+1,l+1)|,

g_d2 = Σ_{k=i-2..i+3} Σ_{l=j-2..j+3} D2_{k,l},  D2_{k,l} = |2R(k,l) - R(k-1,l+1) - R(k+1,l-1)|,

where indices i and j refer to the coordinates of the upper left sample within the 4×4 block and R(i, j) indicates a reconstructed sample at coordinate (i, j).
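The 1-D Laplacian sums above can be sketched as follows. This is an illustration only (not VTM/VVC reference code); the function name and array interface are our own.

```python
# Illustrative sketch: 1-D Laplacian gradient sums g_v, g_h, g_d1, g_d2 for
# one 4x4 block.  R is a 2-D list of reconstructed samples and (i, j) is the
# upper-left coordinate of the block.

def laplacian_gradients(R, i, j):
    gv = gh = gd1 = gd2 = 0
    for k in range(i - 2, i + 4):          # k = i-2 .. i+3
        for l in range(j - 2, j + 4):      # l = j-2 .. j+3
            c = 2 * R[k][l]
            gv  += abs(c - R[k][l - 1]     - R[k][l + 1])      # V(k,l)
            gh  += abs(c - R[k - 1][l]     - R[k + 1][l])      # H(k,l)
            gd1 += abs(c - R[k - 1][l - 1] - R[k + 1][l + 1])  # D1(k,l)
            gd2 += abs(c - R[k - 1][l + 1] - R[k + 1][l - 1])  # D2(k,l)
    return gv, gh, gd1, gd2
```

Note that, as described next, the actual design subsamples these positions (Figs. 3A-D); for clarity this sketch sums every position in the window.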
To reduce the complexity of block classification, the subsampled 1-D Laplacian calculation is applied to the vertical direction (Fig. 3A) and the horizontal direction (Fig. 3B) . As shown in Figs. 3C-D, the same subsampled positions are used for gradient calculation of all directions (gd1 in Fig. 3C and gd2 in Fig. 3D) .
Then the maximum and minimum values of the gradients of the horizontal and vertical directions are set as:

g^max_{h,v} = max(g_h, g_v),  g^min_{h,v} = min(g_h, g_v).

The maximum and minimum values of the gradients of the two diagonal directions are set as:

g^max_{d1,d2} = max(g_d1, g_d2),  g^min_{d1,d2} = min(g_d1, g_d2).
To derive the value of the directionality D, these values are compared against each other and with two thresholds t1 and t2:
Step 1. If both g^max_{h,v} ≤ t1·g^min_{h,v} and g^max_{d1,d2} ≤ t1·g^min_{d1,d2} are true, D is set to 0.
Step 2. If g^max_{h,v}/g^min_{h,v} > g^max_{d1,d2}/g^min_{d1,d2}, continue from Step 3; otherwise continue from Step 4.
Step 3. If g^max_{h,v} > t2·g^min_{h,v}, D is set to 2; otherwise D is set to 1.
Step 4. If g^max_{d1,d2} > t2·g^min_{d1,d2}, D is set to 4; otherwise D is set to 3.
The activity value A is calculated as:

A = Σ_{k=i-2..i+3} Σ_{l=j-2..j+3} (V_{k,l} + H_{k,l}).

A is further quantized to the range of 0 to 4, inclusively, and the quantized value is denoted as Â.
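The four-step directionality decision and the class index can be sketched as follows. This is an illustrative outline only; the thresholds t1 = 2 and t2 = 4.5 follow the VVC choice (also noted in the ECM section of this document), and the function name is our own.

```python
# Illustrative sketch of the directionality Steps 1-4 above and the final
# class index C = 5D + A_hat, where A_hat is the quantized activity.

def luma_class(gv, gh, gd1, gd2, A_hat, t1=2, t2=4.5):
    g_hv_max, g_hv_min = max(gh, gv), min(gh, gv)
    g_d_max,  g_d_min  = max(gd1, gd2), min(gd1, gd2)
    if g_hv_max <= t1 * g_hv_min and g_d_max <= t1 * g_d_min:
        D = 0                                        # Step 1: no clear direction
    elif g_hv_max * g_d_min > g_d_max * g_hv_min:    # Step 2 (ratio test, cross-multiplied)
        D = 2 if g_hv_max > t2 * g_hv_min else 1     # Step 3
    else:
        D = 4 if g_d_max > t2 * g_d_min else 3       # Step 4
    return 5 * D + A_hat                             # C = 5D + A_hat
```

The cross-multiplied form of Step 2 avoids divisions by zero for flat blocks while preserving the comparison of the two ratios.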
For chroma components in a picture, no classification is applied.
Geometric transformations of filter coefficients and clipping values
Before filtering each 4×4 luma block, geometric transformations such as rotation or diagonal and vertical flipping are applied to the filter coefficients f (k, l) and to the corresponding filter  clipping values c (k, l) depending on gradient values calculated for that block. This is equivalent to applying these transformations to the samples in the filter support region. The idea is to make different blocks to which ALF is applied more similar by aligning their directionality.
Three geometric transformations, including diagonal, vertical flip and rotation are introduced:
Diagonal: fD (k, l) =f (l, k) , cD (k, l) =c (l, k) ,
Vertical flip: fV (k, l) =f (k, K-l-1) , cV (k, l) =c (k, K-l-1) ,
Rotation: fR (k, l) =f (K-l-1, k) , cR (k, l) =c (K-l-1, k) ,
where K is the size of the filter and 0 ≤ k, l ≤ K-1 are coefficient coordinates, such that location (0, 0) is at the upper left corner and location (K-1, K-1) is at the lower right corner. The transformations are applied to the filter coefficients f(k, l) and to the clipping values c(k, l) depending on gradient values calculated for that block. The relationship between the transformation and the four gradients of the four directions is summarized in the following table.
Table 1. Mapping of the gradient calculated for one block and the transformations

Gradient values                    Transformation
g_d2 < g_d1 and g_h < g_v          No transformation
g_d2 < g_d1 and g_v < g_h          Diagonal
g_d1 < g_d2 and g_h < g_v          Vertical flip
g_d1 < g_d2 and g_v < g_h          Rotation
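The three transformations above can be sketched as simple index remappings of a K x K coefficient array; this is an illustration only, and the same remappings apply to the clipping values c(k, l).

```python
# Illustration of the three ALF coefficient transformations as remappings of
# a K x K coefficient array f (a list of rows).

def diagonal(f):
    K = len(f)
    return [[f[l][k] for l in range(K)] for k in range(K)]          # fD(k,l) = f(l,k)

def vertical_flip(f):
    K = len(f)
    return [[f[k][K - 1 - l] for l in range(K)] for k in range(K)]  # fV(k,l) = f(k,K-l-1)

def rotation(f):
    K = len(f)
    return [[f[K - 1 - l][k] for l in range(K)] for k in range(K)]  # fR(k,l) = f(K-l-1,k)
```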
Filtering process
At the decoder side, when ALF is enabled for a CTB, each sample R(i, j) within the CU is filtered, resulting in sample value R′(i, j) as shown below:

R′(i, j) = R(i, j) + ((Σ_{(k,l)≠(0,0)} f(k, l) × K(R(i+k, j+l) - R(i, j), c(k, l)) + 64) >> 7),

where f(k, l) denotes the decoded filter coefficients, K(x, y) is the clipping function and c(k, l) denotes the decoded clipping parameters. The variables k and l vary between -L/2 and L/2, where L denotes the filter length. The clipping function K(x, y) = min(y, max(-y, x)) corresponds to the function Clip3(-y, y, x). The clipping operation introduces non-linearity to make ALF more efficient by reducing the impact of neighbouring sample values that are too different from the current sample value.
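The non-linear filtering of one sample can be sketched as follows. The `taps` interface is an assumption of this sketch (a list of offset/coefficient/clipping entries for the non-centre positions); the offset 64 and the right shift by 7 reflect the 7-bit coefficient precision of VVC ALF.

```python
# Minimal sketch of non-linear ALF filtering at one sample position (i, j).
# Each tap is (dy, dx, f, c): spatial offset, coefficient, clipping value.

def clip_k(x, y):
    return min(y, max(-y, x))              # K(x, y) = Clip3(-y, y, x)

def alf_filter_sample(R, i, j, taps):
    acc = 0
    for dy, dx, f, c in taps:
        # clipped difference between neighbour and current sample
        acc += f * clip_k(R[i + dy][j + dx] - R[i][j], c)
    return R[i][j] + ((acc + 64) >> 7)     # 7-bit coefficient normalization
```

With a small clipping value c, a large neighbour difference contributes only its clipped amount, which limits the filter update near strong edges.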
Cross Component Adaptive Loop Filter
CC-ALF uses luma sample values to refine each chroma component by applying an adaptive, linear filter to the luma channel and then using the output of this filtering operation for chroma refinement. Fig. 4A provides a system level diagram of the CC-ALF process with respect to the SAO,  luma ALF and chroma ALF processes. As shown in Fig. 4A, each colour component (i.e., Y, Cb and Cr) is processed by its respective SAO (i.e., SAO Luma 410, SAO Cb 412 and SAO Cr 414) . After SAO, ALF Luma 420 is applied to the SAO-processed luma and ALF Chroma 430 is applied to SAO-processed Cb and Cr. However, there is a cross-component term from luma to a chroma component (i.e., CC-ALF Cb 422 and CC-ALF Cr 424) . The outputs from the cross-component ALF are added (using adders 432 and 434 respectively) to the outputs from ALF Chroma 430.
Filtering in CC-ALF is accomplished by applying a linear, diamond shaped filter (e.g. filters 440 and 442 in Fig. 4B) to the luma channel. In Fig. 4B, a blank circle indicates a luma sample and a dot-filled circle indicates a chroma sample. One filter is used for each chroma channel, and the operation is expressed as:

ΔI_i(x, y) = Σ_{(x0, y0)∈S_i} R_Y(x_Y + x0, y_Y + y0) × c_i(x0, y0),

where (x, y) is the chroma component i location being refined, (x_Y, y_Y) is the luma location based on (x, y), S_i is the filter support area in the luma component, and c_i(x0, y0) represents the filter coefficients. The c_i in the above equation may correspond to Cb or Cr.
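The refinement of one chroma sample can be sketched as below. This is a hedged illustration: the (2x, 2y) luma anchor assumes 4:2:0 subsampling, and the 7-bit shift is an assumption of this sketch rather than a statement about the specification.

```python
# Hedged sketch of CC-ALF refinement for one chroma sample: a small diamond
# of collocated luma samples is filtered and the (scaled) result is added to
# the ALF-filtered chroma sample.  Each support entry is (dy, dx, coeff).

def ccalf_refine(chroma, luma, x, y, support, shift=7):
    y_l, x_l = 2 * y, 2 * x                 # collocated luma position (4:2:0 assumed)
    delta = sum(c * luma[y_l + dy][x_l + dx] for dy, dx, c in support)
    return chroma[y][x] + (delta >> shift)
```

Because the coefficients sum to zero (see the derived eighth coefficient below in the characteristics list), the luma contribution is high-pass: a flat luma region yields no chroma change.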
As shown in Fig. 4B, the luma filter support is the region collocated with the current chroma sample after accounting for the spatial scaling factor between the luma and chroma planes.
In the VVC reference software, CC-ALF filter coefficients are computed by minimizing the mean square error of each chroma channel with respect to the original chroma content. To achieve this, the VTM (VVC Test Model) algorithm uses a coefficient derivation process similar to the one used for chroma ALF. Specifically, a correlation matrix is derived, and the coefficients are computed using a Cholesky decomposition solver in an attempt to minimize a mean square error metric. In designing the filters, a maximum of 8 CC-ALF filters can be designed and transmitted per picture. The resulting filters are then indicated for each of the two chroma channels on a CTU basis.
Additional characteristics of CC-ALF include:
● The design uses a 3x4 diamond shape with 8 taps.
● Seven filter coefficients are transmitted in the APS.
● Each of the transmitted coefficients has a 6-bit dynamic range and is restricted to power-of-2 values.
● The eighth filter coefficient is derived at the decoder such that the sum of the filter coefficients is equal to 0.
● An APS may be referenced in the slice header.
● CC-ALF filter selection is controlled at CTU-level for each chroma component.
● Boundary padding for the horizontal virtual boundaries uses the same memory access pattern as luma ALF.
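The decoder-side derivation of the eighth coefficient noted in the list above amounts to negating the sum of the seven transmitted coefficients, so that all eight sum to zero; a minimal sketch:

```python
# Sketch of the decoder-side rule: with seven transmitted CC-ALF
# coefficients, the eighth is chosen so the eight coefficients sum to 0.

def derive_eighth_coefficient(transmitted):
    assert len(transmitted) == 7
    return -sum(transmitted)
```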
As an additional feature, the reference encoder can be configured to enable some basic subjective tuning through the configuration file. When enabled, the VTM attenuates the application of CC-ALF in regions that are coded with high QP and are either near mid-grey or contain a large amount of luma high frequencies. Algorithmically, this is accomplished by disabling the application of CC-ALF in CTUs where any of the following conditions are true:
● The slice QP value minus 1 is less than or equal to the base QP value.
● The number of chroma samples for which the local contrast is greater than (1 << (bitDepth –2) ) –1 exceeds the CTU height, where the local contrast is the difference between the maximum and minimum luma sample values within the filter support region.
● More than a quarter of chroma samples are in the range between (1 << (bitDepth - 1)) - 16 and (1 << (bitDepth - 1)) + 16.
The motivation for this functionality is to provide some assurance that CC-ALF does not amplify artefacts introduced earlier in the decoding path (this is largely due to the fact that the VTM currently does not explicitly optimize for chroma subjective quality). It is anticipated that alternative encoder implementations may either not use this functionality or incorporate alternative strategies suitable for their encoding characteristics.
Filter parameters signalling
ALF filter parameters are signalled in Adaptation Parameter Set (APS) . In one APS, up to 25 sets of luma filter coefficients and clipping value indexes, and up to eight sets of chroma filter coefficients and clipping value indexes could be signalled. To reduce bits overhead, filter coefficients of different classification for luma component can be merged. In slice header, the indices of the APSs used for the current slice are signalled.
Clipping value indexes, which are decoded from the APS, allow determining clipping values using a table of clipping values for both luma and chroma components. These clipping values are dependent on the internal bit depth. More precisely, the clipping values are obtained by the following formula:
AlfClip = { round(2^(B - α·n)) for n ∈ [0..N-1] },

with B equal to the internal bit depth, α a pre-defined constant value equal to 2.35, and N equal to 4, which is the number of allowed clipping values in VVC. Each AlfClip value is then rounded to the nearest value in the format of a power of 2.
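The clipping-value derivation above can be sketched as follows; rounding the exponent log2(v) is one way to realize the power-of-two snapping, used here as an assumption of this sketch.

```python
import math

# Sketch of the clipping-value table: AlfClip = round(2^(B - alpha*n)) for
# n in [0..N-1], with each value then snapped to a power of two.

def alf_clip_table(B, alpha=2.35, N=4):
    vals = [round(2 ** (B - alpha * n)) for n in range(N)]
    return [1 << round(math.log2(v)) for v in vals]
```

For a 10-bit internal bit depth this yields a strictly decreasing table starting at 2^10, with each entry a power of two.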
In slice header, up to 7 APS indices can be signalled to specify the luma filter sets that are used for the current slice. The filtering process can be further controlled at CTB level. A flag is always signalled to indicate whether ALF is applied to a luma CTB. A luma CTB can choose a filter set among 16 fixed filter sets and the filter sets from APSs. A filter set index is signaled for a luma CTB to indicate which filter set is applied. The 16 fixed filter sets are pre-defined and hard-coded in both the encoder and the decoder.
For the chroma component, an APS index is signalled in slice header to indicate the chroma filter sets being used for the current slice. At CTB level, a filter index is signalled for each chroma CTB if there is more than one chroma filter set in the APS.
The filter coefficients are quantized with norm equal to 128. In order to restrict the multiplication complexity, a bitstream conformance is applied so that the coefficient value of the non-central position shall be in the range of -2^7 to 2^7 - 1, inclusive. The central position coefficient is not signalled in the bitstream and is considered as equal to 128.
Adaptive Loop Filter in ECM
Alternative 2x2 ALF band classifier
Classification in ALF is extended with an additional alternative band classifier. For a signalled luma filter set, a flag is signalled to indicate whether the alternative classifier is applied. Geometric transformation is not applied to the alternative band classifier. When the band-based classifier is applied, the sum of sample values of a 2x2 luma block is calculated at first. Then the class index is calculated as below,
class_index = (sum *25) >> (sample bit depth + 2) .
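The band-classifier formula above can be sketched directly; the function name and list interface are our own.

```python
# Sketch of the alternative band classifier: the class index for a 2x2 luma
# block is derived from the sum of its four sample values.

def band_class(samples_2x2, bit_depth):
    total = sum(samples_2x2)                  # sum of the four samples
    return (total * 25) >> (bit_depth + 2)    # class index in [0, 24]
```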
ALF simplification
ALF gradient subsampling and ALF virtual boundary processing are removed. Block size for classification is reduced from 4x4 to 2x2. Filter size for both luma and chroma, for which ALF coefficients are signalled, is increased to 9x9.
ALF with fixed filters
To filter a luma sample, three different classifiers (C0, C1 and C2) and three different sets of filters (F0, F1 and F2) are used. Sets F0 and F1 contain fixed filters, with coefficients trained for classifiers C0 and C1. Coefficients of filters in F2 are signalled. Which filter from a set Fi is used for a given sample is decided by a class Ci assigned to this sample using classifier Ci.
Filtering
At first, two 13x13 diamond shape fixed filters F0 and F1 are applied to derive two intermediate samples R0(x, y) and R1(x, y). After that, F2 is applied to R0(x, y), R1(x, y), and neighbouring samples to derive a filtered sample as:

R̃(x, y) = R(x, y) + ((Σ_{i=0..19} Σ_j c_i · f_{i,j} + Σ_{i=20..21} c_i · g_i + 64) >> 7),

where f_{i,j} is the clipped difference between a neighbouring sample and the current sample R(x, y), and g_i is the clipped difference between R_{i-20}(x, y) and the current sample. The filter coefficients c_i, i = 0, …, 21, are signalled.
Classification
Based on directionality D_i and activity Â_i, a class C_i is assigned to each 2x2 block:

C_i = M_{D,i} · Â_i + D_i,

where M_{D,i} represents the total number of directionalities D_i.
As in VVC, values of the horizontal, vertical, and two diagonal gradients are calculated for each sample using 1-D Laplacian. The sum of the sample gradients within a 4×4 window that covers the target 2×2 block is used for classifier C0 and the sum of sample gradients within a 12×12 window is used for classifiers C1 and C2. The sums of horizontal, vertical and two diagonal gradients are denoted, respectively, as g_h, g_v, g_d1 and g_d2. The directionality D_i is determined by comparing these gradient sums with a set of thresholds. The directionality D_2 is derived as in VVC using thresholds 2 and 4.5. For D_0 and D_1, the horizontal/vertical edge strength E_HV and the diagonal edge strength E_D are calculated first. Thresholds Th = [1.25, 1.5, 2, 3, 4.5, 8] are used. Edge strength E_HV is 0 if g_hv_max ≤ Th[0]·g_hv_min; otherwise, E_HV is the maximum integer such that g_hv_max > Th[E_HV - 1]·g_hv_min. Edge strength E_D is 0 if g_d_max ≤ Th[0]·g_d_min; otherwise, E_D is the maximum integer such that g_d_max > Th[E_D - 1]·g_d_min. When E_HV > E_D, i.e., horizontal/vertical edges are dominant, D_i is derived by using Table 2A; otherwise, diagonal edges are dominant and D_i is derived by using Table 2B.
Table 2A. Mapping of E_HV and E_D to D_i

Table 2B. Mapping of E_D and E_HV to D_i
To obtain Â_i, the sum of vertical and horizontal gradients A_i is mapped to the range of 0 to n, where n is equal to 4 for Â_2 and 15 for Â_0 and Â_1.
In an ALF_APS, up to 4 luma filter sets are signalled, each set may have up to 25 filters.
In the present invention, techniques to improve the ALF performance are disclosed as follows.
ALF with Alternative Geometric Transformation
The geometric transformation in ALF reorganizes the neighbouring sample differences, potentially making the statistics more consistent, which is beneficial for deriving more general adaptive filters. For the ALF design in the ECM (Muhammed Coban, et al., “Algorithm description of Enhanced Compression Model 5 (ECM 5)”, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29, 26th Meeting, by teleconference, 20-29 April 2022, Document: JVET-Z2025), adaptive filters can select not to apply geometric transformation by selecting the newly introduced band classifier, and this modification shows BD-rate gains, indicating that the current geometric transformation design may not be effective sometimes. Therefore, in the present invention, some new geometric transformation types are disclosed.
Geometric Transformation using Neighbouring Reordering
In the conventional geometric transformation, different ALFs can be derived through geometric transformation (e.g. diagonal, vertical or horizontal rotation) of an original ALF according to the gradient information derived for the block. In other words, multiple ALFs through geometric transformation are available for the geometric transformation by gradient. In the present invention, multiple geometric transformations, including the conventional geometric transformation by gradient, are disclosed. In one embodiment, geometric transformations of filter coefficients depend on the sorted neighbouring differences. For example, first we calculate (Ii- - Ic) + (Ii+ - Ic) for i = 0..11, where Ix represents the sample value of position x. Fig. 5 illustrates an example of positions of x (i.e., i- and i+ for i = 0, …, 11). As shown in Fig. 5, an ALF filter has a footprint (a 7x7 diamond shape in this example) to cover selected neighbouring samples for the filtering process. The neighbouring samples in the footprint are referred to as filter samples in this disclosure. Then, for the position with the smallest difference, the filter coefficient is set to C0; for the position with the next smallest difference, the filter coefficient is set to C1; and so on for the remaining coefficients. Finally, for the position with the largest difference, the filter coefficient is set to C11.
In the above embodiment, the neighbouring differences can be sorted according to one of the following values:
(1) (Ii- − Ic) + (Ii+ − Ic).
(2) abs(Ii- − Ic) + abs(Ii+ − Ic).
(3) abs((Ii- − Ic) + (Ii+ − Ic)).
(4) K(Ii- − Ic, ki) + K(Ii+ − Ic, ki), where K(x, k) = clip3(-k, k, x).
(5) abs(K(Ii- − Ic, ki)) + abs(K(Ii+ − Ic, ki)), where K(x, k) = clip3(-k, k, x).
(6) abs(K(Ii- − Ic, ki) + K(Ii+ − Ic, ki)), where K(x, k) = clip3(-k, k, x).
In the above equations, all the values involve the same two basic terms, i.e., (Ii- − Ic) and (Ii+ − Ic), which are referred to as a first sample difference and a second sample difference for a pair of samples located at i- and i+ with respect to the current to-be-filtered sample at c.
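As a concrete illustration, the six candidate values above can be sketched in Python as follows. This is a hypothetical sketch; the function names (clip3, difference_measure) and the mode numbering are illustrative and mirror items (1) to (6) above rather than any reference software.

```python
# Hypothetical sketch of the six candidate difference measures (1)-(6).
# a, b: sample values at the symmetric positions i- and i+;
# c: the current to-be-filtered sample; k: the clipping bound ki.

def clip3(lo, hi, x):
    # clip3(-k, k, x) as used in measures (4)-(6)
    return max(lo, min(hi, x))

def difference_measure(a, b, c, k, mode):
    d1, d2 = a - c, b - c  # first and second sample differences
    if mode == 1:
        return d1 + d2
    if mode == 2:
        return abs(d1) + abs(d2)
    if mode == 3:
        return abs(d1 + d2)
    if mode == 4:
        return clip3(-k, k, d1) + clip3(-k, k, d2)
    if mode == 5:
        return abs(clip3(-k, k, d1)) + abs(clip3(-k, k, d2))
    if mode == 6:
        return abs(clip3(-k, k, d1) + clip3(-k, k, d2))
    raise ValueError("unknown mode")
```

Note that measures (2) and (3) differ when the two sample differences have opposite signs: for a = 10, b = 2, c = 5, measure (2) gives 8 while measure (3) gives 2.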
In the above embodiment, the sorting order can be ascending (i.e., C0 for the smallest one) or descending (i.e., C0 for the largest one). Therefore, for a given set of ALF coefficients (e.g. C0, C1, …, C11), multiple ALFs can be derived according to neighbouring difference reordering. In other words, this neighbouring difference reordering provides a new type of geometric transformation according to the present invention.
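The coefficient reordering described above can be sketched as follows, assuming one coefficient per symmetric pair (C0, C1, …); the function name reorder_coefficients is illustrative, not from a reference implementation.

```python
# Hypothetical sketch of neighbouring-difference reordering: the pair with
# the smallest (or largest) difference measure receives C0, the next one
# receives C1, and so on.

def reorder_coefficients(measures, coeffs, ascending=True):
    """measures[i]: difference measure for symmetric pair i.
    coeffs[j]: coefficient Cj.  Returns assigned[i], the coefficient
    applied to pair i after reordering."""
    order = sorted(range(len(measures)), key=lambda i: measures[i],
                   reverse=not ascending)
    assigned = [0] * len(measures)
    for rank, pair_idx in enumerate(order):
        assigned[pair_idx] = coeffs[rank]  # rank 0 gets C0, rank 1 gets C1, ...
    return assigned

# With three pairs and coefficients C0=10, C1=20, C2=30 in ascending order,
# pair 1 has the smallest measure and therefore receives C0.
assert reorder_coefficients([5, 1, 3], [10, 20, 30]) == [30, 10, 20]
```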
In one embodiment, only part of the coefficients in the filter shape is reordered or transformed. For example, only the centre 3x3 coefficients in the filter shape (e.g., C5, C6, C7, C11) are rotated accordingly. In another example, only the centre 5x5 coefficients in the filter shape (e.g., C5, C6, C7, C11, C1, C4, C10, C8) are transformed. In another example, only the coefficients located in the outside region are rotated (e.g., C0, C1, C3, C4, C8, C9).
In one embodiment, rotation by one pre-defined angle is supported in the geometric transform. For example, if the sum of vertical gradients is the largest among all directions, the filter coefficients are rotated by 90 degrees in the clockwise direction. If the sum of 45-degree gradients is the largest among all directions, the filter coefficients are rotated by 45 degrees in the clockwise direction, as shown in Fig. 6A and Fig. 6B.
If the sum of 135-degree gradients is the largest among all directions, the filter coefficients are rotated by 135 degrees in the clockwise direction. In another example, more directions are calculated according to horizontal and vertical gradients (e.g., 8 directions), and the coefficients are rotated accordingly. In one embodiment, the rotated filter shapes can be different from each other. In another embodiment, only a part of the coefficients is rotated in order to keep the same filter shape among different rotation cases. An example is shown in Fig. 7, where the filter is rotated by 90 degrees counter-clockwise except for the 4 positions at the top, bottom, leftmost and rightmost, while maintaining the same ALF footprint as the original one.
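A footprint-preserving rotation can be sketched as follows for a small plus-shaped (diamond) filter; the (dy, dx) offset convention and the function name rotate90_ccw are assumptions for illustration only.

```python
# Hypothetical sketch: coefficients keyed by their (dy, dx) offset from the
# centre sample.  With y pointing down, a 90-degree counter-clockwise
# rotation maps offset (dy, dx) to (-dx, dy).

def rotate90_ccw(coeff_map):
    return {(-dx, dy): c for (dy, dx), c in coeff_map.items()}

# A tiny 3x3 plus-shaped example: centre plus top/bottom/left/right taps.
coeffs = {(0, 0): 5, (-1, 0): 1, (1, 0): 2, (0, -1): 3, (0, 1): 4}
rotated = rotate90_ccw(coeffs)
assert set(rotated) == set(coeffs)  # the diamond footprint is preserved
assert rotated[(0, -1)] == 1        # the top tap moved to the left
```

Because a diamond is symmetric under quarter-turn rotation, the set of tap positions is unchanged even though each coefficient moves to a new position.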
Note that the 7x7 filter shape is used as an example to illustrate the geometric transformation according to embodiments of the present invention. However, the present invention is not limited to this specific filter shape. Instead, the present invention can be applied to any ALF filter shape.
Syntax design
As disclosed above, a new geometric transformation has been disclosed. Therefore, there are more choices for the geometric transformation. In one embodiment, an index to indicate which geometric transformation to use among multiple geometric transformations is signalled (at the encoder side) or parsed (at the decoder side) per APS. In another embodiment, each filter set has one index signalled or parsed to indicate which geometric transformation to use. In another embodiment, each class has one index signalled or parsed to indicate which geometric transformation to use.
In another embodiment, the index to indicate which geometric transformation to use among multiple geometric transformations can be inferred from classifier selection. For example, for gradient classifier, geometric transformation by gradient is always used, or for band classifier, geometric transformation by neighbouring reordering is always used.
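The inference rule in this embodiment can be sketched as a simple mapping from the selected classifier to the transformation; the string identifiers below are illustrative only.

```python
# Hypothetical sketch: the geometric transformation is inferred from the
# classifier selection instead of being signalled with an explicit index.

def infer_transform(classifier):
    if classifier == "gradient":
        # gradient classifier: geometric transformation by gradient
        return "transform_by_gradient"
    if classifier == "band":
        # band classifier: geometric transformation by neighbouring reordering
        return "transform_by_neighbouring_reordering"
    raise ValueError("unknown classifier")
```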
Any of the ALF methods described above can be implemented in encoders and/or decoders. For example, any of the proposed methods can be implemented in the in-loop filter module (e.g. ILPF 130 in Fig. 1A and Fig. 1B) of an encoder or a decoder. Alternatively, any of the proposed methods can be implemented as a circuit coupled to the inter coding module of an encoder and/or the motion compensation module or a merge candidate derivation module of the decoder. The ALF methods may also be implemented using executable software or firmware codes stored on a medium, such as a hard disk or flash memory, for a CPU (Central Processing Unit) or programmable devices (e.g. DSP (Digital Signal Processor) or FPGA (Field Programmable Gate Array)).
Fig. 8 illustrates a flowchart of an exemplary video coding system that reorders ALF coefficients based on neighbouring differences according to an embodiment of the present invention. The steps shown in the flowchart may be implemented as program codes executable on one or more processors (e.g., one or more CPUs) at the encoder side. The steps shown in the flowchart may also be implemented based on hardware, such as one or more electronic devices or processors arranged to perform the steps in the flowchart. According to this method, reconstructed pixels are received in step 810, wherein the reconstructed pixels comprise current reconstructed pixels in a current block. A current to-be-filtered sample among the reconstructed pixels in the current block and a set of filter samples surrounding the current to-be-filtered sample are determined in step 820. Difference measures are derived for at least a partial set of filter samples in step 830, wherein each of the difference measures is related to sample differences between a pair of respective filter samples and the current to-be-filtered sample, and the pair of respective filter samples are located symmetrically with respect to the current to-be-filtered sample. At least a partial set of ALF coefficients are assigned to at least the partial set of filter samples according to the difference measures in step 840. A filtered output sample is derived by applying an ALF with at least the partial set of ALF coefficients to the current to-be-filtered sample in step 850. The filtered output sample is provided in step 860. As shown in Fig. 1A and Fig. 1B, the ALF processed samples can be stored in the reference picture buffer to form prediction for subsequent video data. At a decoder side, the ALF processed samples can be readily provided to form decoded video.
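Steps 820 to 850 can be sketched end-to-end for a single sample as follows. This is a hypothetical sketch: the symmetric-difference filtering form, the fixed-point shift of 7, and the use of measure (2) for sorting are assumptions for illustration, not the normative ALF equations.

```python
# Hypothetical end-to-end sketch for one to-be-filtered sample:
# derive difference measures for the symmetric pairs (step 830), assign
# coefficients in ascending order of the measures (step 840), then apply
# a symmetric fixed-point ALF form (step 850).

def alf_filter_sample(center, pairs, coeffs, shift=7):
    """pairs[i] = (a, b): sample values at symmetric positions i- and i+.
    coeffs[j] = Cj.  Returns the filtered output sample."""
    # Measure (2): abs(a - c) + abs(b - c) for each symmetric pair.
    measures = [abs(a - center) + abs(b - center) for a, b in pairs]
    order = sorted(range(len(pairs)), key=lambda i: measures[i])
    acc = 0
    for rank, i in enumerate(order):
        a, b = pairs[i]
        acc += coeffs[rank] * ((a - center) + (b - center))
    # Fixed-point rounding offset and right shift (assumed precision).
    return center + ((acc + (1 << (shift - 1))) >> shift)
```

For example, with a single pair equal to the centre sample the filter is transparent: alf_filter_sample(100, [(100, 100)], [64]) returns 100.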
Fig. 9 illustrates a flowchart of an exemplary video coding system that uses an index to select a target geometric transformation from a set of geometric transformations according to an embodiment of the present invention. According to this method, reconstructed pixels are received in step 910, wherein the reconstructed pixels comprise current reconstructed pixels in a current block. A target geometric transformation is determined from a set of geometric transformations according to an index in step 920. A filtered block is derived by applying the target geometric transformation to the current reconstructed pixels in step 930. The filtered block is provided in step 940. As shown in Fig. 1A and Fig. 1B, the ALF processed samples can be stored in the reference picture buffer to form prediction for subsequent video data. At a decoder side, the ALF processed samples can be readily provided to form decoded video.
The flowcharts shown are intended to illustrate an example of video coding according to the present invention. A person skilled in the art may modify each step, re-arrange the steps, split a step, or combine steps to practice the present invention without departing from the spirit of the present invention. In the disclosure, specific syntax and semantics have been used to illustrate examples to implement embodiments of the present invention. A skilled person may practice the present invention by substituting the syntax and semantics with equivalent syntax and semantics without departing from the spirit of the present invention.
The above description is presented to enable a person of ordinary skill in the art to practice the present invention as provided in the context of a particular application and its requirement. Various modifications to the described embodiments will be apparent to those with skill in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed. In the above detailed description, various specific details are illustrated in order to provide a thorough understanding of the present invention. Nevertheless, it will be understood by those skilled in the art that the present invention may be practiced without such specific details.
Embodiments of the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be one or more circuits integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (18)

  1. A method for Adaptive Loop Filter (ALF) processing of reconstructed video, the method comprising:
    receiving reconstructed pixels, wherein the reconstructed pixels comprise current reconstructed pixels in a current block;
    determining a current to-be-filtered sample among the reconstructed pixels in the current block and a set of filter samples surrounding the current to-be-filtered sample;
    deriving difference measures for at least a partial set of filter samples, wherein each of the difference measures is related to sample differences between a pair of respective filter samples and the current to-be-filtered sample, and the pair of respective filter samples are located symmetrically with respect to the current to-be-filtered sample;
    assigning at least a partial set of ALF coefficients to at least the partial set of filter samples according to the difference measures;
    deriving a filtered output sample by applying an ALF with at least the partial set of ALF coefficients to the current to-be-filtered sample; and
    providing the filtered output sample.
  2. The method of Claim 1, wherein at least the partial set of ALF coefficients are assigned to at least the partial set of filter samples according to an ascending order of the difference measures.
  3. The method of Claim 1, wherein at least the partial set of ALF coefficients are assigned to at least the partial set of filter samples according to a descending order of the difference measures.
  4. The method of Claim 1, wherein said each of the difference measures comprises a first term related to a first sample difference between a first one of the pair of respective filter samples and the current to-be-filtered sample, and a second term related to a second sample difference between a second one of the pair of respective filter samples and the current to-be-filtered sample.
  5. The method of Claim 4, wherein said each of the difference measures corresponds to a sum of the first sample difference and the second sample difference.
  6. The method of Claim 4, wherein said each of the difference measures corresponds to a sum of an absolute value of the first sample difference and an absolute value of the second sample difference.
  7. The method of Claim 4, wherein said each of the difference measures corresponds to an absolute value of a sum of the first sample difference and the second sample difference.
  8. The method of Claim 4, wherein said each of the difference measures corresponds to a sum of a clipped first sample difference and a clipped second sample difference.
  9. The method of Claim 4, wherein said each of the difference measures corresponds to a sum of an absolute value of a clipped first sample difference and an absolute value of a clipped second sample difference.
  10. The method of Claim 4, wherein said each of the difference measures corresponds to an absolute value of a sum of a clipped first sample difference and a clipped second sample difference.
  11. The method of Claim 1, wherein said at least the partial set of filter samples is the same as the set of filter samples.
  12. The method of Claim 1, wherein said at least the partial set of filter samples is less than the set of filter samples.
  13. An apparatus for Adaptive Loop Filter (ALF) processing of reconstructed video, the apparatus comprising one or more electronic circuits or processors arranged to:
    receive reconstructed pixels, wherein the reconstructed pixels comprise current reconstructed pixels in a current block;
    determine a current to-be-filtered sample among the reconstructed pixels in the current block and a set of filter samples surrounding the current to-be-filtered sample;
    derive difference measures for at least a partial set of filter samples, wherein each of the difference measures is related to sample differences between a pair of respective filter samples and the current to-be-filtered sample, and the pair of respective filter samples are located symmetrically with respect to the current to-be-filtered sample;
    assign at least a partial set of ALF coefficients to at least the partial set of filter samples according to the difference measures;
    derive a filtered output sample by applying an ALF with at least the partial set of ALF coefficients to the current to-be-filtered sample; and
    provide the filtered output sample.
  14. A method for Adaptive Loop Filter (ALF) processing of reconstructed video, the method comprising:
    receiving reconstructed pixels, wherein the reconstructed pixels comprise current reconstructed pixels in a current block;
    determining a target geometric transformation from a set of geometric transformations according to an index;
    deriving a filtered block by applying the target geometric transformation to the current reconstructed pixels; and
    providing the filtered block.
  15. The method of Claim 14, wherein the index is signalled or parsed per APS (Adaptation Parameter Set) , filter set or class.
  16. The method of Claim 14, wherein the index is inferred based on a target ALF classifier selected.
  17. The method of Claim 16, wherein in response to the target ALF classifier selected being a gradient classifier, the target geometric transformation corresponds to a geometric transformation by gradient.
  18. The method of Claim 16, wherein in response to the target ALF classifier selected being a band classifier, the target geometric transformation corresponds to a geometric transformation by neighbouring reordering.
PCT/CN2023/103572 2022-07-20 2023-06-29 Method and apparatus for adaptive loop filter with geometric transform for video coding Ceased WO2024016983A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW112126396A TW202406337A (en) 2022-07-20 2023-07-14 Method and apparatus for adaptive loop filter processing of reconstructed video

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263368903P 2022-07-20 2022-07-20
US63/368,903 2022-07-20

Publications (1)

Publication Number Publication Date
WO2024016983A1

Family

ID=89617019

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/103572 Ceased WO2024016983A1 (en) 2022-07-20 2023-06-29 Method and apparatus for adaptive loop filter with geometric transform for video coding

Country Status (2)

Country Link
TW (1) TW202406337A (en)
WO (1) WO2024016983A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180041779A1 (en) * 2016-08-02 2018-02-08 Qualcomm Incorporated Geometry transformation-based adaptive loop filtering
US20200329239A1 (en) * 2019-04-11 2020-10-15 Mediatek Inc. Adaptive Loop Filter With Adaptive Parameter Set
US20220094919A1 (en) * 2019-01-25 2022-03-24 Mediatek Inc. Method and Apparatus for Non-Linear Adaptive Loop Filtering in Video Coding
US20220201292A1 (en) * 2020-12-23 2022-06-23 Qualcomm Incorporated Adaptive loop filter with fixed filters

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
N. HU (QUALCOMM), V. SEREGIN, M. KARCZEWICZ (QUALCOMM), W. YIN (BYTEDANCE), K. ZHANG (BYTEDANCE), L. ZHANG (BYTEDANCE): "EE2-5: Adaptive filter shape switch and using samples before deblocking filter for adaptive loop filter", 27. JVET MEETING; 20220713 - 20220722; TELECONFERENCE; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 6 July 2022 (2022-07-06), XP030302877 *

Also Published As

Publication number Publication date
TW202406337A (en) 2024-02-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 23842048; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 23842048; Country of ref document: EP; Kind code of ref document: A1)