
WO2020073990A1 - Video image component prediction method and apparatus, and computer storage medium - Google Patents

Video image component prediction method and apparatus, and computer storage medium

Info

Publication number
WO2020073990A1
WO2020073990A1 (PCT/CN2019/110633)
Authority
WO
WIPO (PCT)
Prior art keywords
image component
component
reference value
predicted
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2019/110633
Other languages
English (en)
French (fr)
Inventor
霍俊彦
万帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 ... using adaptive coding
    • H04N19/102 ... characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/107 Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/134 ... characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/169 ... characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 ... the unit being an image region, e.g. an object
    • H04N19/176 ... the unit being a block, e.g. a macroblock
    • H04N19/182 ... the unit being a pixel
    • H04N19/186 ... the unit being a colour or a chrominance component
    • H04N19/50 ... using predictive coding
    • H04N19/59 ... involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82 ... involving filtering within a prediction loop

Definitions

  • The embodiments of the present application relate to the technical field of video encoding and decoding, and in particular to a video image component prediction method and apparatus, and a computer storage medium.
  • H.265/High Efficiency Video Coding (HEVC) is the latest international video compression standard, and its compression performance exceeds that of the previous-generation video coding standard, H.264/Advanced Video Coding (AVC). Versatile Video Coding (VVC) is its successor under development.
  • the embodiments of the present application provide a video image component prediction method and device, and a computer storage medium, which can reduce the complexity of video component prediction, improve prediction efficiency, and thereby improve video coding and decoding efficiency.
  • An embodiment of the present application provides a video image component prediction method, including: acquiring a reference value set of the first image component of the current block; determining a plurality of first image component reference values from the reference value set; performing a first filtering process on the sample values of the pixels corresponding to the plurality of first image component reference values to obtain a plurality of filtered first image reference sample values; determining reference values of the image component to be predicted corresponding to the filtered sample values; determining the parameters of the component linear model accordingly; performing mapping processing on the reconstructed value of the first image component of the current block according to the component linear model to obtain a mapped value; and
  • determining the predicted value of the image component to be predicted of the current block according to the mapped value.
  • An embodiment of the present application provides a video image component prediction apparatus, including:
  • an acquiring part, configured to acquire the reference value set of the first image component of the current block;
  • a determining part, configured to determine a plurality of first image component reference values from the reference value set of the first image component;
  • a filtering part, configured to perform a first filtering process on the sample values of the pixels corresponding to the plurality of first image component reference values, respectively, to obtain a plurality of filtered first image reference sample values;
  • the determining part is further configured to determine reference values of the image component to be predicted corresponding to the plurality of filtered first image reference sample values, wherein the image component to be predicted is an image component different from the first image component; and to determine the parameters of the component linear model according to the plurality of filtered first image reference sample values and the reference values of the image component to be predicted, wherein the component linear model characterizes a linear mapping relationship from samples of the first image component to samples of the image component to be predicted;
  • the filtering part is further configured to perform mapping processing on the reconstructed value of the first image component of the current block according to the component linear model to obtain a mapped value;
  • a prediction part, configured to determine the predicted value of the image component to be predicted of the current block according to the mapped value.
  • An embodiment of the present application provides a video image component prediction apparatus, including:
  • a memory, used to store executable video image component prediction instructions; and
  • a processor, configured to implement the video image component prediction method provided by the embodiments of the present application when executing the executable instructions stored in the memory.
  • An embodiment of the present application provides a computer-readable storage medium storing executable video image component prediction instructions which, when executed by a processor, implement the video image component prediction method provided by the embodiments of the present application.
  • An embodiment of the present application provides a video image component prediction method.
  • The video image component prediction apparatus first selects a plurality of first image component reference values from the directly acquired reference value set of the first image component corresponding to the current block, performs filtering based on the pixel positions of the selected reference values to obtain a plurality of filtered first image reference sample values, and then finds the reference values of the image component to be predicted corresponding to these filtered sample values. From these pairs, the parameters of the component linear model are obtained, the component linear model is constructed, and the image component to be predicted is then predicted with this model.
  • Because a small number of reference values of the first image component are selected first, and filtering is performed only at the positions corresponding to the selected reference values before the component linear model is constructed, the workload of filtering all pixels corresponding to the current block is saved; that is, the number of filtering operations is reduced. This lowers the complexity of building the component linear model and therefore the complexity of video image component prediction, improving prediction efficiency and, in turn, video encoding and decoding efficiency.
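  The workflow above can be sketched in Python, taking luma-to-chroma prediction as the concrete case. This is a toy model: the even-spaced selection, the 3-tap averaging filter, and the least-squares fit are illustrative assumptions, not the normative design of the embodiments.

```python
def predict_chroma_from_luma(luma_refs, chroma_refs, luma_block, num_selected=4):
    """Illustrative sketch: select a few luma reference values, filter only
    those, fit the component linear model C = alpha * L + beta, then map the
    reconstructed luma of the current block to a chroma prediction."""
    # Step 1: select a small number of reference values (evenly spaced here).
    step = max(1, len(luma_refs) // num_selected)
    idx = list(range(0, len(luma_refs), step))[:num_selected]

    # Step 2: filter ONLY the selected luma reference positions, instead of
    # filtering every neighbouring pixel of the current block.
    def filt(i):
        lo, hi = max(i - 1, 0), min(i + 1, len(luma_refs) - 1)
        return (luma_refs[lo] + luma_refs[i] + luma_refs[hi]) / 3.0

    l_sel = [filt(i) for i in idx]
    c_sel = [chroma_refs[i] for i in idx]

    # Step 3: fit the component linear model parameters (least squares).
    n = len(l_sel)
    mean_l, mean_c = sum(l_sel) / n, sum(c_sel) / n
    denom = sum((l - mean_l) ** 2 for l in l_sel)
    num = sum((l - mean_l) * (c - mean_c) for l, c in zip(l_sel, c_sel))
    alpha = num / denom if denom else 0.0
    beta = mean_c - alpha * mean_l

    # Step 4: map the reconstructed first-component samples of the current
    # block to predicted values of the component to be predicted.
    return [[alpha * s + beta for s in row] for row in luma_block]
```

  The point of the sketch is the ordering: selection happens before filtering, so only `num_selected` positions are ever filtered, regardless of the block size.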
  • FIG. 1 is a schematic diagram of the relationship between a current block and neighboring reference pixels provided by an embodiment of the present application;
  • FIG. 2 is an architecture diagram of a video image component prediction system provided by an embodiment of the present application;
  • FIG. 3A is a schematic block diagram of the composition of a video encoding system provided by an embodiment of the present application;
  • FIG. 3B is a schematic block diagram of the composition of a video decoding system provided by an embodiment of the present application;
  • FIG. 4 is a first flowchart of a video image component prediction method according to an embodiment of the present application;
  • FIG. 5 is a second flowchart of a video image component prediction method according to an embodiment of the present application;
  • FIG. 6 is a third flowchart of a video image component prediction method according to an embodiment of the present application;
  • FIG. 7 is a schematic diagram of constructing a prediction model based on a maximum value and a minimum value provided by an embodiment of the present application;
  • FIG. 8 is a first schematic structural diagram of a video image component prediction apparatus according to an embodiment of the present application;
  • FIG. 9 is a second schematic structural diagram of a video image component prediction apparatus according to an embodiment of the present application.
  • The main function of predictive coding is to construct, in the video codec, a predicted value of the current block from previously reconstructed images in space or time, and to transmit only the difference between the original value and the predicted value, thereby reducing the amount of transmitted data.
  • The main function of intra prediction is to construct the predicted value of the current block using the adjacent pixel units in the row above and the column to the left of the current block. As shown in FIG. 1, each pixel unit of the current block 101 is predicted using the neighboring pixels that have already been reconstructed around the current block 101 (i.e., the pixel units in the adjacent row above 102 and in the left column 103).
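  The neighboring reference samples of FIG. 1 (the row above and the column to the left) can be gathered as in the following sketch; availability handling is simplified to the picture border, while a real codec also checks slice and tile boundaries.

```python
def gather_reference_pixels(frame, x0, y0, w, h):
    """Collect the already-reconstructed neighbours of the current block:
    the pixel units in the row above and in the column to the left.

    frame: 2-D list of reconstructed samples, indexed frame[y][x].
    (x0, y0): top-left corner of the current block; w, h: its size.
    Returns (top_row, left_column); either may be empty at a border.
    """
    top = [frame[y0 - 1][x] for x in range(x0, x0 + w)] if y0 > 0 else []
    left = [frame[y][x0 - 1] for y in range(y0, y0 + h)] if x0 > 0 else []
    return top, left
```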
  • Three image components are generally used to characterize a processing block: a luminance component, a blue chrominance component, and a red chrominance component.
  • The luminance component is generally represented by the symbol Y, the blue chrominance component by the symbol Cb, and the red chrominance component by the symbol Cr.
  • The sampling format commonly used for video images is the YCbCr format, which includes the 4:4:4, 4:2:2, and 4:2:0 formats.
  • When the video image adopts the YCbCr 4:2:0 format, if the luminance component of the video image is a 2N×2N processing block, the corresponding blue chrominance component or red chrominance component is an N×N processing block, where 2N is the side length of the luminance processing block.
  • The following takes the 4:2:0 format as an example for description, but the technical solutions of the embodiments of the present application are also applicable to other sampling formats.
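  The size relationship just described (a 2N×2N luma block paired with N×N chroma blocks in 4:2:0) can be expressed as a small helper; this is an illustrative function, not codec syntax.

```python
def chroma_block_size(luma_w, luma_h, sampling="4:2:0"):
    """Return (width, height) of the Cb/Cr block for a given luma block size
    under common YCbCr sampling formats."""
    if sampling == "4:4:4":
        return luma_w, luma_h            # chroma at full luma resolution
    if sampling == "4:2:2":
        return luma_w // 2, luma_h       # horizontally halved
    if sampling == "4:2:0":
        return luma_w // 2, luma_h // 2  # halved in both directions
    raise ValueError("unknown sampling format: " + sampling)
```

  For example, a 16×16 (2N×2N with N=8) luma block in 4:2:0 corresponds to 8×8 (N×N) chroma blocks.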
  • FIG. 2 shows the composition of a video codec network architecture according to an embodiment of the present application.
  • The network architecture includes one or more electronic devices 11 to 1N and a communication network 01, where the electronic devices 11 to 1N can exchange video through the communication network 01.
  • In the implementation process, the electronic device may be any of various types of devices with video encoding and decoding functions, for example a mobile phone, tablet computer, personal computer, personal digital assistant, navigator, digital phone, video phone, television set, sensor device, or server; this is not limited in the embodiments of the present application.
  • The intra prediction apparatus in the embodiments of the present application may be the above-mentioned electronic device.
  • The electronic device in the embodiments of the present application has a video codec function and generally includes a video encoder and a video decoder.
  • As shown in FIG. 3A, the composition of the video encoder 21 includes: a transform and quantization unit 211, an intra estimation unit 212, an intra prediction unit 213, a motion compensation unit 214, a motion estimation unit 215, an inverse transform and inverse quantization unit 216, a filter control analysis unit 217, a filtering unit 218, an entropy encoding unit 219, a decoded image buffer unit 210, and the like.
  • The filtering unit 218 can implement deblocking filtering and sample adaptive offset (SAO) filtering, and the entropy encoding unit 219 can implement header information coding and context-based adaptive binary arithmetic coding (CABAC).
  • A coding tree unit (CTU) of the current video frame can be divided to obtain a block to be encoded, which then undergoes intra-frame or inter-frame prediction.
  • The residual information is processed by the transform and quantization unit 211, which transforms the residual information from the pixel domain to the transform domain and quantizes the resulting transform coefficients to further reduce the bit rate. The intra estimation unit 212 and the intra prediction unit 213 are used to perform intra prediction on the block to be encoded, for example, to determine the intra prediction mode used to encode it.
  • The motion compensation unit 214 and the motion estimation unit 215 are used to perform inter prediction coding of the block to be encoded with respect to one or more blocks in one or more reference frames, providing temporal prediction information; the motion estimation unit 215 estimates the motion vector describing the motion of the block to be encoded, and the motion compensation unit 214 then performs motion compensation based on that motion vector.
  • The reconstructed residual block has its blocking artifacts removed by the filter control analysis unit 217 and the filtering unit 218 and is then added to a frame in the decoded image buffer unit 210.
  • The entropy encoding unit 219 may be used to encode information indicating the determined intra prediction mode and to output the code stream of the video data; the decoded image buffer unit 210 stores the reconstructed video coding blocks for prediction reference. As video encoding proceeds, new reconstructed video coding blocks are continuously generated and stored in the decoded image buffer unit 210.
  • The composition of the video decoder 22 corresponding to the video encoder 21 is shown in FIG. 3B and includes: an entropy decoding unit 221, an inverse transform and inverse quantization unit 222, an intra prediction unit 223, a motion compensation unit 224, a filtering unit 225, a decoded image buffer unit 226, and the like. The entropy decoding unit 221 can implement header information decoding and CABAC decoding, and the filtering unit 225 can implement deblocking filtering and SAO filtering.
  • After the code stream of the video signal is output by the encoder, it is input to the video decoder 22 and first passes through the entropy decoding unit 221 to obtain the decoded transform coefficients;
  • the inverse transform and inverse quantization unit 222 processes the coefficients to generate a residual block in the pixel domain;
  • the intra prediction unit 223 may be used to generate the prediction data of the current decoding block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture;
  • the motion compensation unit 224 determines the prediction information of the current decoding block by parsing the motion vector and other associated syntax elements, and uses this prediction information to generate the predictive block of the current decoding block being decoded;
  • the residual block from the inverse transform and inverse quantization unit 222 is summed with the corresponding predictive block generated by the intra prediction unit 223 or the motion compensation unit 224 to form a decoded video block;
  • the decoded video block then passes through the filtering unit 225 to remove blocking artifacts.
  • The video image component prediction method provided in the embodiments of the present application is a prediction in the intra prediction process of predictive coding and can be applied to the video encoder 21 or the video decoder 22; the embodiments of the present application do not specifically limit this.
  • Cross-component linear model (CCLM) prediction implements prediction from the luma component to the blue chroma component, from the luma component to the red chroma component, and between the blue chroma component and the red chroma component.
  • Embodiments of the present application provide a video image component prediction method applied to a video image component prediction apparatus.
  • The functions implemented by the method can be realized by a processor in the video image component prediction apparatus calling program code, which can be stored in a computer storage medium.
  • The video image component prediction apparatus thus includes at least a processor and a storage medium.
  • FIG. 4 is a schematic flowchart of an implementation of a video image component prediction method provided by an embodiment of the present application. As shown in FIG. 4, the method includes:
  • S101: Acquire a reference value set of the first image component of the current block;
  • S102: Determine a plurality of first image component reference values from the reference value set of the first image component;
  • S103: Perform a first filtering process on the sample values of the pixels corresponding to the plurality of first image component reference values, respectively, to obtain a plurality of filtered first image reference sample values;
  • S104: Determine reference values of the image component to be predicted corresponding to the plurality of filtered first image reference sample values, where the image component to be predicted is an image component different from the first image component;
  • S105: Determine the parameters of the component linear model according to the plurality of filtered first image reference sample values and the reference values of the image component to be predicted, where the component linear model characterizes a linear mapping relationship from samples of the first image component to samples of the image component to be predicted;
  • S106: Map the reconstructed value of the first image component of the current block according to the component linear model, and determine the predicted value of the image component to be predicted of the current block from the mapped value.
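  One common way to obtain the model parameters of S105, matching the maximum/minimum construction illustrated in FIG. 7, is to fit a line through the extreme filtered first-component reference samples. The sketch below uses floating point for clarity; a real codec would use fixed-point arithmetic and more robust sample selection.

```python
def derive_model_two_point(first_samples, predicted_samples):
    """Derive (alpha, beta) of the component linear model
    C = alpha * L + beta from the maximum and minimum filtered
    first-component reference samples and their co-located values
    of the component to be predicted."""
    i_max = max(range(len(first_samples)), key=lambda i: first_samples[i])
    i_min = min(range(len(first_samples)), key=lambda i: first_samples[i])
    l_max, l_min = first_samples[i_max], first_samples[i_min]
    c_max, c_min = predicted_samples[i_max], predicted_samples[i_min]
    if l_max == l_min:          # degenerate: flat neighbourhood, no slope
        return 0.0, c_min
    alpha = (c_max - c_min) / (l_max - l_min)
    beta = c_min - alpha * l_min
    return alpha, beta
```

  With the parameters in hand, S106 is a per-sample application of `alpha * reconstructed + beta` over the current block.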
  • The current block is the encoding block or decoding block that is currently to undergo image component prediction.
  • The video image component prediction apparatus acquires the reference value set of the first image component of the current block, where the reference value set includes one or more first image component reference values.
  • The reference values of the current block may be obtained from a reference block, which may be an adjacent block or a non-adjacent block of the current block; this is not limited in the embodiments of the present application.
  • The video image component prediction apparatus determines one or more reference pixels located outside the current block and takes them as the one or more first image component reference values.
  • The adjacent processing block corresponding to the current block is a processing block adjacent to one or more edges of the current block; the adjacent edge may be the upper side of the current block, the left side of the current block, or both the upper side and the left side, which is not specifically limited in the embodiments of the present application.
  • The video image component prediction apparatus determines pixels adjacent to the current block as the one or more reference pixels.
  • The one or more reference pixels may be adjacent or non-adjacent pixels; the embodiments of the present application are not limited in this respect, and adjacent pixels are used here as an example.
  • The pixels adjacent to one or more sides of the current block are regarded as the one or more adjacent reference pixels corresponding to the current block, and each adjacent reference pixel corresponds to three image component reference values (i.e., a first image component reference value, a second image component reference value, and a third image component reference value). Therefore, the video image component prediction apparatus may take the reference value of the first image component at each of the one or more adjacent reference pixels corresponding to the current block as the reference value set of the first image component.
  • In this way, one or more first image component reference values are obtained; that is, the one or more first image component reference values represent the reference values of the first image component at one or more adjacent pixels in the adjacent reference block corresponding to the current block.
  • in the embodiment of the present application, the first image component is used to predict other image components.
  • the combination of the first image component and the image component to be predicted includes at least one of the following:
  • the first image component is a luminance component and the image component to be predicted is a first chrominance component or a second chrominance component; or,
  • the first image component is a first chrominance component and the image component to be predicted is a luminance component or a second chrominance component; or,
  • the first image component is a second chrominance component and the image component to be predicted is a luminance component or a first chrominance component; or,
  • the first image component is a first color component and the image component to be predicted is a second color component or a third color component; or,
  • the first image component is a second color component and the image component to be predicted is a first color component or a third color component; or,
  • the first image component is a third color component and the image component to be predicted is a first color component or a second color component.
  • the first color component is a red component
  • the second color component is a green component
  • the third color component is a blue component
  • the first chroma component may be a blue chroma component and the second chroma component may be a red chroma component; alternatively, the first chroma component may be a red chroma component and the second chroma component may be a blue chroma component.
  • the first chroma component and the second chroma component need only represent the blue chroma component and the red chroma component between them; in the following, the example where the first chroma component is the blue chroma component and the second chroma component is the red chroma component is used.
  • when the first image component is the luminance component and the image component to be predicted is the first chroma component, the video image component prediction apparatus may use the luminance component to predict the blue chroma component; when the first image component is the luminance component and the image component to be predicted is the second chroma component, it may use the luminance component to predict the red chroma component; when the first image component is the first chroma component and the image component to be predicted is the second chroma component, it may use the blue chroma component to predict the red chroma component; and when the first image component is the second chroma component and the image component to be predicted is the first chroma component, it may use the red chroma component to predict the blue chroma component.
  • the video image component prediction device may determine a plurality of first image component reference values from one or more first image component reference values.
  • the video image component prediction apparatus may compare the one or more first image component reference values contained in the reference value set of the first image component to determine the maximum first image component reference value and the minimum first image component reference value.
  • the video image component prediction apparatus may determine the maximum value and the minimum value among the one or more first image component reference values; that is, it may determine, from the one or more first image component reference values, the reference values that characterize or represent the maximum and minimum of the first image component reference values.
  • the video image component prediction apparatus determines the maximum first image component reference value and the minimum first image component reference value from the reference value set of the first image component.
  • the video image component prediction apparatus may obtain the maximum first image component reference value and the minimum first image component reference value in various ways.
  • Method 1: compare the one or more first image component reference values in sequence to determine the largest one as the maximum first image component reference value and the smallest one as the minimum first image component reference value.
  • Method 2: select at least two first image component reference values at preset positions from the one or more first image component reference values; divide the at least two first image component reference values by numerical value into a maximum image component reference value set and a minimum image component reference value set; and obtain the maximum first image component reference value and the minimum first image component reference value based on the maximum image component reference value set and the minimum image component reference value set.
  • in Method 1, the video image component prediction apparatus may select the largest of the one or more first image component reference values as the maximum first image component reference value, and the smallest as the minimum first image component reference value.
  • the determination may be made by comparing one by one in sequence, or after sorting; the specific determination method is not limited in the embodiments of the present application.
  • in Method 2, the video image component prediction apparatus may select, from the pixel positions corresponding to the one or more first image component reference values, the several first image component reference values corresponding to preset positions (preset pixel point positions) as the at least two first image component reference values; based on these, it divides out the largest data set (the maximum image component reference value set) and the smallest data set (the minimum image component reference value set), and determines the maximum first image component reference value and the minimum first image component reference value from the largest and smallest data sets.
  • the maximum first image component reference value and the minimum first image component reference value may be determined from the largest and smallest data sets by averaging: averaging the largest data set yields the maximum first image component reference value, and averaging the smallest data set yields the minimum first image component reference value; the maximum and minimum values may also be determined in other ways, which is not limited in this embodiment of the present application.
  • the numbers of values in the largest data set and the smallest data set are integers greater than or equal to 1; the two sets may contain the same number of values or different numbers, which is not limited in the embodiments of the present application.
  • the video image component prediction apparatus may also directly select, among the at least two first image component reference values corresponding to the preset positions, the maximum value as the maximum first image component reference value and the minimum value as the minimum first image component reference value.
  • exemplarily, the video image component prediction apparatus divides the M largest of the at least two first sub-image component reference values (M may be a value greater than 4, or unrestricted) into the maximum image component reference value set, and divides the remaining values into the minimum image component reference value set; finally, mean value processing is performed on the maximum image component reference value set to obtain the maximum first image component reference value, and mean value processing is performed on the minimum image component reference value set to obtain the minimum first image component reference value.
  • in other words, the maximum first image component reference value and the minimum first image component reference value may be determined directly by numerical comparison; or the first image component reference values at preset positions that can represent the validity of the reference values (the at least two first sub-image component reference values) may be selected first, a set with the larger values and a set with the smaller values divided from these valid reference values, and the maximum first image component reference value determined from the larger set and the minimum first image component reference value from the smaller set; or the maximum and minimum may be determined by numerical comparison directly within the set of valid first image component reference values corresponding to the preset positions.
  • the video image component prediction device does not limit the determination method of the maximum first image component reference value and the minimum first image component reference value.
  • the video image component prediction apparatus may also divide the one or more first image component reference values into three or even four sets according to size, process each set to obtain a representative parameter, and then select the maximum and minimum representative parameters as the maximum first image component reference value and the minimum first image component reference value, respectively.
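As an illustration of Method 2 above, the following sketch selects reference values at a few preset positions, splits them by numerical value into a maximum set and a minimum set, and averages each set. The choice of 4 evenly spaced positions and a max-set size of 2 is an assumption for illustration, not a mandated configuration.

```python
# Illustrative sketch of "Method 2": pick reference values at preset
# positions, split them by value into a max set and a min set, and
# perform mean value processing on each set. num_positions and m are
# illustrative assumptions, not values fixed by the text.

def max_min_by_sets(ref_values, num_positions=4, m=2):
    """Return (max_ref, min_ref) from first-image-component reference values."""
    n = len(ref_values)
    # Preset positions: evenly spaced samples along the reference row/column.
    step = max(n // num_positions, 1)
    picked = [ref_values[i] for i in range(0, n, step)][:num_positions]
    picked.sort()
    # The M largest values form the maximum set; the rest form the minimum set.
    min_set = picked[:-m] if len(picked) > m else picked[:1]
    max_set = picked[-m:]
    # Mean value processing of each set yields the two representative values.
    max_ref = sum(max_set) / len(max_set)
    min_ref = sum(min_set) / len(min_set)
    return max_ref, min_ref

print(max_min_by_sets([60, 200, 90, 180, 75, 190, 66, 170]))
```

Averaging two values per set (as sketched) tends to be more robust to outliers than taking a single extreme sample, which is why the text allows set sizes greater than one.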
  • the preset positions may be positions that represent the validity of the first image component reference values, and the number of preset positions is not limited; for example, it may be 4, 6, etc., or it may be all positions of the adjacent pixels, which is not limited in the embodiments of the present application.
  • the preset positions may be selected on both sides according to a sampling frequency based on the center of the row or column; they may also be the positions remaining in the row or column after edge points are removed, which is not limited in this embodiment of the present application.
  • the allocation of the preset positions between rows and columns may be even, or may follow a preset manner, which is not limited in the embodiments of the present application. For example, when the number of preset positions is 4 and the adjacent row and adjacent column are the positions corresponding to the one or more first image component reference values, 2 reference values may be selected from the adjacent row and 2 from the adjacent column; or 1 may be selected from the adjacent row and 3 from the adjacent column, which is not limited in the embodiments of the present application.
  • in other words, the video image component prediction apparatus may directly take, from the one or more first image component reference values, the maximum value as the maximum first image component reference value and the minimum value as the minimum first image component reference value; alternatively, after determining a plurality of reference values at preset positions among the one or more first image component reference values, it obtains the maximum first image component reference value characterizing the largest value and the minimum first image component reference value characterizing the smallest value.
  • the video image component prediction device performs a first filtering process on the pixel samples corresponding to the determined multiple first image component reference values, respectively, to obtain multiple filtered first image reference samples.
  • the multiple filtered first image reference sample values may be the filtered maximum first image component reference value and the filtered minimum first image component reference value, or may include other reference sample values, which is not limited in this embodiment of the present application.
  • the video image component prediction apparatus performs filtering (that is, the first filtering process) on the pixel positions corresponding to the determined first image component reference values (that is, the sample values of the corresponding pixels), so that the corresponding filtered first image reference samples can be obtained, and the component linear model can subsequently be built based on the multiple filtered first image reference samples.
  • the video image component prediction apparatus performs the first filtering process on the sample values of the pixels corresponding to the maximum first image component reference value and the minimum first image component reference value, respectively, to obtain the filtered maximum first image component reference value and the filtered minimum first image component reference value.
  • that is, the pixel positions used to determine the maximum first image component reference value and the minimum first image component reference value (i.e., the sample values of the corresponding pixels) are filtered (i.e., the first filtering process) to obtain the filtered maximum first image component reference value and the filtered minimum first image component reference value (i.e., the multiple filtered first image reference samples).
  • the filtering method may be up-sampling, down-sampling, or low-pass filtering, which is not limited in the embodiments of the present application.
  • the down-sampling method may include mean, interpolation, or median filtering, which is not limited in this embodiment.
  • the first filtering process may be down-sampling filtering or low-pass filtering.
  • the video image component prediction apparatus performs down-sampling filtering on the pixel positions used to determine the maximum first image component reference value and the minimum first image component reference value, so that the corresponding filtered maximum first image component reference value and filtered minimum first image component reference value can be obtained.
  • the video component prediction apparatus computes the average of the first image component over the area composed of the position corresponding to the maximum first image component reference value and the positions of its neighboring pixels, fusing the pixels of this area into one pixel; the average result is the first image component reference value of the fused pixel, that is, the filtered maximum first image component reference value. Similarly, the video component prediction apparatus computes the average of the first image component over the area formed by the position corresponding to the minimum first image component reference value and the positions of its neighboring pixels, and fuses the pixels of this area into one pixel; the average result is the first image component reference value of the fused pixel, that is, the filtered minimum first image component reference value.
  • the down-sampling of the video image component prediction apparatus is implemented by a filter; the specific range of neighboring pixel positions adjacent to the position corresponding to the maximum (or minimum) first image component reference value may be determined by the filter type, which is not limited in the embodiments of the present application.
  • the filter type may be a 6-tap filter or a 4-tap filter, which is not limited in the embodiment of the present application.
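The neighborhood fusion described above can be sketched as a down-sampling filter. The 6-tap kernel below (weights [1, 2, 1; 1, 2, 1] / 8) is one common choice for 4:2:0 luma down-sampling; the text leaves the concrete filter type open, so the kernel, the clamping at picture edges, and the `luma` array layout are illustrative assumptions.

```python
# Sketch of fusing a pixel with its neighbors via a down-sampling filter.
# The 6-tap kernel is one common choice; the patent text does not mandate
# a specific filter, so treat this as an illustration only.

def downsample_6tap(luma, row, col):
    """Fuse the 2x3 neighborhood around (row, col) into one filtered sample."""
    taps = [(0, -1, 1), (0, 0, 2), (0, 1, 1),
            (1, -1, 1), (1, 0, 2), (1, 1, 1)]  # (dr, dc, weight); weights sum to 8
    acc = 0
    for dr, dc, w in taps:
        r = min(max(row + dr, 0), len(luma) - 1)      # clamp at picture edges
        c = min(max(col + dc, 0), len(luma[0]) - 1)
        acc += w * luma[r][c]
    return (acc + 4) >> 3  # rounded integer division by 8

luma = [[100, 104, 108, 112],
        [102, 106, 110, 114]]
print(downsample_6tap(luma, 0, 1))
```

The number of taps (here 6) is exactly what determines the "range of neighboring pixel positions" mentioned above; a 4-tap filter would fuse a smaller area.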
  • the video image component prediction apparatus determines the reference values of the image component to be predicted corresponding to the multiple filtered first image reference samples, where the image component to be predicted is an image component different from the first image component (for example, the second image component or the third image component); then, the parameters of the component linear model are determined based on the multiple filtered first image reference samples and the reference values of the image component to be predicted.
  • the component linear model represents a linear mapping relationship, for example a linear function, that maps the sample values of the first image component to the sample values of the image component to be predicted.
  • the video image component prediction apparatus determines the maximum image component reference value to be predicted corresponding to the filtered maximum first image component reference value, and the minimum image component reference value to be predicted corresponding to the filtered minimum first image component reference value.
  • the video image component prediction apparatus may adopt the construction method of the maximum value and the minimum value and derive the model parameters (that is, the parameters of the component linear model) according to the principle of "two points determine one line", thereby constructing a component linear model, that is, a simplified cross-component linear prediction model (Cross-component Linear Model Prediction, CCLM).
  • the video image component prediction apparatus performs down-sampling (that is, filtering) to achieve alignment with the positions of the image component to be predicted, so that the reference values of the image component to be predicted corresponding to the filtered first image component reference samples can be determined; for example, the maximum image component reference value to be predicted corresponding to the filtered maximum first image component reference value and the minimum image component reference value to be predicted corresponding to the filtered minimum first image component reference value are determined.
  • the video image component prediction apparatus has thus determined the two points (filtered maximum first image component reference value, maximum image component reference value to be predicted) and (filtered minimum first image component reference value, minimum image component reference value to be predicted), so that the model parameters can be derived according to the principle of "two points determine one line", and a component linear model is then constructed.
  • the video image component prediction apparatus determines the parameters of the component linear model based on the filtered maximum first image component reference value, the maximum image component reference value to be predicted, the filtered minimum first image component reference value, and the minimum image component reference value to be predicted, where the component linear model represents a linear mapping relationship that maps the sample values of the first image component to the sample values of the image component to be predicted.
  • an implementation of determining the parameters of the component linear model based on these four values may be as follows.
  • the parameters of the component linear model further include a multiplicative factor and an additive offset.
  • the video image component prediction apparatus may calculate the first difference between the maximum image component reference value to be predicted and the minimum image component reference value to be predicted; calculate the second difference between the maximum first image component reference value and the minimum first image component reference value; set the multiplicative factor to the ratio of the first difference to the second difference; calculate the first product of the maximum first image component reference value and the multiplicative factor, and set the additive offset to the difference between the maximum image component reference value to be predicted and the first product; or, calculate the second product of the minimum first image component reference value and the multiplicative factor, and set the additive offset to the difference between the minimum image component reference value to be predicted and the second product.
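A minimal sketch of this parameter derivation, using floating-point arithmetic for clarity (a real codec would use fixed-point math); the function name and the `anchor` switch between the two equivalent offset formulas are assumptions for illustration.

```python
# Sketch of deriving the multiplicative factor and additive offset from
# the two (first-component, to-be-predicted) reference points described
# above. The additive offset may be anchored at either the max point or
# the min point; the two choices are mathematically equivalent.

def derive_model_params(l_max, l_min, c_max, c_min, anchor="max"):
    """Return (alpha, beta) of the linear model C = alpha * L + beta."""
    first_diff = c_max - c_min          # difference of to-be-predicted refs
    second_diff = l_max - l_min         # difference of first-component refs
    alpha = first_diff / second_diff if second_diff != 0 else 0.0
    if anchor == "max":
        beta = c_max - alpha * l_max    # offset from the max point
    else:
        beta = c_min - alpha * l_min    # offset from the min point
    return alpha, beta

alpha, beta = derive_model_params(l_max=200.0, l_min=100.0,
                                  c_max=150.0, c_min=100.0)
print(alpha, beta)
```

Note the guard for `second_diff == 0`: when the two first-component reference values coincide, the two points no longer determine a line, and a fallback (here, a flat model) is needed.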
  • the video image component prediction apparatus may predict the image component to be predicted based on the first image component and the component linear model; the image component to be predicted in the embodiment of the present application may be a chroma component.
  • the component linear model can be shown in formula (1), as follows:

        C = α · Y + β        (1)

  • where Y represents the (down-sampled) reconstruction value of the first image component corresponding to a certain pixel in the current block, C represents the predicted value of the second image component corresponding to that pixel, and α and β are the model parameters of the above component linear model.
  • the video image component prediction apparatus may first select the maximum and minimum first image component reference values from the directly acquired one or more first image component reference values corresponding to the current block, down-sample only the positions corresponding to the maximum first image component reference value and the minimum first image component reference value, and then construct the component linear model. This saves the workload of down-sampling all pixels corresponding to the current block, that is, it reduces the filtering operations, lowers the complexity of constructing the component linear model and therefore of video component prediction, improves the prediction efficiency, and improves the video codec efficiency.
  • the video image component prediction apparatus may directly use the component linear model to predict the video component of the current block, and thereby obtain the predicted value of the image component to be predicted.
  • the video image component prediction apparatus may perform a mapping process on the reconstruction value of the first image component of the current block according to the component linear model to obtain a mapping value, and then determine the prediction value of the image component to be predicted of the current block according to the mapping value.
  • the video image component prediction apparatus performs a second filtering process on the reconstruction value of the first image component to obtain a second filtered value of the reconstruction value of the first image component; the second filtered value is then mapped according to the component linear model to obtain the mapping value.
  • the video image component prediction device sets the mapping value to the prediction value of the image component to be predicted of the current block.
  • the second filtering process may be down-sampling filtering or low-pass filtering.
  • the video image component prediction apparatus may also perform a third filtering process on the mapping value to obtain a third filtered value of the mapping value, and set the third filtered value to the predicted value of the image component to be predicted of the current block.
  • the third filtering process may be low-pass filtering.
  • the predicted value represents the predicted value of the second image component or the predicted value of the third image component corresponding to one or more pixels of the current block.
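A sketch of the prediction step described above: each (down-sampled) first image component reconstruction value is mapped through the linear model of formula (1) and clipped to the valid sample range. The 8-bit range and the rounding convention are assumptions for illustration.

```python
# Sketch of applying the component linear model to the current block:
# map each first-image-component reconstruction value Y through
# C = alpha * Y + beta and clip to the valid sample range. The bit
# depth (8) is an illustrative assumption.

def predict_component(recon_first, alpha, beta, bit_depth=8):
    """Map first-image-component reconstruction values to predicted values."""
    hi = (1 << bit_depth) - 1
    out = []
    for y in recon_first:
        c = alpha * y + beta                        # formula (1)
        out.append(min(max(int(round(c)), 0), hi))  # clip to [0, 2^bd - 1]
    return out

print(predict_component([100, 150, 200], alpha=0.5, beta=50.0))
```

The clipping matters because a model fitted from only two reference points can extrapolate outside the legal sample range for pixels whose luma lies beyond the two reference values.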
  • an embodiment of the present invention further provides a video image component prediction method, including:
  • the component linear model represents a linear mapping relationship that maps the samples of the first image component to the samples of the image component to be predicted;
  • the reconstruction value of the first image component of the current block is obtained by filtering the first image component of the current block; the predicted value of the image component to be predicted of the current block is then obtained from the component linear model and the first image component reconstruction value.
  • the minimum unit of prediction for the current block is the pixel, so the first image component reconstruction value corresponding to each pixel of the current block is required in order to predict the predicted value of the image component to be predicted corresponding to that pixel.
  • the video image component prediction apparatus first performs the first image component filtering (e.g., down-sampling) on the current block to obtain the reconstruction value of the first image component corresponding to the current block, specifically the first image component reconstruction value of each pixel corresponding to the current block.
  • the first image component reconstruction value represents the reconstruction value of the first image component corresponding to one or more pixels of the current block.
  • the video image component prediction apparatus may perform a mapping process on the reconstruction value of the first image component of the current block based on the component linear model to obtain the mapping value, and obtain the predicted value of the image component to be predicted of the current block according to the mapping value.
  • S204 may include S2041-S2042, as follows:
  • in constructing the component linear model based on the filtered maximum image component reference value and the filtered minimum image component reference value, the video image component prediction apparatus relies on the principle of "two points determine one line": with the first image component as the abscissa and the image component to be predicted as the ordinate, the abscissa values of the two points are known, and the corresponding ordinate values need to be determined; the two points then determine a linear model, namely the component linear model.
  • the video image component prediction apparatus converts the sampling point position of the first image component reference value corresponding to the maximum first image component reference value into a first sampling point position, and sets the maximum image component reference value to be predicted to the reference value at the first sampling point position among the image component reference values to be predicted; similarly, it converts the sampling point position of the first image component reference value corresponding to the minimum first image component reference value into a second sampling point position, and sets the minimum image component reference value to be predicted to the reference value at the second sampling point position among the image component reference values to be predicted.
  • here, an adjacent pixel point is taken as an example of the reference pixel point for description.
  • based on the description of the neighboring blocks, the video image component prediction apparatus may obtain one or more reference values of the image component to be predicted corresponding to the current block, where each such reference value is the reference value of the image component to be predicted at one of the one or more adjacent reference pixels corresponding to the current block; the video image component prediction apparatus thus obtains one or more reference values of the image component to be predicted.
  • the video image component prediction apparatus finds, among the pixels corresponding to the one or more reference values of the image component to be predicted, the first adjacent reference pixel at which the filtered maximum first image component reference value is located, and takes the reference value of the image component to be predicted corresponding to that pixel as the maximum image component reference value to be predicted; that is, the maximum image component reference value to be predicted corresponding to the filtered maximum first image component reference value is determined. Likewise, it finds the second adjacent reference pixel at which the filtered minimum first image component reference value is located, and takes the reference value of the image component to be predicted corresponding to that pixel as the minimum image component reference value to be predicted; that is, the minimum image component reference value to be predicted corresponding to the filtered minimum first image component reference value is determined.
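The pairing described above can be sketched as follows: locate the reference pixel holding the maximum / minimum first image component value and read the to-be-predicted component at the same position. Representing the co-located references as two parallel lists indexed by reference-pixel position is an assumption for illustration.

```python
# Sketch of pairing the max/min first-image-component reference values
# with the to-be-predicted component values at the same reference pixel
# positions, producing the two points used to fit the linear model.

def pair_refs(first_refs, to_predict_refs):
    """Return ((L_max, C_max), (L_min, C_min)) from co-located reference lists."""
    i_max = max(range(len(first_refs)), key=lambda i: first_refs[i])
    i_min = min(range(len(first_refs)), key=lambda i: first_refs[i])
    return ((first_refs[i_max], to_predict_refs[i_max]),
            (first_refs[i_min], to_predict_refs[i_min]))

print(pair_refs([120, 90, 200, 60], [80, 70, 140, 50]))
```

The key invariant is that both components are read at the same reference pixel index, so the two resulting points lie on (or near) the cross-component line being fitted.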
  • the video image component prediction apparatus may also first filter the positions of the adjacent pixels to obtain one or more reference values of the image component to be predicted at the filtered pixel positions; then, among the filtered pixel positions, it finds the first adjacent reference pixel at which the filtered maximum first image component reference value is located and takes its corresponding reference value of the image component to be predicted as the maximum image component reference value to be predicted, and finds the second adjacent reference pixel at which the filtered minimum first image component reference value is located and takes its corresponding reference value of the image component to be predicted as the minimum image component reference value to be predicted.
  • the video image component prediction apparatus may also first filter the positions of adjacent pixels to filter the image component to be predicted, such as the chroma image component, which is not limited in this embodiment of the present application. That is, in the embodiment of the present application, the video image component prediction apparatus may perform a fourth filtering process on the reference value of the image component to be predicted to obtain the reconstruction value of the image component to be predicted.
  • the fourth filtering process may be low-pass filtering.
  • the process by which the video image component prediction apparatus constructs the component linear model is: constructing a first sub-component linear model using the filtered maximum first image component reference value, the maximum to-be-predicted image component reference value and a preset initial linear model; constructing a second sub-component linear model using the filtered minimum first image component reference value, the minimum to-be-predicted image component reference value and the preset initial linear model; obtaining the model parameters based on the first sub-component linear model and the second sub-component linear model; and constructing the component linear model using the model parameters and the preset initial linear model.
  • the preset initial linear model is an initial model with unknown model parameters.
  • the preset initial linear model may take the form of formula (1), but with α and β unknown; the first sub-component linear model and the second sub-component linear model are used to construct a system of two equations in two unknowns, from which the model parameters α and β are solved, and substituting α and β into formula (1) yields the linear mapping relationship model between the first image component and the image component to be predicted.
  • L max and L min represent the maximum and minimum values searched from the first image component reference values corresponding to the non-down-sampled left and/or upper edges, and C max and C min represent the to-be-predicted image component reference values of the adjacent reference pixels at the positions corresponding to L max and L min.
  • the video image component prediction apparatus may first select the maximum and minimum first image component reference values based on the directly acquired one or more first image component reference values corresponding to the current block, then down-sample (filter) the positions corresponding to the maximum and minimum first image component reference values, and then construct the component linear model. This saves the workload of down-sampling the pixels corresponding to the current block, i.e., reduces filtering operations, thereby lowering the complexity of constructing the component linear model, reducing the complexity of video component prediction, improving prediction efficiency and improving video codec efficiency.
  • embodiments of the present application provide a video component prediction apparatus; the units it includes, and the modules included in each unit, may be implemented by a processor in the video component prediction apparatus, or by a specific logic circuit.
  • the processor may be a central processing unit, a microprocessor, a digital signal processor (DSP, Digital Signal Processor), or a field programmable gate array.
  • an embodiment of the present application provides a video component prediction apparatus 3, including:
  • the acquiring part 30 is configured to acquire a reference value set of the first image component of the current block, wherein the reference value set of the first image component includes one or more first image component reference values;
  • the determining section 31 is configured to determine a plurality of first image component reference values from the reference value set of the first image component;
  • the filtering part 32 is configured to perform a first filtering process on the sample values of the pixels corresponding to the multiple first image component reference values, respectively, to obtain multiple filtered first image reference sample values;
  • the determining section 31 is further configured to determine reference values of an image component to be predicted corresponding to the plurality of filtered first image reference samples, wherein the image component to be predicted is an image component different from the first image component; and to determine the parameters of the component linear model according to the plurality of filtered first image reference samples and the reference values of the image component to be predicted, wherein the component linear model characterizes a linear mapping relationship that maps the sample values of the first image component to the sample values of the image component to be predicted;
  • the filtering part 32 is further configured to perform a mapping process on the reconstruction value of the first image component of the current block according to the component linear model to obtain a mapping value;
  • the prediction section 33 is configured to determine the prediction value of the image component to be predicted of the current block according to the mapping value.
  • the determining portion 31 is further configured to compare the reference values contained in the reference value set of the first image component to determine the maximum first image component reference value and the minimum first image component reference value.
  • the filtering part 32 is further configured to perform the first filtering process on the sample values of the pixels corresponding to the maximum first image component reference value and the minimum first image component reference value, respectively, to obtain the filtered maximum first image component reference value and the filtered minimum first image component reference value.
  • the determining section 31 is further configured to determine a maximum reference value of the image component to be predicted corresponding to the filtered maximum first image component reference value, and the filtered The minimum reference value of the image component to be predicted corresponding to the minimum first image component reference value.
  • the determining portion 31 is further configured to determine the parameters of the component linear model according to the filtered maximum first image component reference value, the maximum to-be-predicted image component reference value, the filtered minimum first image component reference value and the minimum to-be-predicted image component reference value, wherein the component linear model characterizes a linear mapping relationship that maps the sample values of the first image component to the sample values of the image component to be predicted.
  • the determining portion 31 is further configured to determine one or more reference pixels located outside the current block;
  • the acquisition section 30 is further configured to use the one or more reference pixel points as the one or more first image component reference values.
  • the determining portion 31 is further configured to determine that the pixel adjacent to the current block is the one or more reference pixels.
  • the filtering part 32 is further configured to perform a second filtering process on the reconstructed value of the first image component to obtain a second filtered value of the reconstructed value of the first image component And mapping the second filtered value according to the component linear model to obtain the mapped value.
  • the second filtering process is down-sampling filtering or low-pass filtering.
  • the prediction section 33 is further configured to set the mapping value as the prediction value of the to-be-predicted image component of the current block.
  • the filtering part 32 is further configured to perform a third filtering process on the mapping value to obtain a third filtered value of the mapping value;
  • the prediction section 33 is further configured to set the third filtered value as the predicted value of the image component to be predicted of the current block.
  • the third filtering process is low-pass filtering.
  • the determining portion 31 is further configured to obtain reference values of the image component to be predicted of the current block, and to determine, among those reference values, the maximum to-be-predicted image component reference value and the minimum to-be-predicted image component reference value.
  • the filtering part 32 is further configured to perform a fourth filtering process on the reference value of the image component to be predicted to obtain a reconstruction value of the image component to be predicted.
  • the fourth filtering process is low-pass filtering.
  • the determining portion 31 is further configured to convert the sample position of the first image component reference value corresponding to the maximum first image component reference value into a first sample position and set the maximum to-be-predicted image component reference value to the reference value located at the first sample position; and to convert the sample position of the first image component reference value corresponding to the minimum first image component reference value into a second sample position and set the minimum to-be-predicted image component reference value to the reference value located at the second sample position.
  • the determining section 31 is further configured to construct a first sub-component linear model using the filtered maximum first image component reference value, the maximum to-be-predicted image component reference value and a preset initial linear model; construct a second sub-component linear model using the filtered minimum first image component reference value, the minimum to-be-predicted image component reference value and the preset initial linear model; obtain model parameters based on the first sub-component linear model and the second sub-component linear model; and construct the component linear model using the model parameters and the preset initial linear model.
  • the determining part 31 is further configured such that the parameters of the component linear model include a multiplicative factor and an additive offset, and to: calculate a first difference between the maximum to-be-predicted image component reference value and the minimum to-be-predicted image component reference value; calculate a second difference between the maximum first image component reference value and the minimum first image component reference value; set the multiplicative factor to the ratio of the first difference to the second difference; calculate a first product between the maximum first image component reference value and the multiplicative factor and set the additive offset to the difference between the maximum to-be-predicted image component reference value and the first product; or calculate a second product between the minimum first image component reference value and the multiplicative factor and set the additive offset to the difference between the minimum to-be-predicted image component reference value and the second product.
  • the parameters of the component linear model include a multiplicative factor and an additive offset
  • the first image component is a luminance component
  • the image component to be predicted is a first or second chrominance component
  • the first image component is the first chroma component, and the image component to be predicted is the luminance component or the second chroma component; or,
  • the first image component is the second chroma component, and the image component to be predicted is the luminance component or the first chroma component; or,
  • the first image component is a first color component
  • the image component to be predicted is a second color component or a third color component
  • the first image component is the second color component, and the image component to be predicted is the first color component or the third color component; or,
  • the first image component is the third color component, and the image component to be predicted is the second color component or the first color component.
  • the first color component is a red component
  • the second color component is a green component
  • the third color component is a blue component
  • the first filtering process is down-sampling filtering or low-pass filtering.
  • if the above video component prediction method is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence or the part contributing to the related art, may be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions to make an electronic device (which may be a mobile phone, tablet computer, personal computer, personal digital assistant, navigator, digital phone, video phone, television, sensing device, server, etc.) perform all or part of the methods described in the embodiments of the present application.
  • the foregoing storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
  • an embodiment of the present application provides a video component prediction apparatus, including:
  • the memory 34 is used to store executable video component prediction instructions
  • the processor 35 is configured to implement the steps in the video component prediction method provided in the foregoing embodiments when executing the executable video component prediction instructions stored in the memory 34.
  • an embodiment of the present application provides a computer-readable storage medium on which video component prediction instructions are stored; when the video component prediction instructions are executed by the processor 35, the steps of the video component prediction method provided in the foregoing embodiments are implemented.
  • the video component prediction apparatus may first determine the reference values of the plurality of first image components based on the directly acquired reference value set of the first image component corresponding to the current block, and then according to the determined Filter the position corresponding to the reference value of the first image component, and then build a component linear model, which saves the workload of the filtering process of the pixels corresponding to the current block, that is, reduces the filtering operation, thereby reducing the construction of the component linear model Complexity of video components, thereby reducing the complexity of video component prediction, improving prediction efficiency, and improving video codec efficiency.


Abstract

Embodiments of the present application provide a video image component prediction method and apparatus, and a computer storage medium. The method may include: obtaining a reference value set of a first image component of a current block; determining a plurality of first image component reference values from the reference value set of the first image component; performing first filtering processing on the sample values of the pixels corresponding to the plurality of first image component reference values, respectively, to obtain a plurality of filtered first image reference samples; determining to-be-predicted image component reference values corresponding to the plurality of filtered first image reference samples; determining parameters of a component linear model according to the plurality of filtered first image reference samples and the to-be-predicted image component reference values; mapping the reconstructed value of the first image component of the current block according to the component linear model to obtain a mapped value; and determining a predicted value of the to-be-predicted image component of the current block according to the mapped value.

Description

Video image component prediction method and apparatus, and computer storage medium
Technical Field
Embodiments of the present application relate to the technical field of video encoding and decoding, and in particular to a video image component prediction method and apparatus, and a computer storage medium.
Background
As people's requirements for video display quality increase, new video application forms such as high-definition and ultra-high-definition video have emerged. As high-resolution, high-quality video applications become more widespread, the requirements on video compression technology grow accordingly. H.265/High Efficiency Video Coding (HEVC) is currently the latest international video compression standard; its compression performance is about 50% better than that of the previous-generation standard H.264/Advanced Video Coding (AVC), but it still cannot meet the rapidly growing needs of video applications, especially new applications such as ultra-high-definition and Virtual Reality (VR).
Among the coding tools adopted for the next-generation video coding standard, Versatile Video Coding (VVC), a prediction method based on a linear model has already been integrated, by which a chroma prediction value can be derived from the reconstructed luma component through the linear model.
However, when predicting a video component through a linear model, the pixel values of the neighbouring luma region must first be down-sampled, and the maximum and minimum values are then searched among the down-sampled reference samples to construct the linear model. Because the number of adjacent reference samples is large, building the model in this way is highly complex, which makes chroma prediction inefficient and in turn degrades video codec efficiency.
Summary
Embodiments of the present application provide a video image component prediction method and apparatus, and a computer storage medium, which can reduce the complexity of video component prediction and improve prediction efficiency, thereby improving video codec efficiency.
The technical solutions of the embodiments of the present application may be implemented as follows:
An embodiment of the present application provides a video component prediction method, including:
obtaining a reference value set of a first image component of a current block;
determining a plurality of first image component reference values from the reference value set of the first image component;
performing first filtering processing on the sample values of the pixels corresponding to the plurality of first image component reference values, respectively, to obtain a plurality of filtered first image reference samples;
determining to-be-predicted image component reference values corresponding to the plurality of filtered first image reference samples, wherein the to-be-predicted image component is an image component different from the first image component;
determining parameters of a component linear model according to the plurality of filtered first image reference samples and the to-be-predicted image component reference values, wherein the component linear model characterizes a linear mapping relationship that maps sample values of the first image component to sample values of the to-be-predicted image component;
mapping the reconstructed value of the first image component of the current block according to the component linear model to obtain a mapped value;
determining a predicted value of the to-be-predicted image component of the current block according to the mapped value.
An embodiment of the present application provides a video component prediction apparatus, including:
an obtaining part, configured to obtain a reference value set of a first image component of a current block;
a determining part, configured to determine a plurality of first image component reference values from the reference value set of the first image component;
a filtering part, configured to perform first filtering processing on the sample values of the pixels corresponding to the plurality of first image component reference values, respectively, to obtain a plurality of filtered first image reference samples;
the determining part being further configured to determine to-be-predicted image component reference values corresponding to the plurality of filtered first image reference samples, wherein the to-be-predicted image component is an image component different from the first image component, and to determine parameters of a component linear model according to the plurality of filtered first image reference samples and the to-be-predicted image component reference values, wherein the component linear model characterizes a linear mapping relationship that maps sample values of the first image component to sample values of the to-be-predicted image component;
the filtering part being further configured to map the reconstructed value of the first image component of the current block according to the component linear model to obtain a mapped value;
a prediction part, configured to determine a predicted value of the to-be-predicted image component of the current block according to the mapped value.
An embodiment of the present application provides a video component prediction apparatus, including:
a memory for storing executable video component prediction instructions;
a processor for implementing, when executing the executable video component prediction instructions stored in the memory, the video component prediction method provided in the embodiments of the present application.
An embodiment of the present application provides a computer-readable storage medium storing executable video component prediction instructions which, when executed by a processor, implement the video component prediction method provided in the embodiments of the present application.
Embodiments of the present application provide a video image component prediction method. Based on the directly obtained reference value set of the first image component corresponding to the current block, the video image component prediction apparatus may first select a plurality of first image component reference values, then perform filtering processing on the pixel positions used for the selected first image component reference values to obtain a plurality of filtered first image reference samples, then find the to-be-predicted image component reference values corresponding to the filtered samples, obtain the parameters of the component linear model, construct the component linear model from those parameters, and use the constructed model to predict the to-be-predicted image component. Because the plurality of first image component reference values are selected first, and filtering is then applied only to the positions corresponding to the selected values before the component linear model is constructed, the filtering workload for the pixels corresponding to the current block is saved, i.e., filtering operations are reduced. This lowers the complexity of constructing the component linear model, reduces the complexity of video component prediction, improves prediction efficiency, and improves video codec efficiency.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the relationship between a current block and adjacent reference pixels according to an embodiment of the present application;
FIG. 2 is an architecture diagram of a video image component prediction system according to an embodiment of the present application;
FIG. 3A is a schematic block diagram of a video encoding system according to an embodiment of the present application;
FIG. 3B is a schematic block diagram of a video decoding system according to an embodiment of the present application;
FIG. 4 is a first flowchart of a video image component prediction method according to an embodiment of the present application;
FIG. 5 is a second flowchart of a video image component prediction method according to an embodiment of the present application;
FIG. 6 is a third flowchart of a video image component prediction method according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of constructing a prediction model based on maximum and minimum values according to an embodiment of the present application;
FIG. 8 is a first schematic structural diagram of a video image component prediction apparatus according to an embodiment of the present application;
FIG. 9 is a second schematic structural diagram of a video image component prediction apparatus according to an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be regarded as limiting the present application; all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present application. The terms used herein are only for the purpose of describing the embodiments of the present application and are not intended to limit the present application.
Concepts such as intra prediction and video encoding/decoding are introduced first.
The main function of predictive coding is, in video encoding/decoding, to construct a predicted value of the current block from spatially or temporally available reconstructed images, and to transmit only the difference between the original value and the predicted value, so as to reduce the amount of transmitted data.
The main function of intra prediction is to construct the predicted value of the current block from the adjacent upper row of pixel units and the adjacent left column of pixel units. As shown in FIG. 1, the already reconstructed neighbouring pixels around the current block 101 (i.e., the pixel units in the adjacent upper row 102 and the adjacent left column 103) are used to predict each pixel unit of the current block 101.
In the embodiments of the present application, a processing block of a video image is usually represented by three image components: one luma component, one blue chroma component and one red chroma component. Specifically, the luma component is usually denoted by the symbol Y, the blue chroma component by Cb, and the red chroma component by Cr.
At present, the sampling format commonly used for video images is the YCbCr format, which includes:
4:4:4 format: the blue chroma component and the red chroma component are not down-sampled; for every 4 consecutive pixels on each scan line, 4 luma samples, 4 blue chroma samples and 4 red chroma samples are taken.
4:2:2 format: the luma component is sampled horizontally at 2:1 relative to the blue or red chroma component, with no vertical down-sampling; for every 4 consecutive pixels on each scan line, 4 luma samples, 2 blue chroma samples and 2 red chroma samples are taken.
4:2:0 format: the luma component is down-sampled at 2:1 both horizontally and vertically relative to the blue or red chroma component; for every 2 consecutive pixels on the horizontal and vertical scan lines, 2 luma samples, 1 blue chroma sample and 1 red chroma sample are taken.
When the video image uses the YCbCr 4:2:0 format, if the luma component of the video image is a processing block of size 2N×2N, the corresponding blue or red chroma component is a processing block of size N×N, where N is the side length of the processing block. In the embodiments of the present application, the 4:2:0 format is taken as an example for description below, but the technical solutions of the embodiments of the present application are also applicable to other sampling formats.
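As a numerical illustration of the sampling formats above, the following sketch (the function name is an assumption for illustration) computes the chroma block size implied by a luma block size:

```python
def chroma_block_size(luma_w, luma_h, fmt):
    """Chroma block size implied by a luma block size for each format."""
    if fmt == "4:4:4":   # no chroma down-sampling
        return luma_w, luma_h
    if fmt == "4:2:2":   # 2:1 horizontal down-sampling only
        return luma_w // 2, luma_h
    if fmt == "4:2:0":   # 2:1 horizontal and 2:1 vertical down-sampling
        return luma_w // 2, luma_h // 2
    raise ValueError("unknown sampling format: " + fmt)

# A 2N x 2N luma block (here N = 8) maps to an N x N chroma block in 4:2:0.
assert chroma_block_size(16, 16, "4:2:0") == (8, 8)
```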
On the basis of the above concepts, an embodiment of the present application provides a network architecture of a video encoding/decoding system that includes the video image component prediction method in intra prediction. FIG. 2 is a schematic diagram of the composition of the video encoding/decoding network architecture according to an embodiment of the present application. As shown in FIG. 2, the network architecture includes one or more electronic devices 11 to 1N and a communication network 01, through which the electronic devices 11 to 1N may conduct video interaction. In implementation, the electronic device may be any type of device with a video encoding/decoding function, for example a mobile phone, tablet computer, personal computer, personal digital assistant, navigator, digital phone, video phone, television, sensing device, server, etc., which is not limited in the embodiments of the present application. The intra prediction apparatus in the embodiments of the present application may be the above electronic device.
The electronic device in the embodiments of the present application has a video encoding/decoding function and generally includes a video encoder and a video decoder.
Exemplarily, as shown in FIG. 3A, the video encoder 21 includes: a transform and quantization unit 211, an intra estimation unit 212, an intra prediction unit 213, a motion compensation unit 214, a motion estimation unit 215, an inverse transform and inverse quantization unit 216, a filter control analysis unit 217, a filtering unit 218, an entropy coding unit 219 and a decoded picture buffer unit 210, among others. The filtering unit 218 may implement deblocking filtering and Sample Adaptive Offset (SAO) filtering, and the entropy coding unit 219 may implement header information coding and Context-based Adaptive Binary Arithmetic Coding (CABAC). For input source video data, a to-be-encoded block of the current video frame is obtained by partitioning a Coding Tree Unit (CTU); then, after intra prediction or inter prediction is performed on the to-be-encoded block, the resulting residual information is transformed by the transform and quantization unit 211, which includes transforming the residual information from the pixel domain to the transform domain and quantizing the resulting transform coefficients to further reduce the bit rate. The intra estimation unit 212 and the intra prediction unit 213 perform intra prediction on the to-be-encoded block, for example, determining the intra prediction mode used to encode it. The motion compensation unit 214 and the motion estimation unit 215 perform inter prediction coding of the to-be-encoded block relative to one or more blocks in one or more reference frames to provide temporal prediction information, where the motion estimation unit 215 estimates a motion vector describing the motion of the block, and the motion compensation unit 214 then performs motion compensation based on the motion vector. After the intra prediction mode is determined, the intra prediction unit 213 also provides the selected intra prediction data to the entropy coding unit 219, and the motion estimation unit 215 likewise sends the computed motion vector data to the entropy coding unit 219. In addition, the inverse transform and inverse quantization unit 216 is used for reconstruction of the to-be-encoded block: the residual block is reconstructed in the pixel domain, blocking artifacts are removed through the filter control analysis unit 217 and the filtering unit 218, and the reconstructed residual block is then added to a predictive block in a frame of the decoded picture buffer unit 210 to generate a reconstructed video coding block. The entropy coding unit 219 encodes various coding parameters and the quantized transform coefficients; in a CABAC-based coding algorithm, the context may be based on adjacent coding blocks and may be used to encode information indicating the determined intra prediction mode, outputting the bitstream of the video data. The decoded picture buffer unit 210 stores the reconstructed video coding blocks for prediction reference. As video encoding proceeds, new reconstructed video coding blocks are continuously generated and stored in the decoded picture buffer unit 210.
The video decoder 22 corresponding to the video encoder 21 is shown in FIG. 3B and includes: an entropy decoding unit 221, an inverse transform and inverse quantization unit 222, an intra prediction unit 223, a motion compensation unit 224, a filtering unit 225 and a decoded picture buffer unit 226, among others. The entropy decoding unit 221 may implement header information decoding and CABAC decoding, and the filtering unit 225 may implement deblocking filtering and SAO filtering. After the input video signal goes through the encoding process of FIG. 3A, the bitstream of the video signal is output; the bitstream is input to the video decoder 22 and first passes through the entropy decoding unit 221 to obtain decoded transform coefficients, which are processed by the inverse transform and inverse quantization unit 222 to produce a residual block in the pixel domain. The intra prediction unit 223 may generate prediction data for the current decoded block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture. The motion compensation unit 224 determines prediction information for the current decoded block by parsing motion vectors and other associated syntax elements, and uses this prediction information to produce the predictive block of the block being decoded. A decoded video block is formed by summing the residual block from the inverse transform and inverse quantization unit 222 with the corresponding predictive block produced by the intra prediction unit 223 or the motion compensation unit 224; the decoded video block passes through the filtering unit 225 to remove blocking artifacts and improve video quality, and is then stored in the decoded picture buffer unit 226, which stores reference pictures for subsequent intra prediction or motion compensation and is also used for output and display of the video signal.
On this basis, the technical solutions of the present application are further elaborated below with reference to the drawings and embodiments. The video image component prediction method provided in the embodiments of the present application is a prediction within the intra prediction process of predictive encoding/decoding; it may be applied in the video encoder 21 as well as in the video decoder 22, which is not specifically limited in the embodiments of the present application.
In the next-generation video coding standard H.266, to further improve codec performance and efficiency, Cross-component Prediction (CCP) has been extended and improved, and Cross-component Linear Model Prediction (CCLM) has been proposed. In H.266, CCLM implements prediction from the luma component to the blue chroma component, from the luma component to the red chroma component, and between the blue and red chroma components. The video component prediction method is described below against the background of the existing CCLM.
An embodiment of the present application provides a video image component prediction method applied to a video image component prediction apparatus. The functions implemented by the method may be realized by a processor in the apparatus calling program code, which may of course be stored in a computer storage medium; thus, the apparatus includes at least a processor and a storage medium.
FIG. 4 is a schematic flowchart of an implementation of a video image component prediction method according to an embodiment of the present application. As shown in FIG. 4, the method includes:
S101: obtaining first image component reference values of a current block;
S102: determining a plurality of first image component reference values from the reference value set of the first image component;
S103: performing first filtering processing on the sample values of the pixels corresponding to the plurality of first image component reference values, respectively, to obtain a plurality of filtered first image reference samples;
S104: determining to-be-predicted image component reference values corresponding to the plurality of filtered first image reference samples, wherein the to-be-predicted image component is an image component different from the first image component;
S105: determining parameters of a component linear model according to the plurality of filtered first image reference samples and the to-be-predicted image component reference values, wherein the component linear model characterizes a linear mapping relationship that maps sample values of the first image component to sample values of the to-be-predicted image component;
S106: mapping the reconstructed value of the first image component of the current block according to the component linear model to obtain a mapped value;
S107: determining a predicted value of the to-be-predicted image component of the current block according to the mapped value.
In S101, in the embodiments of the present application, the current block is the encoding block or decoding block on which image component prediction is currently to be performed. The video image component prediction apparatus obtains the first image component reference values of the current block, where the reference value set of the first image component contains one or more first image component reference values. The reference values of the current block may be obtained from a reference block, which may be a block adjacent to the current block or a non-adjacent block, which is not limited in the embodiments of the present application.
In some embodiments of the present application, the video image component prediction apparatus determines one or more reference pixels located outside the current block and uses the one or more reference pixels as the one or more first image component reference values.
It should be noted that, in the embodiments of the present application, the adjacent processing block corresponding to the current block is a processing block adjacent to one or more edges of the current block; the one or more adjacent edges may include the adjacent upper edge of the current block, may refer to the adjacent left edge of the current block, or may refer to both the adjacent upper and left edges, which is not specifically limited in the embodiments of the present application.
In some embodiments of the present application, the video image component prediction apparatus determines pixels adjacent to the current block as the one or more reference pixels.
It should be noted that, in the embodiments of the present application, the one or more reference pixels may be adjacent pixels or non-adjacent pixels, which is not limited; adjacent pixels are taken as an example for description herein.
The edge-adjacent pixels corresponding to the adjacent processing blocks of the current block serve as the one or more adjacent reference pixels corresponding to the current block, and each adjacent reference pixel corresponds to three image component reference values (i.e., a first image component reference value, a second image component reference value and a third image component reference value). Therefore, the video image component prediction apparatus may obtain the reference value of the first image component in each of the one or more adjacent reference pixels corresponding to the current block as the reference value set of the first image component; one or more first image component reference values are thereby obtained, i.e., the one or more first image component reference values characterize the reference values of the first image component corresponding to one or more adjacent pixels in the adjacent reference blocks of the current block. In the embodiments of the present application, the role of the first image component is to predict other image components.
In some embodiments of the present application, the combination of the first image component and the to-be-predicted image component includes at least one of the following:
the first image component is the luma component and the to-be-predicted image component is the first or second chroma component; or
the first image component is the first chroma component and the to-be-predicted image component is the luma component or the second chroma component; or
the first image component is the second chroma component and the to-be-predicted image component is the luma component or the first chroma component; or
the first image component is the first color component and the to-be-predicted image component is the second or third color component; or
the first image component is the second color component and the to-be-predicted image component is the first or third color component; or
the first image component is the third color component and the to-be-predicted image component is the second or first color component.
In some embodiments of the present application, the first color component is the red component, the second color component is the green component, and the third color component is the blue component.
The first chroma component may be the blue chroma component and the second chroma component may be the red chroma component; alternatively, the first chroma component may be the red chroma component and the second chroma component the blue chroma component. Here it suffices that the first and second chroma components represent the blue and red chroma components, respectively.
Taking the case where the first chroma component is the blue chroma component and the second chroma component is the red chroma component as an example: when the first image component is the luma component and the to-be-predicted image component is the first chroma component, the apparatus may predict the blue chroma component from the luma component; when the first image component is the luma component and the to-be-predicted image component is the second chroma component, the apparatus may predict the red chroma component from the luma component; when the first image component is the first chroma component and the to-be-predicted image component is the second chroma component, the apparatus may predict the red chroma component from the blue chroma component; and when the first image component is the second chroma component and the to-be-predicted image component is the first chroma component, the apparatus may predict the blue chroma component from the red chroma component.
In S102, the video image component prediction apparatus may determine a plurality of first image component reference values from the one or more first image component reference values.
In some embodiments of the present application, the apparatus may compare the one or more first image component reference values contained in the reference value set of the first image component to determine the maximum first image component reference value and the minimum first image component reference value.
In some embodiments of the present application, the apparatus may determine, from the one or more first image component reference values, the maximum and minimum among the plurality of first image component reference values, or determine reference values that characterize or represent the maximum or minimum of the first image component reference values.
For example, the video image component prediction apparatus determines the maximum first image component reference value and the minimum first image component reference value from the reference value set of the first image component.
In the embodiments of the present application, the apparatus may obtain the maximum and minimum first image component reference values in a number of ways.
Approach 1: each of the one or more first image component reference values is compared in turn, determining the single largest first image component reference value and the single smallest, minimum first image component reference value.
Approach 2: at least two first image component reference values at preset positions are selected from the one or more first image component reference values; by magnitude, the at least two first sub-image component reference values are divided into a maximum image component reference value set and a minimum image component reference value set; and the maximum first image component reference value and the minimum first image component reference value are obtained based on these two sets.
That is, in the embodiments of the present application, the apparatus may select, from the one or more first image component reference values, the one with the largest value as the maximum first image component reference value, and the one with the smallest value as the minimum first image component reference value. The determination may be done by pairwise comparison or after sorting; the specific manner is not limited in the embodiments of the present application.
The apparatus may also select, from the pixel positions corresponding to the one or more first image component reference values, several first image component reference values at preset positions (preset pixel positions) as the at least two first image component reference values, then partition them into a largest data set (maximum image component reference value set) and a smallest data set (minimum image component reference value set), and determine the maximum and minimum first image component reference values based on these two sets. This determination may, for example, consist of averaging the largest data set to obtain the maximum first image component reference value and averaging the smallest data set to obtain the minimum first image component reference value; other ways of determining the maximum and minimum may also be used, which is not limited in the embodiments of the present application.
It should be noted that the number of values in the largest data set and in the smallest data set is an integer greater than or equal to 1; the two sets may or may not contain the same number of values, which is not limited in the embodiments of the present application.
The apparatus may also, after taking the several first image component reference values at the preset positions as the at least two first sub-image component reference values, directly select the largest among them as the maximum first image component reference value and the smallest as the minimum first image component reference value.
Exemplarily, the apparatus divides the M largest of the at least two first sub-image component reference values (M may be a value greater than 4, or may be unrestricted) into the maximum image component reference value set, and divides the remaining first sub-image component reference values into the minimum image component reference value set; finally, the maximum image component reference value set is averaged to obtain the maximum first image component reference value, and the minimum image component reference value set is averaged to obtain the minimum first image component reference value.
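The grouping-and-averaging step described above can be sketched as follows; the split size M and the function name are illustrative assumptions:

```python
def max_min_by_group_mean(values, m=2):
    """Split candidate references into the M largest (maximum set) and the
    rest (minimum set), then average each set to obtain the representative
    maximum and minimum first image component reference values."""
    ordered = sorted(values)
    max_set = ordered[-m:]    # maximum image component reference value set
    min_set = ordered[:-m]    # minimum image component reference value set
    return sum(max_set) / len(max_set), sum(min_set) / len(min_set)
```

For example, `max_min_by_group_mean([60, 10, 40, 20])` averages {60, 40} and {20, 10} into the pair (50.0, 15.0).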
It should be noted that, in the embodiments of the present application, the maximum and minimum first image component reference values may be the maximum and minimum determined directly by magnitude; alternatively, the first image component reference values at preset positions that can represent the validity of the references (the at least two first sub-image component reference values) may be selected first, after which a set of larger values and a set of the smallest values are partitioned from the valid reference values, with the maximum first image component reference value determined from the larger set and the minimum from the smaller set; or the maximum and minimum may be determined directly by magnitude within the set of valid first image component reference values at the preset positions.
In the embodiments of the present application, the apparatus does not restrict the manner of determining the maximum and minimum first image component reference values. For example, the apparatus may also divide the one or more first image component reference values by magnitude into three or even four sets, process each set to obtain one representative parameter, and then select the largest and smallest of the representative parameters as the maximum and minimum first image component reference values, etc.
In the embodiments of the present application, the preset positions may be chosen as positions representing the validity of the first image component reference values, and the number of preset positions is not restricted; for example, it may be 4, 6, etc. The preset positions may also be all positions of the adjacent pixels, which is not limited in the embodiments of the present application.
Exemplarily, the preset positions may be a preset number of first image component reference values selected outward from the centre of the row or column according to a sampling frequency, or may be the first image component reference values at positions of a row or column other than the edge positions, which is not limited in the embodiments of the present application.
The allocation of the preset positions between rows and columns may be equal or may follow a preset scheme, which is not limited in the embodiments of the present application. For example, when the number of preset positions is 4 and the adjacent row and adjacent column are the positions corresponding to the one or more first image component reference values, 2 first image component reference values may be selected from the adjacent row and 2 from the adjacent column; alternatively, 1 may be selected from the adjacent row and 3 from the adjacent column, etc., which is not limited in the embodiments of the present application.
The apparatus may determine, from the one or more first image component reference values, their maximum and minimum, i.e., the maximum first image component reference value and the minimum first image component reference value; or, after determining a plurality of reference values at the preset positions among the one or more first image component reference values and processing them, obtain the maximum first image component reference value with the largest characterizing value and the minimum first image component reference value with the smallest. Here, in order to be consistent with or close to the sampling positions of the other video components, filtering needs to be performed based on the pixel positions corresponding to the maximum and minimum first image component reference values before subsequent processing.
In S103, the video image component prediction apparatus performs first filtering processing on the sample values of the pixels corresponding to the determined plurality of first image component reference values, respectively, to obtain a plurality of filtered first image reference samples.
In the embodiments of the present application, the plurality of filtered first image reference samples may be the filtered maximum first image component reference value and the filtered minimum first image component reference value, may be other pluralities of reference samples that include the filtered maximum and minimum first image component reference values, or may be other pluralities of reference samples, which is not limited in the embodiments of the present application.
In the embodiments of the present application, the apparatus filters (i.e., applies the first filtering processing to) the pixel positions (i.e., the corresponding pixel sample values) corresponding to the determined first image component reference values, thereby obtaining the corresponding plurality of filtered first image reference samples; the component linear model can subsequently be constructed based on these filtered samples.
In some embodiments of the present application, the apparatus performs the first filtering processing on the sample values of the pixels corresponding to the maximum and minimum first image component reference values, respectively, to obtain the filtered maximum first image component reference value and the filtered minimum first image component reference value.
It should be noted that, since the determined plurality of first image component reference values may be the maximum and minimum first image component reference values, the filtering process may consist of filtering (the first filtering processing) the pixel positions (i.e., the corresponding pixel sample values) used to determine the maximum and minimum first image component reference values, thereby obtaining the corresponding filtered maximum first image component reference value and filtered minimum image component reference value (i.e., the plurality of filtered first image reference samples); the component linear model can subsequently be constructed based on these filtered maximum and minimum values.
In the embodiments of the present application, the filtering manner may be up-sampling, down-sampling, low-pass filtering, etc., which is not limited in the embodiments of the present application; the down-sampling manner may include averaging, interpolation, median, etc., which is likewise not limited.
In the embodiments of the present application, the first filtering processing may be down-sampling filtering or low-pass filtering.
Exemplarily, the apparatus performs down-sampling filtering on the pixel positions used to determine the maximum and minimum first image component reference values, thereby obtaining the corresponding filtered maximum first image component reference value and filtered minimum first image component reference value.
Down-sampling by averaging is described below.
The video component prediction apparatus computes the mean of the first image component over the region formed by the position corresponding to the maximum first image component reference value and its neighbouring pixel positions, fusing the pixels of this region into one pixel; the mean is the first image component reference value corresponding to the fused pixel, i.e., the filtered maximum first image component reference value. Likewise, for the region formed by the position corresponding to the minimum first image component reference value and its neighbouring pixel positions, the apparatus computes the mean of the first image component, fusing the pixels of the region into one pixel; that mean is the filtered minimum first image component reference value.
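The mean-based down-sampling just described might be sketched as follows; the 2×3 window is an illustrative choice standing in for a concrete filter (e.g., a 6-tap filter), not a normative definition:

```python
def downsample_at(luma, row, col):
    """Fuse the pixel at (row, col) and its neighbours into one sample by
    averaging the first image component over the region."""
    acc, count = 0, 0
    for r in (row, row + 1):               # two luma rows per chroma row
        for c in (col - 1, col, col + 1):  # pixel and horizontal neighbours
            if 0 <= r < len(luma) and 0 <= c < len(luma[0]):
                acc += luma[r][c]          # accumulate in-bounds samples
                count += 1
    return acc / count
```

Only the two positions holding the selected maximum and minimum references need to be passed through this filter, rather than the whole neighbouring region.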
It should be noted that, in the embodiments of the present application, the down-sampling performed by the apparatus is implemented with a filter; the specific range of neighbouring pixel positions adjacent to the position corresponding to the maximum first image component reference value may be determined by the filter type, which is not limited in the embodiments of the present application.
In the embodiments of the present application, the filter type may be a 6-tap filter or a 4-tap filter, which is not limited in the embodiments of the present application.
In S104 and S105, the apparatus determines the to-be-predicted image component reference values corresponding to the plurality of filtered first image reference samples, where the to-be-predicted image component is an image component different from the first image component (for example, the second or third image component); then, according to the plurality of filtered first image reference samples and the to-be-predicted image component reference values, it determines the parameters of the component linear model, where the component linear model characterizes a linear mapping relationship, such as a functional relationship, that maps sample values of the first image component to sample values of the to-be-predicted image component.
In some embodiments of the present application, the apparatus determines the maximum to-be-predicted image component reference value corresponding to the filtered maximum first image component reference value, and the minimum to-be-predicted image component reference value corresponding to the filtered minimum first image component reference value.
It should be noted that, in the embodiments of the present application, the apparatus may use a maximum-and-minimum construction, deriving the model parameters (i.e., the parameters of the component linear model) according to the principle that "two points determine a line", and thereby construct the component linear model, i.e., a simplified Cross-component Linear Model Prediction (CCLM).
In the embodiments of the present application, the apparatus has performed down-sampling (i.e., filtering), achieving alignment with the positions of the image to be predicted, so the to-be-predicted image component reference values corresponding to the filtered first image component reference samples can be determined: for example, the maximum to-be-predicted image component reference value corresponding to the filtered maximum first image component reference value, and the minimum to-be-predicted image component reference value corresponding to the filtered minimum first image component reference value. Having determined the two points (filtered maximum first image component reference value, maximum to-be-predicted image component reference value) and (filtered minimum first image component reference value, minimum to-be-predicted image component reference value), the apparatus can derive the model parameters according to the "two points determine a line" principle and thereby construct the component linear model.
In some embodiments of the present application, the apparatus determines the parameters of the component linear model according to the filtered maximum first image component reference value, the maximum to-be-predicted image component reference value, the filtered minimum first image component reference value and the minimum to-be-predicted image component reference value, where the component linear model characterizes a linear mapping relationship that maps sample values of the first image component to sample values of the to-be-predicted image component.
In some embodiments of the present application, the implementation of determining the parameters of the component linear model may include: (1) the parameters of the component linear model include a multiplicative factor and an additive offset; the apparatus may compute a first difference between the maximum and minimum to-be-predicted image component reference values; compute a second difference between the maximum and minimum first image component reference values; set the multiplicative factor to the ratio of the first difference to the second difference; compute a first product between the maximum first image component reference value and the multiplicative factor and set the additive offset to the difference between the maximum to-be-predicted image component reference value and the first product; or compute a second product between the minimum first image component reference value and the multiplicative factor and set the additive offset to the difference between the minimum to-be-predicted image component reference value and the second product. (2) Construct a first sub-component linear model using the filtered maximum first image component reference value, the maximum to-be-predicted image component reference value and a preset initial linear model; construct a second sub-component linear model using the filtered minimum first image component reference value, the minimum to-be-predicted image component reference value and the preset initial linear model; obtain the model parameters based on the first and second sub-component linear models; and construct the component linear model using the model parameters and the preset initial linear model.
The setting of the above values is decided or designed according to the actual situation and is not limited in the embodiments of the present application.
Exemplarily, since the component linear model characterizes the linear mapping relationship between the first image component and the to-be-predicted image component, the apparatus may predict the to-be-predicted image component based on the first image component and the component linear model; the to-be-predicted image component in the embodiments of the present application may be a chroma component.
Exemplarily, the component linear model may be as shown in formula (1):
C=αY+β     (1)
where Y represents the first image component reconstruction value corresponding to a certain pixel in the (down-sampled) current block, C represents the second image component prediction value corresponding to that pixel in the current block, and α and β are the model parameters of the above component linear model.
The specific implementation of the model parameters is described in detail in subsequent embodiments.
It can be understood that the apparatus may, based on the directly obtained one or more first image component reference values corresponding to the current block, first select the maximum and minimum first image component reference values, then down-sample the positions corresponding to the selected maximum and minimum first image component reference values, and then construct the component linear model. This saves the workload of down-sampling the pixels corresponding to the current block, i.e., reduces filtering operations, thereby lowering the complexity of constructing the component linear model, reducing the complexity of video component prediction, improving prediction efficiency and improving video codec efficiency.
In S106 and S107, in the embodiments of the present application, after obtaining the component linear model, the apparatus can directly use the component linear model to perform video component prediction on the current block and thereby obtain the predicted value of the to-be-predicted image component. Specifically, the apparatus may map the reconstructed value of the first image component of the current block according to the component linear model to obtain a mapped value, and then determine the predicted value of the to-be-predicted image component of the current block according to the mapped value.
In some embodiments of the present application, the apparatus performs second filtering processing on the reconstructed value of the first image component to obtain a second filtered value of the reconstructed value of the first image component, and maps the second filtered value according to the component linear model to obtain the mapped value.
In some embodiments of the present application, the apparatus sets the mapped value as the predicted value of the to-be-predicted image component of the current block.
The second filtering processing may be down-sampling filtering or low-pass filtering.
In some embodiments of the present application, the apparatus may also perform third filtering processing on the mapped value to obtain a third filtered value of the mapped value, and set the third filtered value as the predicted value of the to-be-predicted image component of the current block.
The third filtering processing may be low-pass filtering.
In the embodiments of the present application, the predicted value characterizes the predicted value of the second image component or the third image component corresponding to one or more pixels of the current block.
It can be understood that, because the plurality of first image component reference values are selected first during the construction of the component linear model, and filtering is then performed on the positions corresponding to the selected first image component reference values before the component linear model is constructed, the filtering workload for the pixels corresponding to the current block is saved, i.e., filtering operations are reduced, thereby lowering the complexity of constructing the component linear model, reducing the complexity of video component prediction, improving prediction efficiency and improving video codec efficiency.
In some embodiments of the present application, as shown in FIG. 5, an embodiment of the present invention further provides a video image component prediction method, including:
S201: obtaining a reference value set of a first image component of a current block;
S202: comparing the reference values contained in the reference value set of the first image component to determine a maximum first image component reference value and a minimum first image component reference value;
S203: performing first filtering processing on the sample values of the pixels corresponding to the maximum and minimum first image component reference values, respectively, to obtain a filtered maximum first image component reference value and a filtered minimum first image component reference value;
S204: determining a maximum to-be-predicted image component reference value corresponding to the filtered maximum first image component reference value, and a minimum to-be-predicted image component reference value corresponding to the filtered minimum first image component reference value;
S205: determining parameters of a component linear model according to the filtered maximum first image component reference value, the maximum to-be-predicted image component reference value, the filtered minimum first image component reference value and the minimum to-be-predicted image component reference value, wherein the component linear model characterizes a linear mapping relationship that maps sample values of the first image component to sample values of the to-be-predicted image component;
S206: mapping the reconstructed value of the first image component of the current block according to the component linear model to obtain a mapped value;
S207: determining a predicted value of the to-be-predicted image component of the current block according to the mapped value.
In the embodiments of the present application, the processes of S201-S207 have been described in the preceding embodiments and are not repeated here.
It should be noted that, when the video image component prediction apparatus performs prediction, the reconstructed value of the first image component of the current block is obtained by performing first image component filtering on the current block to obtain the first image component reconstruction value corresponding to the current block; the predicted value of the to-be-predicted image component of the current block is then obtained according to the component linear model and the first image component reconstruction value.
In the embodiments of the present application, after the apparatus obtains the component linear model, since the minimum unit of prediction for the current block is the pixel, the first image component reconstruction value corresponding to each pixel of the current block is needed to predict the to-be-predicted image component prediction value corresponding to that pixel. Here, first image component filtering (e.g., down-sampling) is first performed on the current block to obtain the first image component reconstruction value corresponding to the current block, specifically the first image component reconstruction value of each pixel of the current block.
In the embodiments of the present application, the first image component reconstruction value characterizes the reconstructed value of the first image component corresponding to one or more pixels of the current block.
In this way, the apparatus can map the reconstructed value of the first image component of the current block based on the component linear model to obtain the mapped value and, according to the mapped value, obtain the predicted value of the to-be-predicted image component of the current block.
In some embodiments of the present application, as shown in FIG. 6, the specific implementation of S204 may include S2041-S2042, as follows:
S2041: obtaining to-be-predicted image component reference values of the current block;
S2042: determining, among the to-be-predicted image component reference values, the maximum to-be-predicted image component reference value and the minimum to-be-predicted image component reference value.
In the embodiments of the present application, during the process of constructing the component linear model based on the filtered maximum and minimum image component reference values according to the "two points determine a line" principle, with the first image component as the abscissa and the to-be-predicted image component as the ordinate, the abscissa values of the two points are known; the ordinate values corresponding to the two points must also be determined before a linear model, i.e., the component linear model, can be determined according to the "two points determine a line" principle.
In some embodiments of the present application, the apparatus converts the sample position of the first image component reference value corresponding to the maximum first image component reference value into a first sample position; sets the maximum to-be-predicted image component reference value to the reference value located at the first sample position among the to-be-predicted image component reference values; converts the sample position of the first image component reference value corresponding to the minimum first image component reference value into a second sample position; and sets the minimum to-be-predicted image component reference value to the reference value located at the second sample position among the to-be-predicted image component reference values.
Exemplarily, taking adjacent pixels as the reference pixels: based on the above description of adjacent blocks, the apparatus can obtain one or more to-be-predicted image component reference values corresponding to the current block; here, the one or more to-be-predicted image component reference values may refer to the reference value of the to-be-predicted image component in each of the one or more adjacent reference pixels corresponding to the current block, each serving as one to-be-predicted image component reference value. In this way, the apparatus obtains one or more to-be-predicted image component reference values.
The apparatus finds, among the pixels corresponding to the one or more to-be-predicted image component reference values, the first adjacent reference pixel at which the filtered maximum first image component reference value is located, and takes the to-be-predicted image component reference value corresponding to the first adjacent reference pixel as the maximum to-be-predicted image component reference value, i.e., determines the maximum to-be-predicted image component reference value corresponding to the filtered maximum first image component reference value. It likewise finds, among those pixels, the second adjacent reference pixel at which the filtered minimum first image component reference value is located, and takes the to-be-predicted image component reference value corresponding to the second adjacent reference pixel as the minimum to-be-predicted image component reference value, i.e., determines the minimum to-be-predicted image component reference value corresponding to the filtered minimum first image component reference value. Finally, according to the "two points determine a line" principle, a straight line is determined from the two points (filtered maximum first image component reference value, maximum to-be-predicted image component reference value) and (filtered minimum first image component reference value, minimum to-be-predicted image component reference value); the function (mapping relationship) characterized by this line is the component linear model.
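The lookup of the chroma (to-be-predicted) references co-located with the positions of the maximum and minimum luma references, as described above, can be sketched as follows (all names are illustrative assumptions):

```python
def colocated_chroma(positions, luma_vals, chroma_vals):
    """Return the to-be-predicted references at the positions of the maximum
    and minimum first image component references ("two points" of the line)."""
    triples = list(zip(positions, luma_vals, chroma_vals))
    _, _, c_max = max(triples, key=lambda t: t[1])  # position of max luma
    _, _, c_min = min(triples, key=lambda t: t[1])  # position of min luma
    return c_max, c_min
```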
In some embodiments of the present application, the apparatus may also first filter the adjacent pixel positions to obtain one or more to-be-predicted image component reference values of the filtered pixels, then find, among the filtered pixel positions, the first adjacent reference pixel at which the filtered maximum first image component reference value is located and take the to-be-predicted image component reference value corresponding to the first adjacent reference pixel (one of the one or more to-be-predicted image component reference values) as the maximum to-be-predicted image component reference value, i.e., determine the maximum to-be-predicted image component reference value corresponding to the filtered maximum first image component reference value; and, among the filtered pixel positions, find the second adjacent reference pixel at which the filtered minimum first image component reference value is located and take the corresponding to-be-predicted image component reference value as the minimum to-be-predicted image component reference value, i.e., determine the minimum to-be-predicted image component reference value corresponding to the filtered minimum first image component reference value.
It should be noted that this prior filtering of the adjacent pixel positions is filtering of the to-be-predicted image component, for example the chroma image component, which is not limited in the embodiments of the present application. That is, in the embodiments of the present application, the apparatus may perform fourth filtering processing on the to-be-predicted image component reference values to obtain the to-be-predicted image component reconstruction values.
The fourth filtering processing may be low-pass filtering.
In some embodiments of the present application, the process by which the apparatus constructs the component linear model is: constructing a first sub-component linear model using the filtered maximum first image component reference value, the maximum to-be-predicted image component reference value and a preset initial linear model; constructing a second sub-component linear model using the filtered minimum first image component reference value, the minimum to-be-predicted image component reference value and the preset initial linear model; obtaining the model parameters based on the first and second sub-component linear models; and constructing the component linear model using the model parameters and the preset initial linear model.
In the embodiments of the present application, the preset initial linear model is an initial model with unknown model parameters.
Exemplarily, the preset initial linear model may take the form of formula (1), but with α and β unknown; using the first and second sub-component linear models, a system of two equations in two unknowns can be constructed and solved for the model parameters α and β, and substituting α and β into formula (1) yields the linear mapping relationship model between the first image component and the to-be-predicted image component.
示例性的,通过搜索最大的第一图像分量参考值(滤波后的最大第一图像分量参考值)和最小的第一图像分量参考值(滤波后的最小第一图像分量参考值),根据“两点确定一线”原则来推导出模型参数,如下式(2)所示的α和β:
α = (C_max - C_min) / (L_max - L_min),β = C_min - α·L_min        (2)
其中,L_max和L_min表示未经下采样的左侧边和/或上侧边所对应的第一图像分量参考值中搜索得到的最大值和最小值,C_max和C_min表示L_max和L_min对应位置的相邻参考像素点所对应的待预测图像分量参考值。参见图7,其示出了当前块基于最大值和最小值构造预测模型的结构示意图;其中,横坐标表示当前块的第一图像分量参考值,纵坐标表示当前块的待预测图像分量参考值。根据L_max和L_min以及C_max和C_min,通过式(2)可以计算得到模型参数α和β,所构建的预测模型为C=αY+β;实际预测过程中,这里的Y表示当前块中其中一个像素点所对应的第一图像分量重建值,C表示当前块中该像素点所对应的待预测图像分量预测值。
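实际预测时按 C=αY+β 逐像素映射的过程,可以用如下草图示意(输入为当前块第一图像分量重建值的二维列表,函数名与变量名为示例假设):

```python
# 示意:根据分量线性模型 C = α·Y + β,将当前块每个像素的
# 第一图像分量重建值映射为待预测图像分量的预测值(映射值)。
def predict_block(recon, alpha, beta):
    return [[alpha * y + beta for y in row] for row in recon]
```

实际编解码中,映射值通常还需取整并裁剪到样值的有效位深范围内。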
可以理解的是,视频图像分量预测装置可以基于直接获取的当前块对应的一个或多个第一图像分量参考值,先进行最大和最小的第一图像分量参考值的选取,然后根据选取出的最大第一图像分量参考值和最小第一图像分量参考值对应的位置进行下采样(滤波),进而构建分量线性模型,这样节省了对当前块对应的像素点的下采样处理的工作量,即减少了滤波操作,从而降低了构建分量线性模型的复杂度,进而减少视频分量预测中的复杂度,提高了预测效率,提高了视频编解码效率。
基于前述的实施例,本申请实施例提供一种视频分量预测装置,该装置所包括的各单元、以及各单元所包括的各模块,可以通过视频分量预测装置中的处理器来实现;当然也可通过具体的逻辑电路实现;在实施的过程中,处理器可以为中央处理器、微处理器、数字信号处理器(DSP,Digital Signal Processor)或现场可编程门阵列等。
如图8所示,本申请实施例提供一种视频分量预测装置3,包括:
获取部分30,被配置为获取当前块的第一图像分量的参考值集,其中,所述第一图像分量的参考值集中包含一个或多个第一图像分量参考值;
确定部分31,被配置为从所述第一图像分量的参考值集中确定多个第一图像分量参考值;
滤波部分32,被配置为对所述多个第一图像分量参考值对应的像素点的样值分别进行第一滤波处理,得到多个滤波后的第一图像参考样值;
所述确定部分31,还被配置为确定与所述多个滤波后的第一图像参考样值对应的待预测图像分量参考值,其中,所述待预测图像分量是与所述第一图像分量不同的图像分量;以及根据所述多个滤波后的第一图像参考样值和所述待预测图像分量参考值,确定分量线性模型的参数,其中,所述分量线性模型表征将所述第一图像分量的样值映射为所述待预测图像分量的样值的线性映射关系;
所述滤波部分32,还被配置为根据所述分量线性模型,对所述当前块的所述第一图像分量的重建值进行映射处理,得到映射值;
预测部分33,被配置为根据所述映射值确定所述当前块的所述待预测图像分量的预测值。
在本申请的一些实施例中,所述确定部分31,还被配置为比较所述第一图像分量的参考值集中包含的参考值,确定最大第一图像分量参考值和最小第一图像分量参考值。
在本申请的一些实施例中,所述滤波部分32,还被配置为对所述最大第一图像分量参考值和所述最小第一图像分量参考值对应的像素点的样值分别进行所述第一滤波处理,得到滤波后的最大第一图像分量参考值和滤波后的最小第一图像分量参考值。
在本申请的一些实施例中,所述确定部分31,还被配置为确定与所述滤波后的最大第一图像分量参考值对应的最大待预测图像分量参考值,以及与所述滤波后的最小第一图像分量参考值对应的最小待预测图像分量参考值。
在本申请的一些实施例中,所述确定部分31,还被配置为根据所述滤波后的最大第一图像分量参考值、所述最大待预测图像分量参考值、所述滤波后的最小第一图像分量参考值和所述最小待预测图像分量参考值,确定分量线性模型的参数,其中,所述分量线性模型表征将所述第一图像分量的样值映射为所述待预测图像分量的样值的线性映射关系。
在本申请的一些实施例中,所述确定部分31,还被配置为确定位于所述当前块之外的一个或多个参考像素点;
所述获取部分30,还被配置为将所述一个或多个参考像素点作为所述一个或多个第一图像分量参考值。
在本申请的一些实施例中,所述确定部分31,还被配置为确定与所述当前块相邻的像素点为所述一个或多个参考像素点。
在本申请的一些实施例中,所述滤波部分32,还被配置为对所述第一图像分量的重建值进行第二滤波处理,得到所述第一图像分量的重建值的第二滤波值;以及根据所述分量线性模型,对所述第二滤波值进行映射处理,得到所述映射值。
在本申请的一些实施例中,所述第二滤波处理为下采样滤波或低通滤波。
在本申请的一些实施例中,所述预测部分33,还被配置为将所述映射值设置为所述当前块的所述待预测图像分量的预测值。
在本申请的一些实施例中,所述滤波部分32,还被配置为对所述映射值进行第三滤波处理,得到映射值的第三滤波值;
所述预测部分33,还被配置为将所述第三滤波值设置为所述当前块的所述待预测图像分量的预测值。
在本申请的一些实施例中,所述第三滤波处理为低通滤波。
在本申请的一些实施例中,所述确定部分31,还被配置为获取所述当前块的待预测图像分量参考值;以及在所述待预测图像分量参考值中,确定所述最大待预测图像分量参考值,以及所述最小待预测图像分量参考值。
在本申请的一些实施例中,所述滤波部分32,还被配置为对所述待预测图像分量参考值进行第四滤波处理,得到待预测图像分量重建值。
在本申请的一些实施例中,所述第四滤波处理为低通滤波。
在本申请的一些实施例中,所述确定部分31,还被配置为将与所述最大第一图像分量参考值对应的第一图像分量参考值的采样点位置,转换为第一采样点位置;将所述最大待预测图像分量参考值设置为在所述待预测图像分量参考值中位于所述第一采样点位置上的参考值;将与所述最小第一图像分量参考值对应的第一图像分量参考值的采样点位置,转换为第二采样点位置;以及将所述最小待预测图像分量参考值设置为在所述待预测图像分量参考值中位于所述第二采样点位置上的参考值。
在本申请的一些实施例中,所述确定部分31,还被配置为采用所述滤波后的最大第一图像分量参考值、所述最大待预测图像分量参考值和预设初始线性模型,构建第一子分量线性模型;采用所述滤波后的最小第一图像分量参考值、所述最小待预测图像分量参考值和所述预设初始线性模型,构建第二子分量线性模型;基于所述第一子分量线性模型和所述第二子分量线性模型,得到模型参数;采用所述模型参数和所述预设初始线性模型,构建所述分量线性模型。
在本申请的一些实施例中,所述确定部分31,还被配置为所述分量线性模型的参数包括乘性因子和加性偏移量;计算所述最大待预测图像分量参考值与所述最小待预测图像分量参考值之间的第一差值;计算所述最大第一图像分量参考值与所述最小第一图像分量参考值之间的第二差值;将所述乘性因子设置为所述第一差值与所述第二差值的比值;计算所述最大第一图像分量参考值与所述乘性因子之间的第一乘积,将所述加性偏移量设置为所述最大待预测图像分量参考值与所述第一乘积之间的差值;或者,计算所述最小第一图像分量参考值与所述乘性因子之间的第二乘积,将所述加性偏移量设置为所述最小待预测图像分量参考值与所述第二乘积之间的差值。
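上述乘性因子与加性偏移量的推导步骤,可以按原文顺序写成如下草图(函数名与变量命名为示例假设):

```python
# 示意:第一差值除以第二差值得到乘性因子,
# 再由最大值一侧(或最小值一侧)的参考值对推出加性偏移量。
def derive_factor_offset(c_max, c_min, l_max, l_min):
    first_diff = c_max - c_min       # 第一差值
    second_diff = l_max - l_min      # 第二差值
    factor = first_diff / second_diff        # 乘性因子
    offset = c_max - factor * l_max          # 加性偏移量(最大值一侧)
    return factor, offset
```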
在本申请的一些实施例中,所述第一图像分量是亮度分量,所述待预测图像分量是第一或第二色度分量;或者,
所述第一图像分量是所述第一色度分量,所述待预测图像分量是所述亮度分量或所述第二色度分量;或者,
所述第一图像分量是所述第二色度分量,所述待预测图像分量是所述亮度分量或所述第一色度分量;或者,
所述第一图像分量是第一色彩分量,所述待预测图像分量是第二色彩分量或第三色彩分量;
或者,
所述第一图像分量是所述第二色彩分量,所述待预测图像分量是所述第一色彩分量或所述第三色彩分量;或者,
所述第一图像分量是所述第三色彩分量,所述待预测图像分量是所述第二色彩分量或所述第一色彩分量。
在本申请的一些实施例中,所述第一色彩分量为红分量,所述第二色彩分量为绿分量,所述第三色彩分量为蓝分量。
在本申请的一些实施例中,所述第一滤波处理为下采样滤波或低通滤波。
需要说明的是,本申请实施例中,如果以软件功能模块的形式实现上述的视频分量预测方法,并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得电子设备(可以是手机、平板电脑、个人计算机、个人数字助理、导航仪、数字电话、视频电话、电视机、传感设备、服务器等)执行本申请各个实施例所述方法的全部或部分。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。这样,本申请实施例不限制于任何特定的硬件和软件结合。
在实际应用中,如图9所示,本申请实施例提供了一种视频分量预测装置,包括:
存储器34,用于存储可执行视频分量预测指令;
处理器35,用于执行所述存储器34中存储的可执行视频分量预测指令时,实现上述实施例中提供的视频分量预测方法中的步骤。
相应的,本申请实施例提供了一种计算机可读存储介质,其上存储有视频分量预测指令,该视频分量预测指令被处理器35执行时实现上述实施例中提供的视频分量预测方法中的步骤。
这里需要指出的是:以上存储介质和装置实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本申请存储介质和装置实施例中未披露的技术细节,请参照本申请方法实施例的描述而理解。
以上所述,仅为本申请的实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。
工业实用性
在本申请实施例中,视频分量预测装置可以基于直接获取的当前块对应的第一图像分量的参考值集,先确定多个第一图像分量参考值,然后根据确定出的多个第一图像分量参考值对应的位置进行滤波处理,进而构建分量线性模型,这样节省了对当前块对应的像素点的滤波处理的工作量,即减少了滤波操作,从而降低了构建分量线性模型的复杂度,进而减少视频分量预测中的复杂度,提高了预测效率,提高了视频编解码效率。

Claims (44)

  1. 一种图像分量预测方法,其中,包括:
    获取当前块的第一图像分量的参考值集;
    从所述第一图像分量的参考值集中确定多个第一图像分量参考值;
    对所述多个第一图像分量参考值对应的像素点的样值分别进行第一滤波处理,得到多个滤波后的第一图像参考样值;
    确定与所述多个滤波后的第一图像参考样值对应的待预测图像分量参考值,其中,所述待预测图像分量是与所述第一图像分量不同的图像分量;
    根据所述多个滤波后的第一图像参考样值和所述待预测图像分量参考值,确定分量线性模型的参数,其中,所述分量线性模型表征将所述第一图像分量的样值映射为所述待预测图像分量的样值的线性映射关系;
    根据所述分量线性模型,对所述当前块的所述第一图像分量的重建值进行映射处理,得到映射值;
    根据所述映射值确定所述当前块的所述待预测图像分量的预测值。
  2. 根据权利要求1所述的方法,其中,所述从所述第一图像分量的参考值集中确定多个第一图像分量参考值,包括:
    比较所述第一图像分量的参考值集中包含的参考值,确定最大第一图像分量参考值和最小第一图像分量参考值。
  3. 根据权利要求2所述的方法,其中,所述对所述多个第一图像分量参考值对应的像素点的样值分别进行第一滤波处理,得到多个滤波后的第一图像参考样值,包括:
    对所述最大第一图像分量参考值和所述最小第一图像分量参考值对应的像素点的样值分别进行所述第一滤波处理,得到滤波后的最大第一图像分量参考值和滤波后的最小第一图像分量参考值。
  4. 根据权利要求3所述的方法,其中,所述确定与所述多个滤波后的第一图像参考样值对应的待预测图像分量参考值,包括:
    确定与所述滤波后的最大第一图像分量参考值对应的最大待预测图像分量参考值,以及与所述滤波后的最小第一图像分量参考值对应的最小待预测图像分量参考值。
  5. 根据权利要求4所述的方法,其中,所述根据所述多个滤波后的第一图像参考样值和所述待预测图像分量参考值,确定分量线性模型的参数,包括:
    根据所述滤波后的最大第一图像分量参考值、所述最大待预测图像分量参考值、所述滤波后的最小第一图像分量参考值和所述最小待预测图像分量参考值,确定分量线性模型的参数,其中,所述分量线性模型表征将所述第一图像分量的样值映射为所述待预测图像分量的样值的线性映射关系。
  6. 根据权利要求1所述的方法,其中,所述获取当前块的第一图像分量的参考值集,包括:
    确定位于所述当前块之外的一个或多个参考像素点,并将所述一个或多个参考像素点作为所述一个或多个第一图像分量参考值。
  7. 根据权利要求6所述的方法,其中,所述确定位于所述当前块之外的一个或多个参考像素点,包括:
    确定与所述当前块相邻的像素点为所述一个或多个参考像素点。
  8. 根据权利要求1所述的方法,其中,所述根据所述分量线性模型,对所述当前块的所述第一图像分量的重建值进行映射处理,得到映射值,包括:
    对所述第一图像分量的重建值进行第二滤波处理,得到所述第一图像分量的重建值的第二滤波值;
    根据所述分量线性模型,对所述第二滤波值进行映射处理,得到所述映射值。
  9. 根据权利要求8所述的方法,其中,
    所述第二滤波处理为下采样滤波或低通滤波。
  10. 根据权利要求1所述的方法,其中,所述根据所述映射值确定所述当前块的所述待预测图像分量的预测值,包括:
    将所述映射值设置为所述当前块的所述待预测图像分量的预测值。
  11. 根据权利要求1所述的方法,其中,所述根据所述映射值确定所述当前块的所述待预测图像分量的预测值,包括:
    对所述映射值进行第三滤波处理,得到映射值的第三滤波值;
    将所述第三滤波值设置为所述当前块的所述待预测图像分量的预测值。
  12. 根据权利要求11所述的方法,其中,
    所述第三滤波处理为低通滤波。
  13. 根据权利要求4所述的方法,其中,所述确定与所述滤波后的最大第一图像分量参考值对应的最大待预测图像分量参考值,以及与所述滤波后的最小第一图像分量参考值对应的最小待预测图像分量参考值,包括:
    获取所述当前块的待预测图像分量参考值;
    在所述待预测图像分量参考值中,确定所述最大待预测图像分量参考值,以及所述最小待预测图像分量参考值。
  14. 根据权利要求13所述的方法,其中,所述方法还包括:
    对所述待预测图像分量参考值进行第四滤波处理,得到待预测图像分量重建值。
  15. 根据权利要求14所述的方法,其中,
    所述第四滤波处理为低通滤波。
  16. 根据权利要求13至15中任一项所述的方法,其中,在所述待预测图像分量参考值中,确定所述最大待预测图像分量参考值,以及所述最小待预测图像分量参考值,包括:
    将与所述最大第一图像分量参考值对应的第一图像分量参考值的采样点位置,转换为第一采样点位置;
    将所述最大待预测图像分量参考值设置为在所述待预测图像分量参考值中位于所述第一采样点位置上的参考值;
    将与所述最小第一图像分量参考值对应的第一图像分量参考值的采样点位置,转换为第二采样点位置;
    将所述最小待预测图像分量参考值设置为在所述待预测图像分量参考值中位于所述第二采样点位置上的参考值。
  17. 根据权利要求5所述的方法,其中,所述根据所述滤波后的最大第一图像分量参考值、所述最大待预测图像分量参考值、所述滤波后的最小第一图像分量参考值和所述最小待预测图像分量参考值,确定分量线性模型的参数,包括:
    采用所述滤波后的最大第一图像分量参考值、所述最大待预测图像分量参考值和预设初始线性模型,构建第一子分量线性模型;
    采用所述滤波后的最小第一图像分量参考值、所述最小待预测图像分量参考值和所述预设初始线性模型,构建第二子分量线性模型;
    基于所述第一子分量线性模型和所述第二子分量线性模型,得到模型参数;
    采用所述模型参数和所述预设初始线性模型,构建所述分量线性模型。
  18. 根据权利要求5所述的方法,其中,所述根据所述滤波后的最大第一图像分量参考值、所述最大待预测图像分量参考值、所述滤波后的最小第一图像分量参考值和所述最小待预测图像分量参考值,确定分量线性模型的参数,包括:
    所述分量线性模型的参数包括乘性因子和加性偏移量;
    计算所述最大待预测图像分量参考值与所述最小待预测图像分量参考值之间的第一差值;
    计算所述最大第一图像分量参考值与所述最小第一图像分量参考值之间的第二差值;
    将所述乘性因子设置为所述第一差值与所述第二差值的比值;
    计算所述最大第一图像分量参考值与所述乘性因子之间的第一乘积,将所述加性偏移量设置为所述最大待预测图像分量参考值与所述第一乘积之间的差值;或者,计算所述最小第一图像分量参考值与所述乘性因子之间的第二乘积,将所述加性偏移量设置为所述最小待预测图像分量参考值与所述第二乘积之间的差值。
  19. 根据权利要求1所述的方法,其中,所述方法还包括:
    所述第一图像分量是亮度分量,所述待预测图像分量是第一或第二色度分量;或者,
    所述第一图像分量是所述第一色度分量,所述待预测图像分量是所述亮度分量或所述第二色度分量;或者,
    所述第一图像分量是所述第二色度分量,所述待预测图像分量是所述亮度分量或所述第一色度分量;或者,
    所述第一图像分量是第一色彩分量,所述待预测图像分量是第二色彩分量或第三色彩分量;或者,
    所述第一图像分量是所述第二色彩分量,所述待预测图像分量是所述第一色彩分量或所述第三色彩分量;或者,
    所述第一图像分量是所述第三色彩分量,所述待预测图像分量是所述第二色彩分量或所述第一色彩分量。
  20. 如权利要求19所述的方法,其中,所述方法还包括:
    所述第一色彩分量为红分量,所述第二色彩分量为绿分量,所述第三色彩分量为蓝分量。
  21. 根据权利要求1所述的方法,其中,
    所述第一滤波处理为下采样滤波或低通滤波。
  22. 一种视频分量预测装置,其中,包括:
    获取部分,被配置为获取当前块的第一图像分量的参考值集;
    确定部分,被配置为从所述第一图像分量的参考值集中确定多个第一图像分量参考值;
    滤波部分,被配置为对所述多个第一图像分量参考值对应的像素点的样值分别进行第一滤波处理,得到多个滤波后的第一图像参考样值;
    所述确定部分,还被配置为确定与所述多个滤波后的第一图像参考样值对应的待预测图像分量参考值,其中,所述待预测图像分量是与所述第一图像分量不同的图像分量;以及根据所述多个滤波后的第一图像参考样值和所述待预测图像分量参考值,确定分量线性模型的参数,其中,所述分量线性模型表征将所述第一图像分量的样值映射为所述待预测图像分量的样值的线性映射关系;
    所述滤波部分,还被配置为根据所述分量线性模型,对所述当前块的所述第一图像分量的重建值进行映射处理,得到映射值;
    预测部分,被配置为根据所述映射值确定所述当前块的所述待预测图像分量的预测值。
  23. 根据权利要求22所述的装置,其中,
    所述确定部分,还被配置为比较所述第一图像分量的参考值集中包含的参考值,确定最大第一图像分量参考值和最小第一图像分量参考值。
  24. 根据权利要求23所述的装置,其中,
    所述滤波部分,还被配置为对所述最大第一图像分量参考值和所述最小第一图像分量参考值对应的像素点的样值分别进行所述第一滤波处理,得到滤波后的最大第一图像分量参考值和滤波后的最小第一图像分量参考值。
  25. 根据权利要求24所述的装置,其中,
    所述确定部分,还被配置为确定与所述滤波后的最大第一图像分量参考值对应的最大待预测图像分量参考值,以及与所述滤波后的最小第一图像分量参考值对应的最小待预测图像分量参考值。
  26. 根据权利要求25所述的装置,其中,
    所述确定部分,还被配置为根据所述滤波后的最大第一图像分量参考值、所述最大待预测图像分量参考值、所述滤波后的最小第一图像分量参考值和所述最小待预测图像分量参考值,确定分量线性模型的参数,其中,所述分量线性模型表征将所述第一图像分量的样值映射为所述待预测图像分量的样值的线性映射关系。
  27. 根据权利要求22所述的装置,其中,
    所述确定部分,还被配置为确定位于所述当前块之外的一个或多个参考像素点;
    所述获取部分,还被配置为将所述一个或多个参考像素点作为所述一个或多个第一图像分量参考值。
  28. 根据权利要求27所述的装置,其中,
    所述确定部分,还被配置为确定与所述当前块相邻的像素点为所述一个或多个参考像素点。
  29. 根据权利要求22所述的装置,其中,
    所述滤波部分,还被配置为对所述第一图像分量的重建值进行第二滤波处理,得到所述第一图像分量的重建值的第二滤波值;以及根据所述分量线性模型,对所述第二滤波值进行映射处理,得到所述映射值。
  30. 根据权利要求29所述的装置,其中,
    所述第二滤波处理为下采样滤波或低通滤波。
  31. 根据权利要求22所述的装置,其中,
    所述预测部分,还被配置为将所述映射值设置为所述当前块的所述待预测图像分量的预测值。
  32. 根据权利要求22所述的装置,其中,
    所述滤波部分,还被配置为对所述映射值进行第三滤波处理,得到映射值的第三滤波值;
    所述预测部分,还被配置为将所述第三滤波值设置为所述当前块的所述待预测图像分量的预测值。
  33. 根据权利要求32所述的装置,其中,
    所述第三滤波处理为低通滤波。
  34. 根据权利要求25所述的装置,其中,
    所述确定部分,还被配置为获取所述当前块的待预测图像分量参考值;以及在所述待预测图像分量参考值中,确定所述最大待预测图像分量参考值,以及所述最小待预测图像分量参考值。
  35. 根据权利要求34所述的装置,其中,
    所述滤波部分,还被配置为对所述待预测图像分量参考值进行第四滤波处理,得到待预测图像分量重建值。
  36. 根据权利要求35所述的装置,其中,
    所述第四滤波处理为低通滤波。
  37. 根据权利要求34至36任一项所述的装置,其中,
    所述确定部分,还被配置为将与所述最大第一图像分量参考值对应的第一图像分量参考值的采样点位置,转换为第一采样点位置;将所述最大待预测图像分量参考值设置为在所述待预测图像分量参考值中位于所述第一采样点位置上的参考值;将与所述最小第一图像分量参考值对应的第一图像分量参考值的采样点位置,转换为第二采样点位置;以及将所述最小待预测图像分量参考值设置为在所述待预测图像分量参考值中位于所述第二采样点位置上的参考值。
  38. 根据权利要求26所述的装置,其中,
    所述确定部分,还被配置为采用所述滤波后的最大第一图像分量参考值、所述最大待预测图像分量参考值和预设初始线性模型,构建第一子分量线性模型;采用所述滤波后的最小第一图像分量参考值、所述最小待预测图像分量参考值和所述预设初始线性模型,构建第二子分量线性模型;基于所述第一子分量线性模型和所述第二子分量线性模型,得到模型参数;采用所述模型参数和所述预设初始线性模型,构建所述分量线性模型。
  39. 根据权利要求26所述的装置,其中,
    所述确定部分,还被配置为所述分量线性模型的参数包括乘性因子和加性偏移量;计算所述最大待预测图像分量参考值与所述最小待预测图像分量参考值之间的第一差值;计算所述最大第一图像分量参考值与所述最小第一图像分量参考值之间的第二差值;将所述乘性因子设置为所述第一差值与所述第二差值的比值;计算所述最大第一图像分量参考值与所述乘性因子之间的第一乘积,将所述加性偏移量设置为所述最大待预测图像分量参考值与所述第一乘积之间的差值;或者,计算所述最小第一图像分量参考值与所述乘性因子之间的第二乘积,将所述加性偏移量设置为所述最小待预测图像分量参考值与所述第二乘积之间的差值。
  40. 根据权利要求22所述的装置,其中,
    所述第一图像分量是亮度分量,所述待预测图像分量是第一或第二色度分量;或者,
    所述第一图像分量是所述第一色度分量,所述待预测图像分量是所述亮度分量或所述第二色度分量;或者,
    所述第一图像分量是所述第二色度分量,所述待预测图像分量是所述亮度分量或所述第一色度分量;或者,
    所述第一图像分量是第一色彩分量,所述待预测图像分量是第二色彩分量或第三色彩分量;或者,
    所述第一图像分量是所述第二色彩分量,所述待预测图像分量是所述第一色彩分量或所述第三色彩分量;或者,
    所述第一图像分量是所述第三色彩分量,所述待预测图像分量是所述第二色彩分量或所述第一色彩分量。
  41. 根据权利要求40所述的装置,其中,
    所述第一色彩分量为红分量,所述第二色彩分量为绿分量,所述第三色彩分量为蓝分量。
  42. 根据权利要求22所述的装置,其中,
    所述第一滤波处理为下采样滤波或低通滤波。
  43. 一种视频分量预测装置,其中,包括:
    存储器,用于存储可执行视频分量预测指令;
    处理器,用于执行所述存储器中存储的可执行视频分量预测指令时,实现权利要求1至21任一项所述的方法。
  44. 一种计算机可读存储介质,其中,存储有可执行视频分量预测指令,所述可执行视频分量预测指令被处理器执行时,实现权利要求1至21任一项所述的方法。
PCT/CN2019/110633 2018-10-12 2019-10-11 视频图像分量预测方法及装置、计算机存储介质 Ceased WO2020073990A1 (zh)

Priority Applications (36)

Application Number Priority Date Filing Date Title
CN202110236395.5A CN113068030B (zh) 2018-10-12 2019-10-11 视频图像分量预测方法及装置、计算机存储介质
BR112021006138-0A BR112021006138A2 (pt) 2018-10-12 2019-10-11 método de predição de componente de imagem aplicado a um decodificador, dispositivo de predição de componente de vídeo, aplicado a um decodificador, e mídia de armazenamento legível por computador
AU2019357929A AU2019357929B2 (en) 2018-10-12 2019-10-11 Video image component prediction method and apparatus, and computer storage medium
MYPI2021001809A MY208324A (en) 2018-10-12 2019-10-11 Video image component prediction method and apparatus, and computer storage medium
KR1020217014094A KR20210070368A (ko) 2018-10-12 2019-10-11 비디오 이미지 요소 예측 방법 및 장치, 컴퓨터 저장 매체
EP19870849.7A EP3843399B1 (en) 2018-10-12 2019-10-11 Video image component prediction method and apparatus, and computer storage medium
CN201980041795.1A CN112335245A (zh) 2018-10-12 2019-10-11 视频图像分量预测方法及装置、计算机存储介质
MX2021004090A MX2021004090A (es) 2018-10-12 2019-10-11 Metodo y aparato de prediccion de componente de imagen de video, y medio de almacenamiento por computadora.
JP2021517944A JP7518065B2 (ja) 2018-10-12 2019-10-11 ビデオ画像成分予測方法および装置、コンピュータ記憶媒体
IL281832A IL281832B1 (en) 2018-10-12 2019-10-11 Method and apparatus for predicting video image components and computer storage medium
KR1020257000327A KR20250008806A (ko) 2018-10-12 2019-10-11 비디오 이미지 요소 예측 방법 및 장치, 컴퓨터 저장 매체
CA3114816A CA3114816C (en) 2018-10-12 2019-10-11 Video image component prediction method and apparatus, and computer storage medium
KR1020257000316A KR20250011712A (ko) 2018-10-12 2019-10-11 비디오 이미지 요소 예측 방법 및 장치, 컴퓨터 저장 매체
CN202511250548.6A CN121099035A (zh) 2018-10-12 2019-10-11 视频图像分量预测方法及装置、计算机存储介质
CN202510071832.0A CN119835417A (zh) 2018-10-12 2019-10-11 视频图像分量预测方法及装置、计算机存储介质
SG11202103312YA SG11202103312YA (en) 2018-10-12 2019-10-11 Video image component prediction method and apparatus, and computer storage medium
CN202510071803.4A CN119835416A (zh) 2018-10-12 2019-10-11 视频图像分量预测方法及装置、计算机存储介质
PH12021550708A PH12021550708A1 (en) 2018-10-12 2021-03-30 Video image component prediction method and apparatus, and computer storage medium
ZA2021/02207A ZA202102207B (en) 2018-10-12 2021-03-31 Video image component prediction method and apparatus, and computer storage medium
US17/220,007 US11388397B2 (en) 2018-10-12 2021-04-01 Video picture component prediction method and apparatus, and computer storage medium
MX2025000211A MX2025000211A (es) 2018-10-12 2021-04-08 Metodo y aparato de prediccion de componente de imagen de video, y medio de almacenamiento por computadora
MX2025000213A MX2025000213A (es) 2018-10-12 2021-04-08 Metodo y aparato de prediccion de componente de imagen de video, y medio de almacenamiento por computadora
MX2025000212A MX2025000212A (es) 2018-10-12 2021-04-08 Metodo y aparato de prediccion de componente de imagen de video, y medio de almacenamiento por computadora
MX2025000210A MX2025000210A (es) 2018-10-12 2021-04-08 Metodo y aparato de prediccion de componente de imagen de video, y medio de almacenamiento por computadora
US17/658,787 US11876958B2 (en) 2018-10-12 2022-04-11 Video picture component prediction method and apparatus, and computer storage medium
US18/523,440 US12323584B2 (en) 2018-10-12 2023-11-29 Video picture component prediction method and apparatus, and computer storage medium
JP2024108422A JP2024129129A (ja) 2018-10-12 2024-07-04 ビデオ画像成分予測方法および装置、コンピュータ記憶媒体
AU2024287236A AU2024287236A1 (en) 2018-10-12 2024-12-27 Video image component prediction method and apparatus, and computer storage medium
AU2024287239A AU2024287239A1 (en) 2018-10-12 2024-12-27 Video image component prediction method and apparatus, and computer storage medium
AU2024287238A AU2024287238A1 (en) 2018-10-12 2024-12-27 Video image component prediction method and apparatus, and computer storage medium
AU2024287237A AU2024287237A1 (en) 2018-10-12 2024-12-27 Video image component prediction method and apparatus, and computer storage medium
US19/073,537 US20250211729A1 (en) 2018-10-12 2025-03-07 Video picture component prediction method and apparatus, and computer storage medium
JP2025146779A JP2025172916A (ja) 2018-10-12 2025-09-04 ビデオ画像成分予測方法および装置、コンピュータ記憶媒体
IL323377A IL323377A (en) 2018-10-12 2025-09-15 Method and apparatus for predicting video image components and computer storage medium
IL323378A IL323378A (en) 2018-10-12 2025-09-15 Method and apparatus for predicting video image components and computer storage medium
IL323383A IL323383A (en) 2018-10-12 2025-09-15 Method and apparatus for predicting video image components and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862744747P 2018-10-12 2018-10-12
US62/744,747 2018-10-12

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/220,007 Continuation US11388397B2 (en) 2018-10-12 2021-04-01 Video picture component prediction method and apparatus, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2020073990A1 true WO2020073990A1 (zh) 2020-04-16

Family

ID=70164470

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/110633 Ceased WO2020073990A1 (zh) 2018-10-12 2019-10-11 视频图像分量预测方法及装置、计算机存储介质

Country Status (15)

Country Link
US (4) US11388397B2 (zh)
EP (1) EP3843399B1 (zh)
JP (3) JP7518065B2 (zh)
KR (3) KR20250011712A (zh)
CN (5) CN121099035A (zh)
AU (5) AU2019357929B2 (zh)
BR (1) BR112021006138A2 (zh)
CA (1) CA3114816C (zh)
IL (4) IL281832B1 (zh)
MX (5) MX2021004090A (zh)
MY (1) MY208324A (zh)
PH (1) PH12021550708A1 (zh)
SG (1) SG11202103312YA (zh)
WO (1) WO2020073990A1 (zh)
ZA (1) ZA202102207B (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7518065B2 (ja) * 2018-10-12 2024-07-17 オッポ広東移動通信有限公司 ビデオ画像成分予測方法および装置、コンピュータ記憶媒体
WO2023039859A1 (zh) * 2021-09-17 2023-03-23 Oppo广东移动通信有限公司 视频编解码方法、设备、系统、及存储介质
WO2023197190A1 (zh) * 2022-04-12 2023-10-19 Oppo广东移动通信有限公司 编解码方法、装置、编码设备、解码设备以及存储介质
WO2025007276A1 (zh) * 2023-07-04 2025-01-09 Oppo广东移动通信有限公司 编解码方法、码流、编码器、解码器以及存储介质
WO2025063778A1 (ko) * 2023-09-21 2025-03-27 엘지전자 주식회사 영상 인코딩/디코딩 방법 및 장치, 그리고 비트스트림을 저장한 기록 매체
WO2025147830A1 (zh) * 2024-01-08 2025-07-17 Oppo广东移动通信有限公司 编解码方法、码流、编码器、解码器以及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018039596A1 (en) * 2016-08-26 2018-03-01 Qualcomm Incorporated Unification of parameters derivation procedures for local illumination compensation and cross-component linear model prediction
WO2018045207A1 (en) * 2016-08-31 2018-03-08 Qualcomm Incorporated Cross-component filter
WO2018061588A1 (ja) * 2016-09-27 2018-04-05 株式会社ドワンゴ 画像符号化装置、画像符号化方法、及び画像符号化プログラム、並びに、画像復号装置、画像復号方法、及び画像復号プログラム
WO2018132710A1 (en) * 2017-01-13 2018-07-19 Qualcomm Incorporated Coding video data using derived chroma mode

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013110502A (ja) * 2011-11-18 2013-06-06 Sony Corp 画像処理装置及び画像処理方法
EP2805496B1 (en) * 2012-01-19 2016-12-21 Huawei Technologies Co., Ltd. Reference pixel reduction for intra lm prediction
CN103379321B (zh) * 2012-04-16 2017-02-01 华为技术有限公司 视频图像分量的预测方法和装置
RU2654129C2 (ru) 2013-10-14 2018-05-16 МАЙКРОСОФТ ТЕКНОЛОДЖИ ЛАЙСЕНСИНГ, ЭлЭлСи Функциональные возможности режима внутреннего предсказания с блочным копированием для кодирования и декодирования видео и изображений
US10425648B2 (en) * 2015-09-29 2019-09-24 Qualcomm Incorporated Video intra-prediction using position-dependent prediction combination for video coding
CN117201781A (zh) * 2015-10-16 2023-12-08 中兴通讯股份有限公司 编码处理、解码处理方法及装置、存储介质
RU2696552C1 (ru) * 2015-11-17 2019-08-02 Хуавей Текнолоджиз Ко., Лтд. Способ и устройство для видеокодирования
US20170150176A1 (en) 2015-11-25 2017-05-25 Qualcomm Incorporated Linear-model prediction with non-square prediction units in video coding
US10652575B2 (en) * 2016-09-15 2020-05-12 Qualcomm Incorporated Linear model chroma intra prediction for video coding
CN114430486A (zh) * 2017-04-28 2022-05-03 夏普株式会社 图像解码装置以及图像编码装置
US11190799B2 (en) * 2017-06-21 2021-11-30 Lg Electronics Inc. Intra-prediction mode-based image processing method and apparatus therefor
WO2019004283A1 (ja) * 2017-06-28 2019-01-03 シャープ株式会社 動画像符号化装置及び動画像復号装置
CN107580222B (zh) 2017-08-01 2020-02-14 北京交通大学 一种基于线性模型预测的图像或视频编码方法
JP2021010046A (ja) * 2017-10-06 2021-01-28 シャープ株式会社 画像符号化装置及び画像復号装置
KR20190083956A (ko) * 2018-01-05 2019-07-15 에스케이텔레콤 주식회사 YCbCr간의 상관 관계를 이용한 영상 부호화/복호화 방법 및 장치
GB2571313B (en) * 2018-02-23 2022-09-21 Canon Kk New sample sets and new down-sampling schemes for linear component sample prediction
GB2571312B (en) * 2018-02-23 2020-05-27 Canon Kk New sample sets and new down-sampling schemes for linear component sample prediction
WO2019201232A1 (en) * 2018-04-16 2019-10-24 Huawei Technologies Co., Ltd. Intra-prediction using cross-component linear model
CN116405687A (zh) * 2018-07-12 2023-07-07 华为技术有限公司 视频译码中使用交叉分量线性模型进行帧内预测
WO2020015433A1 (en) * 2018-07-15 2020-01-23 Huawei Technologies Co., Ltd. Method and apparatus for intra prediction using cross-component linear model
SG11202100412SA (en) * 2018-07-16 2021-02-25 Huawei Tech Co Ltd Video encoder, video decoder, and corresponding encoding and decoding methods
WO2020031902A1 (ja) * 2018-08-06 2020-02-13 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 符号化装置、復号装置、符号化方法および復号方法
JP7424982B2 (ja) * 2018-08-15 2024-01-30 日本放送協会 画像符号化装置、画像復号装置、及びプログラム
CN117896531A (zh) * 2018-09-05 2024-04-16 华为技术有限公司 色度块预测方法以及设备
CN110896480B (zh) * 2018-09-12 2024-09-10 北京字节跳动网络技术有限公司 交叉分量线性模型中的尺寸相关的下采样
EP4228262A1 (en) * 2018-10-08 2023-08-16 Beijing Dajia Internet Information Technology Co., Ltd. Simplifications of cross-component linear model
JP7518065B2 (ja) * 2018-10-12 2024-07-17 オッポ広東移動通信有限公司 ビデオ画像成分予測方法および装置、コンピュータ記憶媒体

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018039596A1 (en) * 2016-08-26 2018-03-01 Qualcomm Incorporated Unification of parameters derivation procedures for local illumination compensation and cross-component linear model prediction
WO2018045207A1 (en) * 2016-08-31 2018-03-08 Qualcomm Incorporated Cross-component filter
WO2018061588A1 (ja) * 2016-09-27 2018-04-05 株式会社ドワンゴ 画像符号化装置、画像符号化方法、及び画像符号化プログラム、並びに、画像復号装置、画像復号方法、及び画像復号プログラム
WO2018132710A1 (en) * 2017-01-13 2018-07-19 Qualcomm Incorporated Coding video data using derived chroma mode

Also Published As

Publication number Publication date
KR20250008806A (ko) 2025-01-15
IL323383A (en) 2025-11-01
EP3843399A4 (en) 2021-10-20
CN119835416A (zh) 2025-04-15
US20210218958A1 (en) 2021-07-15
MX2025000212A (es) 2025-02-10
JP2022503990A (ja) 2022-01-12
US20250211729A1 (en) 2025-06-26
AU2024287236A1 (en) 2025-01-23
CN113068030A (zh) 2021-07-02
PH12021550708A1 (en) 2021-11-03
EP3843399C0 (en) 2023-09-06
CN119835417A (zh) 2025-04-15
IL323378A (en) 2025-11-01
KR20210070368A (ko) 2021-06-14
EP3843399B1 (en) 2023-09-06
MX2025000210A (es) 2025-02-10
IL281832B1 (en) 2025-10-01
AU2024287238A1 (en) 2025-01-23
MX2025000213A (es) 2025-02-10
JP2024129129A (ja) 2024-09-26
AU2024287239A1 (en) 2025-01-23
CN113068030B (zh) 2023-01-03
BR112021006138A2 (pt) 2021-06-29
IL281832A (en) 2021-05-31
AU2019357929B2 (en) 2024-10-03
CA3114816C (en) 2023-08-22
IL323377A (en) 2025-11-01
AU2019357929A1 (en) 2021-04-29
ZA202102207B (en) 2022-07-27
JP7518065B2 (ja) 2024-07-17
US11876958B2 (en) 2024-01-16
CA3114816A1 (en) 2020-04-16
MY208324A (en) 2025-04-30
KR20250011712A (ko) 2025-01-21
US12323584B2 (en) 2025-06-03
CN121099035A (zh) 2025-12-09
US20220239901A1 (en) 2022-07-28
AU2024287237A1 (en) 2025-01-23
CN112335245A (zh) 2021-02-05
MX2021004090A (es) 2021-06-08
JP2025172916A (ja) 2025-11-26
SG11202103312YA (en) 2021-04-29
US11388397B2 (en) 2022-07-12
MX2025000211A (es) 2025-02-10
EP3843399A1 (en) 2021-06-30
US20240098255A1 (en) 2024-03-21

Similar Documents

Publication Publication Date Title
CN113068030B (zh) 视频图像分量预测方法及装置、计算机存储介质
JP2025066835A (ja) 情報処理方法および装置、機器、記憶媒体
RU2800683C2 (ru) Способ и устройство предсказывания компонента видеоизображения и компьютерный носитель данных
JP2023522845A (ja) 参照領域を使用する映像符号化の方法及びシステム
CN116546216A (zh) 解码预测方法、装置及计算机存储介质
WO2020181503A1 (zh) 帧内预测方法及装置、计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19870849

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 3114816

Country of ref document: CA

ENP Entry into the national phase

Ref document number: 2021517944

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2019870849

Country of ref document: EP

Effective date: 20210325

NENP Non-entry into the national phase

Ref country code: DE

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112021006138

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 2019357929

Country of ref document: AU

Date of ref document: 20191011

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20217014094

Country of ref document: KR

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2021111917

Country of ref document: RU

ENP Entry into the national phase

Ref document number: 112021006138

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20210330

WWG Wipo information: grant in national office

Ref document number: MX/A/2021/004090

Country of ref document: MX

WWD Wipo information: divisional of initial pct application

Ref document number: 323383

Country of ref document: IL