WO2020073990A1 - Video image component prediction method and apparatus, and computer storage medium - Google Patents
Video image component prediction method and apparatus, and computer storage medium
- Publication number: WO2020073990A1 (application PCT/CN2019/110633)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image component
- component
- reference value
- predicted
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- All classifications fall under H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals. The specific classifications are:
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/107—Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/176—Methods or arrangements using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/182—Methods or arrangements using adaptive coding characterised by the coding unit, the unit being a pixel
- H04N19/186—Methods or arrangements using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/59—Methods or arrangements using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
Definitions
- the embodiments of the present application relate to the technical field of video encoding and decoding, and in particular, to a method and device for predicting a video image component, and a computer storage medium.
- H.265/High Efficiency Video Coding (HEVC) is the latest international video compression standard.
- The compression performance of H.265/HEVC is higher than that of the previous-generation video coding standard, H.264/Advanced Video Coding (AVC).
- AVC: Advanced Video Coding
- VVC: Versatile Video Coding
- the embodiments of the present application provide a video image component prediction method and device, and a computer storage medium, which can reduce the complexity of video component prediction, improve prediction efficiency, and thereby improve video coding and decoding efficiency.
- An embodiment of the present application provides a video component prediction method, including:
- performing mapping processing on the reconstruction value of the first image component of the current block according to the component linear model to obtain a mapping value; and
- determining the prediction value of the image component to be predicted of the current block according to the mapping value.
- An embodiment of the present application provides a video component prediction apparatus, including:
- the acquiring part is configured to acquire the reference value set of the first image component of the current block
- a determining section configured to determine a plurality of first image component reference values from the reference value set of the first image component
- the filtering part is configured to perform a first filtering process on the sample values of the pixels corresponding to the multiple first image component reference values, respectively, to obtain multiple filtered first image reference sample values;
- the determining section is further configured to determine reference values of an image component to be predicted corresponding to the plurality of filtered first image reference samples, where the image component to be predicted is an image component different from the first image component; and to determine the parameters of the component linear model according to the plurality of filtered first image reference samples and the reference values of the image component to be predicted, where the component linear model characterizes a linear mapping relationship from samples of the first image component to samples of the image component to be predicted;
- the filtering part is further configured to perform mapping processing on the reconstruction value of the first image component of the current block according to the component linear model to obtain a mapping value;
- the prediction part is configured to determine the prediction value of the image component to be predicted of the current block according to the mapping value.
- An embodiment of the present application provides a video component prediction apparatus, including:
- a memory configured to store executable video component prediction instructions; and
- a processor configured to implement the video component prediction method provided by the embodiments of the present application when executing the executable video component prediction instructions stored in the memory.
- An embodiment of the present application provides a computer-readable storage medium storing executable video component prediction instructions which, when executed by a processor, implement the video component prediction method provided by the embodiments of the present application.
- An embodiment of the present application provides a video image component prediction method.
- In the method, the video image component prediction apparatus first selects a plurality of first image component reference values from the directly acquired reference value set of the first image component corresponding to the current block, performs filtering based on the pixel positions of the selected first image component reference values to obtain a plurality of filtered first image reference samples, and then finds the reference values of the image component to be predicted corresponding to the filtered first image reference samples to derive the parameters of the component linear model. The component linear model is constructed from these parameters and is then used to predict the image component to be predicted.
- Because multiple reference values of the first image component are selected first, and filtering is then performed only at the positions corresponding to the selected reference values before the component linear model is constructed, the workload of filtering all the pixels adjacent to the current block is saved; that is, filtering operations are reduced. This lowers the complexity of building the component linear model, thereby reducing the complexity of video component prediction, improving prediction efficiency, and improving video codec efficiency.
- FIG. 1 is a schematic diagram of a relationship between a current block and neighboring reference pixels provided by an embodiment of this application;
- FIG. 2 is an architecture diagram of a video image component prediction system provided by an embodiment of the present application.
- FIG. 3A is a schematic block diagram of a composition of a video encoding system provided by an embodiment of this application.
- FIG. 3B is a schematic block diagram of a composition of a video decoding system provided by an embodiment of the present application.
- FIG. 4 is a flowchart 1 of a video image component prediction method according to an embodiment of the present application.
- FIG. 5 is a flowchart 2 of a video image component prediction method according to an embodiment of the present application.
- FIG. 6 is a flowchart 3 of a video image component prediction method according to an embodiment of the present application.
- FIG. 7 is a schematic structural diagram of constructing a prediction model based on a maximum value and a minimum value provided by an embodiment of this application;
- FIG. 8 is a first schematic structural diagram of a video image component prediction apparatus according to an embodiment of the present application.
- FIG. 9 is a second schematic structural diagram of a video image component prediction apparatus according to an embodiment of the present application.
- the main function of predictive coding is to use a reconstructed image, in space or in time, to construct the predicted value of the current block in the video codec, and to transmit only the difference between the original value and the predicted value, thereby reducing the amount of transmitted data.
- the main function of intra prediction is to construct the predicted value of the current block using the reconstructed pixel units in the adjacent row above and the adjacent column to the left of the current block. As shown in FIG. 1, each pixel unit of the current block 101 is predicted using the neighboring pixels that have already been reconstructed around the current block 101 (i.e., the pixel units in the adjacent row above 102 and in the adjacent left column 103).
- three image components are generally used to characterize a processing block.
- the three image components are a luminance component, a blue chroma component and a red chroma component.
- the luminance component is generally represented by the symbol Y
- the blue chrominance component is generally represented by the symbol Cb
- the red chrominance component is generally represented by the symbol Cr.
- the sampling format commonly used for video images is the YCbCr format.
- the YCbCr format includes:
- when the video image adopts the YCbCr 4:2:0 format,
- the luminance component of the video image is a 2N × 2N processing block,
- and the corresponding blue chrominance component or red chrominance component is an N × N processing block, where N is the side length of the chrominance processing block.
- The following takes the 4:2:0 format as an example, but the technical solutions of the embodiments of the present application are also applicable to other sampling formats.
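The 2N × 2N luma to N × N chroma relationship described above can be sketched as follows; this is a minimal illustration of 4:2:0 subsampling arithmetic, and the function name is illustrative rather than taken from the application:

```python
def yuv420_plane_sizes(side_2n):
    """For a 2N x 2N luma processing block in YCbCr 4:2:0, each chroma
    plane (Cb or Cr) is subsampled by 2 in both dimensions, giving N x N."""
    assert side_2n % 2 == 0, "4:2:0 luma block side must be even"
    n = side_2n // 2
    luma = (side_2n, side_2n)   # Y plane of the block
    chroma = (n, n)             # Cb or Cr plane of the block
    return luma, chroma

# A 16x16 luma block corresponds to 8x8 Cb and Cr blocks.
print(yuv420_plane_sizes(16))  # → ((16, 16), (8, 8))
```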
- FIG. 2 is a composition of a video codec network architecture according to an embodiment of the present application
- the network architecture includes one or more electronic devices 11 to 1N and a communication network 01, where the electronic devices 11 to 1N can perform video interaction through the communication network 01.
- the electronic device may be various types of devices with video encoding and decoding functions during the implementation process.
- the electronic device may include a mobile phone, a tablet computer, a personal computer, a personal digital assistant, a navigator, a digital telephone, a video telephone, a television set, a sensor device, a server, and the like, which is not limited in the embodiments of the present application.
- the intra prediction device in the embodiment of the present application may be the above-mentioned electronic device.
- the electronic device in the embodiment of the present application has a video codec function, and generally includes a video encoder and a video decoder.
- the composition of the video encoder 21 includes: a transform and quantization unit 211, an intra estimation unit 212, an intra prediction unit 213, a motion compensation unit 214, a motion estimation unit 215, an inverse transform and inverse quantization unit 216, a filter control analysis unit 217, a filtering unit 218, an entropy encoding unit 219, and a decoded image buffer unit 210, etc.
- the filtering unit 218 can implement deblocking filtering and sample adaptive offset (Sample Adaptive Offset, SAO) filtering,
- and the entropy encoding unit 219 can implement header information coding and context-based adaptive binary arithmetic coding (Context-based Adaptive Binary Arithmetic Coding, CABAC).
- a coding tree unit (Coding Tree Unit, CTU) can be partitioned to obtain the block to be coded of the current video frame, and intra prediction or inter prediction is then performed on the block to be coded;
- the residual information is processed by the transform and quantization unit 211, which transforms the residual information from the pixel domain to the transform domain and quantizes the resulting transform coefficients to further reduce the bit rate;
- the intra estimation unit 212 and the intra prediction unit 213 are used to perform intra prediction on the block to be encoded, for example, to determine the intra prediction mode used to encode the block to be encoded;
- the motion compensation unit 214 and the motion estimation unit 215 are used to perform inter prediction coding of the block to be encoded with respect to one or more blocks in one or more reference frames to provide temporal prediction information; the motion estimation unit 215 estimates the motion vector, which describes the motion of the block to be coded, and the motion compensation unit 214 then performs motion compensation based on the motion vector;
- the reconstructed residual block has its blocking artifacts removed by the filter control analysis unit 217 and the filtering unit 218, and the reconstructed block is then added to a frame in the decoded image buffer unit 210;
- the entropy encoding unit 219 may be used to encode information indicating the determined intra prediction mode and to output the code stream of the video data; the decoded image buffer unit 210 stores reconstructed video coding blocks for prediction reference. As video encoding proceeds, new reconstructed video coding blocks are continuously generated and stored in the decoded image buffer unit 210.
- the composition of the video decoder 22, corresponding to the video encoder 21, is shown in FIG. 3B and includes: an entropy decoding unit 221, an inverse transform and inverse quantization unit 222, an intra prediction unit 223, a motion compensation unit 224, a filtering unit 225, and a decoded image buffer unit 226, etc., wherein the entropy decoding unit 221 can implement header information decoding and CABAC decoding, and the filtering unit 225 can implement deblocking filtering and SAO filtering.
- after the code stream of the video signal is output, the code stream is input to the video decoder 22 and first passes through the entropy decoding unit 221 to obtain the decoded transform coefficients;
- the inverse transform and inverse quantization unit 222 processes the transform coefficients to generate a residual block in the pixel domain;
- the intra prediction unit 223 may be used to generate the prediction data of the current decoding block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture;
- the motion compensation unit 224 determines the prediction information of the current decoding block by parsing the motion vector and other associated syntax elements, and uses the prediction information to generate the predictive block of the current decoding block being decoded;
- the residual block from the inverse transform and inverse quantization unit 222 is summed with the corresponding predictive block generated by the intra prediction unit 223 or the motion compensation unit 224 to form a decoded video block;
- the decoded video block passes through the filtering unit 225 to remove blocking artifacts.
- the video image component prediction method provided in the embodiments of the present application belongs to the intra prediction process of predictive coding, and can be applied to the video encoder 21 or the video decoder 22;
- the embodiments of the present application do not specifically limit this.
- the cross-component linear model (CCLM) implements prediction from the luma component to the blue chroma component, from the luma component to the red chroma component, and between the blue chroma component and the red chroma component.
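For context, the standard form of CCLM-style cross-component prediction (as commonly described for VVC/JEM, not quoted from this application) maps the filtered reconstructed luma samples linearly to the chroma prediction:

```latex
\mathrm{Pred}_C(i,j) = \alpha \cdot \mathrm{Rec}'_L(i,j) + \beta,
\qquad
\alpha = \frac{C_{\max} - C_{\min}}{L_{\max} - L_{\min}},
\qquad
\beta = C_{\min} - \alpha \cdot L_{\min}
```

where Rec'_L denotes the filtered (downsampled) reconstructed luma samples of the current block, and (L_min, C_min) and (L_max, C_max) are the luma/chroma reference-sample pairs at the minimum and maximum luma reference values, as in the two-point model of FIG. 7.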
- Embodiments of the present application provide a video image component prediction method.
- the method is applied to a video image component prediction device.
- the functions implemented by the method can be implemented by a processor in the video image component prediction device calling program code;
- the program code can be stored in a computer storage medium.
- the video image component prediction apparatus includes at least a processor and a storage medium.
- FIG. 4 is a schematic flowchart of an implementation method of a video image component prediction method provided by an embodiment of the present application. As shown in FIG. 4, the method includes:
- S101: Acquire a reference value set of the first image component of the current block;
- S102: Determine a plurality of first image component reference values from the reference value set of the first image component;
- S103: Perform a first filtering process on the sample values of the pixels corresponding to the multiple first image component reference values, respectively, to obtain multiple filtered first image reference sample values;
- S104: Determine reference values of the image component to be predicted corresponding to the plurality of filtered first image reference samples, where the image component to be predicted is an image component different from the first image component;
- S105: Determine the parameters of the component linear model according to the plurality of filtered first image reference samples and the reference values of the image component to be predicted, where the component linear model characterizes a linear mapping relationship from samples of the first image component to samples of the image component to be predicted.
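The steps above can be sketched as follows. This is a simplified illustration that assumes a two-point max/min model (as in FIG. 7) and an illustrative 3-tap smoothing filter for the first filtering process; the actual filter taps and parameter derivation are defined by the embodiments, not by this sketch:

```python
def predict_component(luma_ref, chroma_ref, luma_rec):
    """Sketch of the method: derive a component linear model from a few
    selected reference samples, then map the reconstruction values of the
    first image component (here: luma) to a prediction of the image
    component to be predicted (here: chroma).

    luma_ref   -- reference value set of the first image component
    chroma_ref -- co-located reference values of the component to be predicted
    luma_rec   -- reconstruction values of the first image component of the
                  current block
    """
    # Select a plurality of reference values: here, the positions of the
    # maximum and minimum first image component reference values.
    i_max = max(range(len(luma_ref)), key=lambda i: luma_ref[i])
    i_min = min(range(len(luma_ref)), key=lambda i: luma_ref[i])

    # First filtering process on the selected samples only (illustrative
    # [1, 2, 1]/4 smoothing around each selected position, with rounding).
    def filt(i):
        lo, hi = max(i - 1, 0), min(i + 1, len(luma_ref) - 1)
        return (luma_ref[lo] + 2 * luma_ref[i] + luma_ref[hi] + 2) >> 2

    l_max, l_min = filt(i_max), filt(i_min)

    # Reference values of the image component to be predicted at the same
    # positions.
    c_max, c_min = chroma_ref[i_max], chroma_ref[i_min]

    # Parameters of the component linear model (two-point fit).
    alpha = (c_max - c_min) / (l_max - l_min) if l_max != l_min else 0.0
    beta = c_min - alpha * l_min

    # Map the reconstruction values through the linear model to obtain the
    # prediction values of the image component to be predicted.
    return [alpha * y + beta for y in luma_rec]
```

Note that only the two selected reference positions are filtered, which is exactly the complexity saving the application describes: the filter is not applied to every neighboring pixel of the current block.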
- the current block is an encoding block or a decoding block that is currently to be subjected to image component prediction.
- the video image component prediction apparatus obtains the reference value set of the first image component of the current block, where the reference value set includes one or more first image component reference values.
- the reference value of the current block may be obtained from the reference block.
- the reference block may be an adjacent block of the current block or a non-adjacent block of the current block, which is not limited in this embodiment of the present application.
- the video image component prediction apparatus determines one or more reference pixel points located outside the current block, and uses the one or more reference pixel points as one or more first image component reference values.
- the adjacent processing block corresponding to the current block is a processing block adjacent to one or more edges of the current block;
- the adjacent one or more edges may refer to the adjacent upper side of the current block, the adjacent left side of the current block, or both the adjacent upper side and the adjacent left side of the current block, which is not specifically limited in this embodiment of the present application.
- the video image component prediction device determines that pixels adjacent to the current block are one or more reference pixels.
- one or more reference pixels may be adjacent pixels or non-adjacent pixels.
- the embodiments of the present application are not limited.
- the present application takes adjacent pixels as an example.
- pixel points adjacent to one or more sides of the current block are taken as one or more adjacent reference pixel points corresponding to the current block, and each adjacent reference pixel point corresponds to three image component reference values (i.e., a first image component reference value, a second image component reference value, and a third image component reference value). Therefore, the video image component prediction apparatus may take the first image component reference value of each of the one or more adjacent reference pixels corresponding to the current block to obtain the reference value set of the first image component;
- in other words, the obtained one or more first image component reference values represent the first image component reference values corresponding to one or more adjacent pixel points in the adjacent reference block of the current block.
- in the embodiments of the present application, the first image component is used to predict other image components.
- the combination of the first image component and the image component to be predicted includes at least one of the following:
- the first image component is a luminance component
- the image component to be predicted is a first or second chrominance component
- the first image component is a first chrominance component
- the image component to be predicted is a luminance component or a second chrominance component
- the first image component is a second chrominance component, and the image component to be predicted is a luminance component or a first chrominance component; or,
- the first image component is a first color component
- the image component to be predicted is a second color component or a third color component
- the first image component is the second color component, and the image component to be predicted is the first color component or the third color component; or,
- the first image component is a third color component
- the image component to be predicted is a second color component or a first color component.
- the first color component is a red component
- the second color component is a green component
- the third color component is a blue component
- the first chroma component may be a blue chroma component
- the second chroma component may be a red chroma component
- the first chroma component may be a red chroma component
- the second chroma component may be a blue chroma component.
- the first chroma component and the second chroma component only need to represent the blue chroma component and the red chroma component.
- the first chroma component may be a blue chroma component
- the second chroma component may be a red chroma component.
- when the first image component is the luminance component and the image component to be predicted is the blue chroma component, the video image component prediction apparatus may use the luminance component to predict the blue chroma component;
- when the first image component is the luminance component and the image component to be predicted is the red chroma component, the video image component prediction apparatus may use the luminance component to predict the red chroma component;
- when the first image component is the first chroma component and the image component to be predicted is the second chroma component, the video image component prediction apparatus may use the blue chroma component to predict the red chroma component; when the first image component is the second chroma component and the image component to be predicted is the first chroma component, the video image component prediction apparatus may use the red chroma component to predict the blue chroma component.
- the video image component prediction device may determine a plurality of first image component reference values from one or more first image component reference values.
- the video image component prediction apparatus may compare the one or more first image component reference values contained in the reference value set of the first image component to determine the maximum first image component reference value and the minimum first image component reference value.
- the video image component prediction apparatus may determine, from the one or more first image component reference values, a reference value that characterizes or represents the maximum of the first image component reference values and a reference value that characterizes or represents the minimum of the first image component reference values.
- the video image component prediction apparatus determines the maximum first image component reference value and the minimum first image component reference value from the reference value set of the first image component.
- the video image component prediction apparatus may obtain the maximum first image component reference value and the minimum first image component reference value in various ways.
- Method 1: compare the one or more first image component reference values in sequence, taking the one with the largest value as the maximum first image component reference value and the one with the smallest value as the minimum first image component reference value.
- Method 2: select at least two first image component reference values at preset positions from the one or more first image component reference values; divide the at least two selected reference values, according to their numerical values, into a maximum image component reference value set and a minimum image component reference value set; and obtain the maximum first image component reference value and the minimum first image component reference value from these two sets.
- in Method 1, the video image component prediction apparatus selects the reference value with the largest value among the one or more first image component reference values as the maximum first image component reference value, and the one with the smallest value as the minimum first image component reference value.
- the determination may be made by comparing the values one by one in sequence, or after sorting them; the specific determination method is not limited in the embodiments of the present application.
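The sequential comparison of Method 1 can be sketched as follows. This is an illustrative implementation, not code from the patent; the function name and list representation are assumptions.

```python
# Hypothetical sketch of "Method 1": one sequential pass over the
# neighbouring first image component (e.g. luma) reference values
# to find the maximum and minimum reference values.
def find_max_min(ref_values):
    max_ref = ref_values[0]
    min_ref = ref_values[0]
    for v in ref_values[1:]:
        if v > max_ref:
            max_ref = v
        elif v < min_ref:
            min_ref = v
    return max_ref, min_ref

print(find_max_min([63, 127, 18, 200, 96]))  # -> (200, 18)
```

A single pass touches each reference value once, which matches the "compare in sequence" wording; sorting first would also work but costs more.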
- in Method 2, the video image component prediction apparatus may select, from the pixel positions corresponding to the one or more first image component reference values, several first image component reference values at preset positions (preset pixel point positions) as the at least two first image component reference values; it then divides these into a largest data set (the maximum image component reference value set) and a smallest data set (the minimum image component reference value set), and determines the maximum first image component reference value and the minimum first image component reference value based on these two sets.
- in the process of determining the maximum first image component reference value and the minimum first image component reference value based on the largest data set and the smallest data set, the largest data set may be averaged to obtain the maximum first image component reference value, and the smallest data set may be averaged to obtain the minimum first image component reference value;
- the maximum and minimum values can also be determined in other ways, which is not limited in this embodiment of the present application.
- the number of values in the largest data set and the smallest data set is an integer greater than or equal to 1.
- the number of values in the two sets may be the same or different, which is not limited in the embodiments of the present application.
- the video image component prediction apparatus may also directly select, from the at least two first image component reference values corresponding to the preset positions, the value with the maximum value as the maximum first image component reference value and the value with the minimum value as the minimum first image component reference value.
- alternatively, the video image component prediction apparatus divides the M largest values among the at least two first image component reference values into the maximum image component reference value set (M may be a value greater than 4, or its value may be unrestricted), and divides the remaining values, other than the M largest, into the minimum image component reference value set; finally, mean value processing is performed on the maximum image component reference value set to obtain the maximum first image component reference value, and mean value processing is performed on the minimum image component reference value set to obtain the minimum first image component reference value.
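The set-splitting and averaging of Method 2 can be sketched as below. This is a hedged illustration: the function name, the position indices, and the choice of M are assumptions (the patent does not fix them), and M is assumed smaller than the number of selected values.

```python
# Illustrative sketch of "Method 2": take reference values at preset
# positions, split them by value into a "largest" set (the M largest)
# and a "smallest" set (the rest), then average each set.
def max_min_from_sets(ref_values, preset_positions, m=2):
    selected = [ref_values[p] for p in preset_positions]
    ordered = sorted(selected, reverse=True)
    largest_set = ordered[:m]           # the M largest values
    smallest_set = ordered[m:]          # the remaining values
    max_ref = sum(largest_set) // len(largest_set)    # mean of largest set
    min_ref = sum(smallest_set) // len(smallest_set)  # mean of smallest set
    return max_ref, min_ref

# e.g. positions picking [10, 80, 75, 90] with M=2 averages {90, 80}
# for the maximum and {75, 10} for the minimum
```

Averaging a small set rather than taking a single extreme value makes the derived model parameters less sensitive to an outlier reference sample, which is a plausible motivation for Method 2.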
- that is, the maximum first image component reference value and the minimum first image component reference value may be the maximum and minimum values determined directly by numerical comparison; or, the first image component reference values at preset positions that can represent valid reference values (the at least two first image component reference values) may be selected first, after which a set with larger values and a set with smaller values are divided from these valid reference values, the maximum first image component reference value is determined from the larger set and the minimum first image component reference value from the smaller set; or, the maximum first image component reference value and the minimum first image component reference value may be determined directly by numerical comparison within the set of valid first image component reference values corresponding to the preset positions.
- the video image component prediction device does not limit the determination method of the maximum first image component reference value and the minimum first image component reference value.
- the video image component prediction apparatus may also divide the one or more first image component reference values into three or even four sets according to size, process each set to obtain a representative parameter, and then select the maximum and minimum of the representative parameters as the maximum first image component reference value and the minimum first image component reference value, respectively.
- the preset positions may be positions that represent the validity of the first image component reference values; the number of preset positions is not limited, and may be, for example, 4 or 6, or all positions of the adjacent pixels, which is not limited in the embodiments of the present application.
- the preset positions may be selected on both sides of the center of a row or column according to the sampling frequency; they may also be the positions remaining in a row or column after removing the edge points, or first image component reference values at other positions, which is not limited in this embodiment of the present application.
- the allocation of the preset positions between rows and columns may be even, or may follow a preset scheme, which is not limited in the embodiments of the present application. For example, when the number of preset positions is 4 and the adjacent row and adjacent column are the positions corresponding to the one or more first image component reference values, 2 reference values may be selected from the adjacent row and 2 from the adjacent column; or 1 may be selected from the adjacent row and 3 from the adjacent column.
- the embodiments of the present application are not limited.
- that is, the video image component prediction apparatus may determine the maximum value and the minimum value directly from the one or more first image component reference values, taking the maximum among them as the maximum first image component reference value and the minimum among them as the minimum first image component reference value;
- alternatively, after determining a plurality of reference values at the preset positions among the one or more first image component reference values, the apparatus obtains the maximum first image component reference value characterizing the largest value and the minimum first image component reference value characterizing the smallest value.
- the video image component prediction device performs a first filtering process on the pixel samples corresponding to the determined multiple first image component reference values, respectively, to obtain multiple filtered first image reference samples.
- the multiple filtered first image reference sample values may be the filtered maximum first image component reference value and the filtered minimum first image component reference value, or may include other reference sample values, which is not limited in this embodiment of the present application.
- the video image component prediction apparatus performs filtering (that is, the first filtering process) on the pixel positions corresponding to the determined first image component reference values (that is, on the sample values of the corresponding pixels), so as to obtain the corresponding filtered first image reference samples, so that the component linear model can subsequently be built based on the multiple filtered first image reference samples.
- the video image component prediction apparatus performs the first filtering process on the sample values of the pixels corresponding to the maximum first image component reference value and the minimum first image component reference value, respectively, to obtain the filtered maximum first image component reference value and the filtered minimum first image component reference value.
- that is, the pixel positions used to determine the maximum first image component reference value and the minimum first image component reference value (i.e., the sample values of the corresponding pixels) are filtered (the first filtering process), yielding the filtered maximum first image component reference value and the filtered minimum first image component reference value (i.e., the multiple filtered first image reference samples).
- the filtering method may be up-sampling, down-sampling, or low-pass filtering.
- the embodiments of the present application are not limited.
- the down-sampling methods may include: mean, interpolation, or median, which is not limited in this embodiment.
- the first filtering process may be down-sampling filtering or low-pass filtering.
- the video image component prediction apparatus performs down-sampling filtering on the pixel positions used to determine the maximum first image component reference value and the minimum first image component reference value, so as to obtain the corresponding filtered maximum first image component reference value and filtered minimum first image component reference value.
- the video component prediction apparatus computes the average of the first image component over an area composed of the position corresponding to the maximum first image component reference value and the positions of its neighboring pixel points, and fuses the pixels of this area into one pixel.
- the average result is the first image component reference value corresponding to the fused pixel, that is, the filtered maximum first image component reference value; similarly, the video component prediction apparatus computes the average of the first image component over the area formed by the position corresponding to the minimum first image component reference value and the positions of its adjacent pixels, fuses the pixels of this area into one pixel, and takes the average result as the first image component reference value corresponding to the fused pixel, that is, the filtered minimum first image component reference value.
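The "fuse an area into one pixel by averaging" step can be sketched as below. The 2x2 neighbourhood shape is an assumption for illustration only; the patent leaves the exact area to the filter type.

```python
# Hedged sketch: fuse the pixel at a max/min reference position with
# its neighbours by averaging, yielding one filtered reference value.
# Here the area is assumed to be the 2x2 block anchored at (x, y).
def fuse_by_average(plane, x, y):
    region = [plane[y][x], plane[y][x + 1],
              plane[y + 1][x], plane[y + 1][x + 1]]
    return (sum(region) + 2) // 4  # +2 implements round-to-nearest
```

For a nearly flat area such as `[[100, 104], [96, 100]]` the fused value stays at 100, which is the intended smoothing effect of the averaging filter.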
- the down-sampling process of the video image component prediction apparatus is implemented by a filter; the specific range of neighboring pixel positions around the position corresponding to the maximum first image component reference value is determined by the type of filter, which is not limited in the embodiments of the present application.
- the filter type may be a 6-tap filter or a 4-tap filter, which is not limited in the embodiment of the present application.
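A 6-tap down-sampling filter of the kind used in CCLM-style luma-to-chroma alignment can be sketched as below. This is an assumption for illustration: the patent only names "6-tap or 4-tap" without fixing the coefficients, so the weights here follow a common 2x3-neighbourhood design.

```python
# Sketch of a 6-tap luma down-sampling filter (assumed coefficients):
# maps a 2x3 luma neighbourhood to one chroma-aligned sample for
# 4:2:0 video, where luma is sampled at twice the chroma resolution.
def downsample_6tap(luma, x, y):
    lx, ly = 2 * x, 2 * y  # (x, y) addresses the chroma grid
    acc = (2 * luma[ly][lx] + 2 * luma[ly + 1][lx]
           + luma[ly][lx - 1] + luma[ly][lx + 1]
           + luma[ly + 1][lx - 1] + luma[ly + 1][lx + 1])
    return (acc + 4) >> 3  # divide by 8 with rounding
```

The centre column is weighted twice as heavily as the side columns, so a constant luma area passes through unchanged.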
- the video image component prediction apparatus determines the reference values of the image component to be predicted corresponding to the multiple filtered first image reference samples, where the image component to be predicted is an image component different from the first image component (for example, the second image component or the third image component); then, based on the multiple filtered first image reference samples and the reference values of the image component to be predicted, it determines the parameters of the component linear model.
- the component linear model characterizes a linear mapping relationship, for example a functional relationship, that maps the sample values of the first image component to the sample values of the image component to be predicted.
- the video image component prediction apparatus determines the maximum to-be-predicted image component reference value corresponding to the filtered maximum first image component reference value, and the minimum to-be-predicted image component reference value corresponding to the filtered minimum first image component reference value.
- the video image component prediction apparatus may adopt the maximum-and-minimum construction method and derive the model parameters (that is, the parameters of the component linear model) according to the principle that "two points determine one line", thereby constructing a component linear model, that is, a simplified cross-component linear model prediction (CCLM, Cross-component Linear Model Prediction).
- the video image component prediction apparatus performs down-sampling (that is, filtering) to align with the positions of the image component to be predicted, so that the to-be-predicted image component reference value corresponding to each filtered first image reference sample can be determined; for example, the maximum to-be-predicted image component reference value corresponding to the filtered maximum first image component reference value and the minimum to-be-predicted image component reference value corresponding to the filtered minimum first image component reference value are determined.
- the video image component prediction apparatus has thus determined the two points (filtered maximum first image component reference value, maximum to-be-predicted image component reference value) and (filtered minimum first image component reference value, minimum to-be-predicted image component reference value), so that the model parameters can be derived according to the principle that "two points determine one line", and a component linear model is then constructed.
- the video image component prediction apparatus determines the parameters of the component linear model based on the filtered maximum first image component reference value, the maximum to-be-predicted image component reference value, the filtered minimum first image component reference value, and the minimum to-be-predicted image component reference value, where the component linear model represents a linear mapping relationship that maps the sample values of the first image component to the sample values of the image component to be predicted.
- an implementation of determining the parameters of the component linear model based on the filtered maximum first image component reference value, the maximum to-be-predicted image component reference value, the filtered minimum first image component reference value, and the minimum to-be-predicted image component reference value may be as follows.
- the parameters of the component linear model further include a multiplicative factor and an additive offset.
- the video image component prediction apparatus can calculate the first difference between the maximum image component reference value to be predicted and the minimum image component reference value to be predicted; calculate the maximum first image component reference value and the minimum first image component reference value The second difference; set the multiplier factor as the ratio of the first difference and the second difference; calculate the first product between the reference value of the largest first image component and the multiplier factor, and set the additive offset Is the difference between the maximum reference value of the image component to be predicted and the first product; or, the second product between the minimum reference value of the first image component and the multiplicative factor is calculated, and the additive offset is set to the minimum to be predicted The difference between the reference value of the image component and the second product.
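The parameter derivation just described ("two points determine one line") can be sketched as follows. Variable names are illustrative; floating-point division is used here for clarity, whereas a real codec would use integer arithmetic with shifts.

```python
# Sketch of deriving the multiplicative factor (alpha) and additive
# offset (beta) from the two (luma, chroma) extreme reference points.
def derive_model_params(l_max, l_min, c_max, c_min):
    first_diff = c_max - c_min    # difference of to-be-predicted refs
    second_diff = l_max - l_min   # difference of first-component refs
    alpha = first_diff / second_diff if second_diff else 0.0
    # offset from the maximum point; c_min - alpha * l_min is equivalent
    beta = c_max - alpha * l_max
    return alpha, beta

# e.g. points (L, C) = (120, 100) and (40, 60) give alpha=0.5, beta=40
```

Both offset formulas in the text give the same beta because the two points lie on the same line by construction.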
- the video image component prediction apparatus may predict the image component to be predicted based on the first image component and the component linear model, and the image component to be predicted in the embodiment of the present application may be a chroma component.
- the component linear model can be shown in formula (1), as follows:

  C = α · Y + β  (1)

- where Y represents the (down-sampled) reconstruction value of the first image component corresponding to a certain pixel in the current block, C represents the predicted value of the second image component corresponding to that pixel in the current block, and α and β are the model parameters of the above component linear model.
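Applying formula (1) per pixel can be sketched as below. The function name, the nested-list representation of the block, and the 8-bit clipping range are assumptions for illustration.

```python
# Minimal sketch of formula (1): each (down-sampled) first-component
# reconstruction value Y is mapped to a predicted value C = alpha*Y + beta,
# clipped to the valid sample range of the assumed bit depth.
def predict_chroma(luma_recon, alpha, beta, bit_depth=8):
    max_val = (1 << bit_depth) - 1
    pred = []
    for row in luma_recon:
        pred.append([min(max(int(alpha * y + beta), 0), max_val)
                     for y in row])
    return pred
```

For example, with alpha=0.5 and beta=40, reconstruction values 40 and 120 map to predicted values 60 and 100, the two reference points the model was fitted through.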
- the video image component prediction apparatus may first select the maximum and minimum first image component reference values from the directly acquired one or more first image component reference values corresponding to the current block, then down-sample only the positions corresponding to the maximum first image component reference value and the minimum first image component reference value, and then construct the component linear model; this saves the workload of down-sampling all the pixels corresponding to the current block, that is, it reduces the filtering operations, lowers the complexity of constructing the component linear model and hence of video component prediction, improves the prediction efficiency, and improves the video codec efficiency.
- the video image component prediction apparatus may directly use the component linear model to predict the video component of the current block, thereby obtaining the predicted value of the image component to be predicted.
- the video image component prediction apparatus may perform a mapping process on the reconstruction value of the first image component of the current block according to the component linear model to obtain a mapping value, and then determine the prediction value of the image component to be predicted of the current block according to the mapping value.
- the video image component prediction apparatus performs a second filtering process on the reconstruction value of the first image component to obtain a second filtered value of the reconstruction value of the first image component, and performs the mapping process on the second filtered value according to the component linear model to obtain the mapping value.
- the video image component prediction device sets the mapping value to the prediction value of the image component to be predicted of the current block.
- the second filtering process may be down-sampling filtering or low-pass filtering.
- the video image component prediction apparatus may also perform a third filtering process on the mapping value to obtain a third filtered value of the mapping value, and set the third filtered value as the predicted value of the image component to be predicted of the current block.
- the third filtering process may be low-pass filtering.
- the predicted value represents the predicted value of the second image component or the predicted value of the third image component corresponding to one or more pixels of the current block.
- an embodiment of the present invention further provides a video image component prediction method, including:
- the component linear model represents a linear mapping relationship that maps the samples of the first image component to the samples of the image component to be predicted;
- the reconstruction value of the first image component of the current block is obtained by filtering the first image component of the current block; the predicted value of the image component to be predicted of the current block is then obtained from the component linear model and the reconstruction value of the first image component.
- the minimum unit of prediction for the current block is the pixel, so the first image component reconstruction value corresponding to each pixel of the current block is required in order to predict the to-be-predicted image component value corresponding to that pixel.
- the video image component prediction apparatus first performs first image component filtering (e.g., down-sampling) on the current block to obtain the reconstruction value of the first image component corresponding to the current block, specifically the first image component reconstruction value of each pixel of the current block.
- the first image component reconstruction value represents the reconstruction value of the first image component corresponding to one or more pixels of the current block.
- the video image component prediction apparatus can perform the mapping process on the reconstruction value of the first image component of the current block based on the component linear model to obtain the mapping value, and thereby obtains, according to the mapping value, the predicted value of the image component to be predicted of the current block.
- S204 may include S2041-S2042, as follows:
- in the process of constructing the component linear model based on the filtered maximum first image component reference value and the filtered minimum first image component reference value, the video image component prediction apparatus relies on the principle that "two points determine one line": taking the first image component as the abscissa and the image component to be predicted as the ordinate, the abscissa values of the two points are known, and the corresponding ordinate values need to be determined; the principle that "two points determine one line" then determines a linear model, namely the component linear model.
- the video image component prediction apparatus converts the sampling point position corresponding to the maximum first image component reference value to the first sampling point position, and sets the maximum to-be-predicted image component reference value to the reference value at the first sampling point position among the to-be-predicted image component reference values;
- the sampling point position corresponding to the minimum first image component reference value is converted to the second sampling point position, and the minimum to-be-predicted image component reference value is set to the reference value at the second sampling point position among the to-be-predicted image component reference values.
- here, the case where the reference pixel points are adjacent pixel points is taken as an example for description.
- the video image component prediction apparatus may obtain one or more to-be-predicted image component reference values corresponding to the current block based on the description of the neighboring blocks, where each of the one or more to-be-predicted image component reference values is the to-be-predicted image component reference value at one adjacent reference pixel among the one or more reference pixels corresponding to the current block; the apparatus thereby obtains the one or more to-be-predicted image component reference values.
- the video image component prediction apparatus finds, among the pixel points corresponding to the one or more to-be-predicted image component reference values, the first adjacent reference pixel at which the filtered maximum first image component reference value is located, and takes the to-be-predicted image component reference value corresponding to that first adjacent reference pixel as the maximum to-be-predicted image component reference value, that is, it determines the maximum to-be-predicted image component reference value corresponding to the filtered maximum first image component reference value; similarly, it finds the second adjacent reference pixel at which the filtered minimum first image component reference value is located, and takes the to-be-predicted image component reference value corresponding to that second adjacent reference pixel as the minimum to-be-predicted image component reference value, that is, it determines the minimum to-be-predicted image component reference value corresponding to the filtered minimum first image component reference value.
- the video image component prediction apparatus may also first filter the positions of the adjacent pixels to obtain one or more to-be-predicted image component reference values of the filtered pixels; it then finds, among the filtered pixel positions, the first adjacent reference pixel at which the filtered maximum first image component reference value is located and takes its to-be-predicted image component reference value (one of the one or more to-be-predicted image component reference values) as the maximum to-be-predicted image component reference value, and likewise finds the second adjacent reference pixel at which the filtered minimum first image component reference value is located and takes its to-be-predicted image component reference value as the minimum to-be-predicted image component reference value.
- the video image component prediction apparatus may also first filter the image component to be predicted, such as the chroma component, at the positions of the adjacent pixels, which is not limited in this embodiment of the present application; that is, in the embodiment of the present application, the apparatus may perform a fourth filtering process on the to-be-predicted image component reference value to obtain the reconstruction value of the image component to be predicted.
- the fourth filtering process may be low-pass filtering.
- the process of constructing the component linear model by the video image component prediction apparatus is: constructing a first sub-component linear model using the filtered maximum first image component reference value, the maximum to-be-predicted image component reference value, and a preset initial linear model; constructing a second sub-component linear model using the filtered minimum first image component reference value, the minimum to-be-predicted image component reference value, and the preset initial linear model; obtaining the model parameters from the first sub-component linear model and the second sub-component linear model; and constructing the component linear model from the model parameters and the preset initial linear model.
- the preset initial linear model is an initial model with unknown model parameters.
- the preset initial linear model may take the form of formula (1), with α and β unknown; the first sub-component linear model and the second sub-component linear model are used to construct a system of two linear equations in the two unknowns, the model parameters α and β are solved for, and substituting α and β into formula (1) yields the linear mapping relationship model between the first image component and the image component to be predicted.
- Lmax and Lmin represent the maximum value and the minimum value searched from the first image component reference values corresponding to the left side and/or the upper side without down-sampling, and Cmax and Cmin represent the to-be-predicted image component reference values of the adjacent reference pixels at the positions corresponding to Lmax and Lmin.
- the video image component prediction apparatus may first select the maximum and minimum first image component reference values from the directly acquired one or more first image component reference values corresponding to the current block, then down-sample (filter) only the positions corresponding to the maximum first image component reference value and the minimum first image component reference value, and then construct the component linear model; this saves the workload of down-sampling the pixels corresponding to the current block, that is, it reduces the filtering operations, thereby lowering the complexity of constructing the component linear model and of video component prediction, improving the prediction efficiency, and improving the video codec efficiency.
- embodiments of the present application provide a video component prediction apparatus, which includes each unit and each module included in each unit; these can be implemented by a processor in the video component prediction apparatus, or by a specific logic circuit; in the implementation process, the processor may be a central processing unit, a microprocessor, a digital signal processor (DSP, Digital Signal Processor), or a field programmable gate array.
- an embodiment of the present application provides a video component prediction apparatus 3, including:
- the acquiring part 30 is configured to acquire a reference value set of the first image component of the current block, wherein the reference value set of the first image component includes one or more first image component reference values;
- the determining section 31 is configured to determine a plurality of first image component reference values from the reference value set of the first image component;
- the filtering part 32 is configured to perform a first filtering process on the sample values of the pixels corresponding to the multiple first image component reference values, respectively, to obtain multiple filtered first image reference sample values;
- the determining section 31 is further configured to determine reference values of an image component to be predicted corresponding to the plurality of filtered first image reference samples, wherein the image component to be predicted is an image component different from the first image component; and to determine the parameters of the component linear model according to the plurality of filtered first image reference samples and the reference values of the image component to be predicted, wherein the component linear model characterizes a linear mapping relationship that maps the sample values of the first image component to the sample values of the image component to be predicted;
- the filtering part 32 is further configured to perform a mapping process on the reconstruction value of the first image component of the current block according to the component linear model to obtain a mapping value;
- the prediction section 33 is configured to determine the prediction value of the image component to be predicted of the current block according to the mapping value.
- the determining portion 31 is further configured to compare the reference values contained in the reference value set of the first image component to determine the maximum first image component reference value and the minimum first image component reference value.
- the filtering part 32 is further configured to perform the first filtering process on the pixel samples corresponding to the maximum first image component reference value and the minimum first image component reference value, respectively, to obtain the filtered maximum first image component reference value and the filtered minimum first image component reference value.
- the determining section 31 is further configured to determine a maximum reference value of the image component to be predicted corresponding to the filtered maximum first image component reference value, and a minimum reference value of the image component to be predicted corresponding to the filtered minimum first image component reference value.
- the determining portion 31 is further configured to determine the parameters of the component linear model according to the filtered maximum first image component reference value, the maximum reference value of the image component to be predicted, the filtered minimum first image component reference value, and the minimum reference value of the image component to be predicted, wherein the component linear model characterizes a linear mapping that maps the samples of the first image component to the samples of the image component to be predicted.
- the determining portion 31 is further configured to determine one or more reference pixels located outside the current block;
- the acquisition section 30 is further configured to use the one or more reference pixel points as the one or more first image component reference values.
- the determining portion 31 is further configured to determine that the pixel adjacent to the current block is the one or more reference pixels.
- the filtering part 32 is further configured to perform a second filtering process on the reconstructed value of the first image component to obtain a second filtered value of the reconstructed value of the first image component, and to map the second filtered value according to the component linear model to obtain the mapped value.
- the second filtering process is down-sampling filtering or low-pass filtering.
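For instance, when the first image component is luma in 4:2:0 video, the second filtering process above might down-sample the reconstructed luma block to chroma resolution before the linear mapping is applied. The sketch below is a hypothetical stand-in using a simple 2x2 rounded average; the patent text leaves the actual filter open (any down-sampling or low-pass filter qualifies).

```python
def down_sample_2x2(block):
    """Down-sample a 2H x 2W block to H x W by 2x2 rounded averaging.
    A stand-in for the unspecified second filtering process."""
    h, w = len(block) // 2, len(block[0]) // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = (block[2 * y][2 * x] + block[2 * y][2 * x + 1]
                 + block[2 * y + 1][2 * x] + block[2 * y + 1][2 * x + 1])
            out[y][x] = (s + 2) >> 2  # rounded average of the 2x2 window
    return out
```

After this step each down-sampled luma sample is co-located with one chroma sample, so the component linear model can be applied sample by sample.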
- the prediction section 33 is further configured to set the mapping value as the prediction value of the to-be-predicted image component of the current block.
- the filtering part 32 is further configured to perform a third filtering process on the mapped value to obtain a third filtered value of the mapped value;
- the prediction section 33 is further configured to set the third filtered value as the predicted value of the image component to be predicted of the current block.
- the third filtering process is low-pass filtering.
- the determining portion 31 is further configured to obtain reference values of the image component to be predicted of the current block; and, among the reference values of the image component to be predicted, to determine the maximum reference value of the image component to be predicted and the minimum reference value of the image component to be predicted.
- the filtering part 32 is further configured to perform a fourth filtering process on the reference value of the image component to be predicted to obtain a reconstruction value of the image component to be predicted.
- the fourth filtering process is low-pass filtering.
- the determining portion 31 is further configured to convert the sampling point position of the first image component reference value corresponding to the maximum first image component reference value into a first sampling point position, and to set the maximum reference value of the image component to be predicted to the reference value located at the first sampling point position; and to convert the sampling point position of the first image component reference value corresponding to the minimum first image component reference value into a second sampling point position, and to set the minimum reference value of the image component to be predicted to the reference value located at the second sampling point position.
- the determining section 31 is further configured to construct a first sub-component linear model using the filtered maximum first image component reference value, the maximum reference value of the image component to be predicted, and a preset initial linear model; to construct a second sub-component linear model using the filtered minimum first image component reference value, the minimum reference value of the image component to be predicted, and the preset initial linear model; to obtain model parameters based on the first sub-component linear model and the second sub-component linear model; and to construct the component linear model using the model parameters and the preset initial linear model.
- the determining part 31 is further configured such that the parameters of the component linear model include a multiplicative factor and an additive offset; to calculate a first difference between the maximum reference value of the image component to be predicted and the minimum reference value of the image component to be predicted; to calculate a second difference between the maximum first image component reference value and the minimum first image component reference value; to set the multiplicative factor to the ratio of the first difference to the second difference; and to calculate a first product of the maximum first image component reference value and the multiplicative factor and set the additive offset to the difference between the maximum reference value of the image component to be predicted and the first product, or to calculate a second product of the minimum first image component reference value and the multiplicative factor and set the additive offset to the difference between the minimum reference value of the image component to be predicted and the second product.
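The arithmetic just described reduces to a two-point line fit. The sketch below follows the text's steps directly; variable names (l_max, c_min, etc.) are illustrative, and the floating-point division is a simplification — deployed codecs typically replace it with an integer table lookup, which the text does not prescribe.

```python
def linear_model_params(l_max, l_min, c_max, c_min):
    """Multiplicative factor and additive offset of the component linear
    model pred_C = alpha * rec_L + beta, from the filtered max/min
    first-component references (l_*) and the corresponding references of
    the image component to be predicted (c_*)."""
    first_diff = c_max - c_min    # max minus min of the to-be-predicted refs
    second_diff = l_max - l_min   # max minus min of the first-component refs
    # multiplicative factor = ratio of the first difference to the second;
    # guard the degenerate flat case (a choice the text does not specify)
    alpha = first_diff / second_diff if second_diff != 0 else 0.0
    # additive offset from the max pair; the min pair yields the same value,
    # matching the "or" alternative in the text
    beta = c_max - alpha * l_max
    return alpha, beta
```

Both anchor points lie on the resulting line, so using (l_min, c_min) for the offset gives an identical beta, which is why the text offers the two computations as alternatives.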
- the parameters of the component linear model include a multiplicative factor and an additive offset
- the first image component is a luminance component
- the image component to be predicted is a first or second chrominance component
- the first image component is the first chroma component, and the image component to be predicted is the luminance component or the second chroma component; or,
- the first image component is the second chroma component, and the image component to be predicted is the luminance component or the first chroma component; or,
- the first image component is a first color component
- the image component to be predicted is a second color component or a third color component
- the first image component is the second color component, and the image component to be predicted is the first color component or the third color component; or,
- the first image component is the third color component, and the image component to be predicted is the second color component or the first color component.
- the first color component is a red component
- the second color component is a green component
- the third color component is a blue component
- the first filtering process is down-sampling filtering or low-pass filtering.
- if the above video component prediction method is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
- the technical solutions of the embodiments of the present application, in essence, or the part contributing to the related art, may be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes several instructions that cause an electronic device (which may be a mobile phone, tablet computer, personal computer, personal digital assistant, navigator, digital phone, video phone, television, sensor device, server, etc.) to perform all or part of the methods described in the embodiments of the present application.
- the foregoing storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc.
- an embodiment of the present application provides a video component prediction apparatus, including:
- the memory 34 is used to store executable video component prediction instructions
- the processor 35 is configured to implement the steps in the video component prediction method provided in the foregoing embodiments when executing the executable video component prediction instructions stored in the memory 34.
- an embodiment of the present application provides a computer-readable storage medium on which a video component prediction instruction is stored.
- when the video component prediction instruction is executed by the processor 35, the steps of the video component prediction method provided in the foregoing embodiments are implemented.
- the video component prediction apparatus may first determine a plurality of first image component reference values from the directly acquired reference value set of the first image component corresponding to the current block, then filter only the positions corresponding to the determined first image component reference values, and then build the component linear model. This saves the workload of filtering all the pixels corresponding to the current block, i.e., it reduces the filtering operations, thereby reducing the complexity of constructing the component linear model and hence the complexity of video component prediction, improving prediction efficiency and video codec efficiency.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (44)
- An image component prediction method, comprising: acquiring a reference value set of a first image component of a current block; determining a plurality of first image component reference values from the reference value set of the first image component; performing a first filtering process on the sample values of the pixels corresponding to the plurality of first image component reference values, respectively, to obtain a plurality of filtered first image reference sample values; determining reference values of an image component to be predicted corresponding to the plurality of filtered first image reference sample values, wherein the image component to be predicted is an image component different from the first image component; determining parameters of a component linear model according to the plurality of filtered first image reference sample values and the reference values of the image component to be predicted, wherein the component linear model characterizes a linear mapping relationship that maps sample values of the first image component to sample values of the image component to be predicted; performing a mapping process on a reconstructed value of the first image component of the current block according to the component linear model to obtain a mapped value; and determining a predicted value of the image component to be predicted of the current block according to the mapped value.
- The method according to claim 1, wherein determining the plurality of first image component reference values from the reference value set of the first image component comprises: comparing the reference values contained in the reference value set of the first image component to determine a maximum first image component reference value and a minimum first image component reference value.
- The method according to claim 2, wherein performing the first filtering process on the sample values of the pixels corresponding to the plurality of first image component reference values, respectively, to obtain the plurality of filtered first image reference sample values comprises: performing the first filtering process on the sample values of the pixels corresponding to the maximum first image component reference value and the minimum first image component reference value, respectively, to obtain a filtered maximum first image component reference value and a filtered minimum first image component reference value.
- The method according to claim 3, wherein determining the reference values of the image component to be predicted corresponding to the plurality of filtered first image reference sample values comprises: determining a maximum reference value of the image component to be predicted corresponding to the filtered maximum first image component reference value, and a minimum reference value of the image component to be predicted corresponding to the filtered minimum first image component reference value.
- The method according to claim 4, wherein determining the parameters of the component linear model according to the plurality of filtered first image reference sample values and the reference values of the image component to be predicted comprises: determining the parameters of the component linear model according to the filtered maximum first image component reference value, the maximum reference value of the image component to be predicted, the filtered minimum first image component reference value, and the minimum reference value of the image component to be predicted, wherein the component linear model characterizes a linear mapping relationship that maps sample values of the first image component to sample values of the image component to be predicted.
- The method according to claim 1, wherein acquiring the reference value set of the first image component of the current block comprises: determining one or more reference pixels located outside the current block, and using the one or more reference pixels as the one or more first image component reference values.
- The method according to claim 6, wherein determining the one or more reference pixels located outside the current block comprises: determining pixels adjacent to the current block as the one or more reference pixels.
- The method according to claim 1, wherein performing the mapping process on the reconstructed value of the first image component of the current block according to the component linear model to obtain the mapped value comprises: performing a second filtering process on the reconstructed value of the first image component to obtain a second filtered value of the reconstructed value of the first image component; and performing the mapping process on the second filtered value according to the component linear model to obtain the mapped value.
- The method according to claim 8, wherein the second filtering process is down-sampling filtering or low-pass filtering.
- The method according to claim 1, wherein determining the predicted value of the image component to be predicted of the current block according to the mapped value comprises: setting the mapped value as the predicted value of the image component to be predicted of the current block.
- The method according to claim 1, wherein determining the predicted value of the image component to be predicted of the current block according to the mapped value comprises: performing a third filtering process on the mapped value to obtain a third filtered value of the mapped value; and setting the third filtered value as the predicted value of the image component to be predicted of the current block.
- The method according to claim 11, wherein the third filtering process is low-pass filtering.
- The method according to claim 4, wherein determining the maximum reference value of the image component to be predicted corresponding to the filtered maximum first image component reference value, and the minimum reference value of the image component to be predicted corresponding to the filtered minimum first image component reference value comprises: acquiring reference values of the image component to be predicted of the current block; and, among the reference values of the image component to be predicted, determining the maximum reference value of the image component to be predicted and the minimum reference value of the image component to be predicted.
- The method according to claim 13, further comprising: performing a fourth filtering process on the reference values of the image component to be predicted to obtain reconstructed values of the image component to be predicted.
- The method according to claim 14, wherein the fourth filtering process is low-pass filtering.
- The method according to any one of claims 13 to 15, wherein, among the reference values of the image component to be predicted, determining the maximum reference value of the image component to be predicted and the minimum reference value of the image component to be predicted comprises: converting a sampling point position of the first image component reference value corresponding to the maximum first image component reference value into a first sampling point position; setting the maximum reference value of the image component to be predicted to the reference value located at the first sampling point position among the reference values of the image component to be predicted; converting a sampling point position of the first image component reference value corresponding to the minimum first image component reference value into a second sampling point position; and setting the minimum reference value of the image component to be predicted to the reference value located at the second sampling point position among the reference values of the image component to be predicted.
- The method according to claim 5, wherein determining the parameters of the component linear model according to the filtered maximum first image component reference value, the maximum reference value of the image component to be predicted, the filtered minimum first image component reference value, and the minimum reference value of the image component to be predicted comprises: constructing a first sub-component linear model using the filtered maximum first image component reference value, the maximum reference value of the image component to be predicted, and a preset initial linear model; constructing a second sub-component linear model using the filtered minimum first image component reference value, the minimum reference value of the image component to be predicted, and the preset initial linear model; obtaining model parameters based on the first sub-component linear model and the second sub-component linear model; and constructing the component linear model using the model parameters and the preset initial linear model.
- The method according to claim 5, wherein determining the parameters of the component linear model according to the filtered maximum first image component reference value, the maximum reference value of the image component to be predicted, the filtered minimum first image component reference value, and the minimum reference value of the image component to be predicted comprises: the parameters of the component linear model including a multiplicative factor and an additive offset; calculating a first difference between the maximum reference value of the image component to be predicted and the minimum reference value of the image component to be predicted; calculating a second difference between the maximum first image component reference value and the minimum first image component reference value; setting the multiplicative factor to the ratio of the first difference to the second difference; and calculating a first product of the maximum first image component reference value and the multiplicative factor and setting the additive offset to the difference between the maximum reference value of the image component to be predicted and the first product, or calculating a second product of the minimum first image component reference value and the multiplicative factor and setting the additive offset to the difference between the minimum reference value of the image component to be predicted and the second product.
- The method according to claim 1, further comprising: the first image component being a luminance component and the image component to be predicted being a first or second chrominance component; or, the first image component being the first chrominance component and the image component to be predicted being the luminance component or the second chrominance component; or, the first image component being the second chrominance component and the image component to be predicted being the luminance component or the first chrominance component; or, the first image component being a first color component and the image component to be predicted being a second color component or a third color component; or, the first image component being the second color component and the image component to be predicted being the first color component or the third color component; or, the first image component being the third color component and the image component to be predicted being the second color component or the first color component.
- The method according to claim 19, further comprising: the first color component being a red component, the second color component being a green component, and the third color component being a blue component.
- The method according to claim 1, wherein the first filtering process is down-sampling filtering or low-pass filtering.
- A video component prediction apparatus, comprising: an acquiring part configured to acquire a reference value set of a first image component of a current block; a determining part configured to determine a plurality of first image component reference values from the reference value set of the first image component; a filtering part configured to perform a first filtering process on the sample values of the pixels corresponding to the plurality of first image component reference values, respectively, to obtain a plurality of filtered first image reference sample values; the determining part being further configured to determine reference values of an image component to be predicted corresponding to the plurality of filtered first image reference sample values, wherein the image component to be predicted is an image component different from the first image component, and to determine parameters of a component linear model according to the plurality of filtered first image reference sample values and the reference values of the image component to be predicted, wherein the component linear model characterizes a linear mapping relationship that maps sample values of the first image component to sample values of the image component to be predicted; the filtering part being further configured to perform a mapping process on a reconstructed value of the first image component of the current block according to the component linear model to obtain a mapped value; and a prediction part configured to determine a predicted value of the image component to be predicted of the current block according to the mapped value.
- The apparatus according to claim 22, wherein the determining part is further configured to compare the reference values contained in the reference value set of the first image component to determine a maximum first image component reference value and a minimum first image component reference value.
- The apparatus according to claim 23, wherein the filtering part is further configured to perform the first filtering process on the sample values of the pixels corresponding to the maximum first image component reference value and the minimum first image component reference value, respectively, to obtain a filtered maximum first image component reference value and a filtered minimum first image component reference value.
- The apparatus according to claim 24, wherein the determining part is further configured to determine a maximum reference value of the image component to be predicted corresponding to the filtered maximum first image component reference value, and a minimum reference value of the image component to be predicted corresponding to the filtered minimum first image component reference value.
- The apparatus according to claim 25, wherein the determining part is further configured to determine the parameters of the component linear model according to the filtered maximum first image component reference value, the maximum reference value of the image component to be predicted, the filtered minimum first image component reference value, and the minimum reference value of the image component to be predicted, wherein the component linear model characterizes a linear mapping relationship that maps sample values of the first image component to sample values of the image component to be predicted.
- The apparatus according to claim 22, wherein the determining part is further configured to determine one or more reference pixels located outside the current block; and the acquiring part is further configured to use the one or more reference pixels as the one or more first image component reference values.
- The apparatus according to claim 27, wherein the determining part is further configured to determine pixels adjacent to the current block as the one or more reference pixels.
- The apparatus according to claim 22, wherein the filtering part is further configured to perform a second filtering process on the reconstructed value of the first image component to obtain a second filtered value of the reconstructed value of the first image component, and to perform the mapping process on the second filtered value according to the component linear model to obtain the mapped value.
- The apparatus according to claim 29, wherein the second filtering process is down-sampling filtering or low-pass filtering.
- The apparatus according to claim 22, wherein the prediction part is further configured to set the mapped value as the predicted value of the image component to be predicted of the current block.
- The apparatus according to claim 22, wherein the filtering part is further configured to perform a third filtering process on the mapped value to obtain a third filtered value of the mapped value; and the prediction part is further configured to set the third filtered value as the predicted value of the image component to be predicted of the current block.
- The apparatus according to claim 32, wherein the third filtering process is low-pass filtering.
- The apparatus according to claim 25, wherein the determining part is further configured to acquire reference values of the image component to be predicted of the current block, and, among the reference values of the image component to be predicted, to determine the maximum reference value of the image component to be predicted and the minimum reference value of the image component to be predicted.
- The apparatus according to claim 34, wherein the filtering part is further configured to perform a fourth filtering process on the reference values of the image component to be predicted to obtain reconstructed values of the image component to be predicted.
- The apparatus according to claim 35, wherein the fourth filtering process is low-pass filtering.
- The apparatus according to any one of claims 34 to 36, wherein the determining part is further configured to convert a sampling point position of the first image component reference value corresponding to the maximum first image component reference value into a first sampling point position; to set the maximum reference value of the image component to be predicted to the reference value located at the first sampling point position among the reference values of the image component to be predicted; to convert a sampling point position of the first image component reference value corresponding to the minimum first image component reference value into a second sampling point position; and to set the minimum reference value of the image component to be predicted to the reference value located at the second sampling point position among the reference values of the image component to be predicted.
- The apparatus according to claim 26, wherein the determining part is further configured to construct a first sub-component linear model using the filtered maximum first image component reference value, the maximum reference value of the image component to be predicted, and a preset initial linear model; to construct a second sub-component linear model using the filtered minimum first image component reference value, the minimum reference value of the image component to be predicted, and the preset initial linear model; to obtain model parameters based on the first sub-component linear model and the second sub-component linear model; and to construct the component linear model using the model parameters and the preset initial linear model.
- The apparatus according to claim 26, wherein the determining part is further configured such that the parameters of the component linear model include a multiplicative factor and an additive offset; to calculate a first difference between the maximum reference value of the image component to be predicted and the minimum reference value of the image component to be predicted; to calculate a second difference between the maximum first image component reference value and the minimum first image component reference value; to set the multiplicative factor to the ratio of the first difference to the second difference; and to calculate a first product of the maximum first image component reference value and the multiplicative factor and set the additive offset to the difference between the maximum reference value of the image component to be predicted and the first product, or to calculate a second product of the minimum first image component reference value and the multiplicative factor and set the additive offset to the difference between the minimum reference value of the image component to be predicted and the second product.
- The apparatus according to claim 22, wherein the first image component is a luminance component and the image component to be predicted is a first or second chrominance component; or, the first image component is the first chrominance component and the image component to be predicted is the luminance component or the second chrominance component; or, the first image component is the second chrominance component and the image component to be predicted is the luminance component or the first chrominance component; or, the first image component is a first color component and the image component to be predicted is a second color component or a third color component; or, the first image component is the second color component and the image component to be predicted is the first color component or the third color component; or, the first image component is the third color component and the image component to be predicted is the second color component or the first color component.
- The apparatus according to claim 40, wherein the first color component is a red component, the second color component is a green component, and the third color component is a blue component.
- The apparatus according to claim 22, wherein the first filtering process is down-sampling filtering or low-pass filtering.
- A video component prediction apparatus, comprising: a memory for storing executable video component prediction instructions; and a processor for implementing the method according to any one of claims 1 to 21 when executing the executable video component prediction instructions stored in the memory.
- A computer-readable storage medium storing executable video component prediction instructions which, when executed by a processor, implement the method according to any one of claims 1 to 21.
Priority Applications (36)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110236395.5A CN113068030B (zh) | 2018-10-12 | 2019-10-11 | 视频图像分量预测方法及装置、计算机存储介质 |
| BR112021006138-0A BR112021006138A2 (pt) | 2018-10-12 | 2019-10-11 | método de predição de componente de imagem aplicado a um decodificador, dispositivo de predição de componente de vídeo, aplicado a um decodificador, e mídia de armazenamento legível por computador |
| AU2019357929A AU2019357929B2 (en) | 2018-10-12 | 2019-10-11 | Video image component prediction method and apparatus, and computer storage medium |
| MYPI2021001809A MY208324A (en) | 2018-10-12 | 2019-10-11 | Video image component prediction method and apparatus, and computer storage medium |
| KR1020217014094A KR20210070368A (ko) | 2018-10-12 | 2019-10-11 | 비디오 이미지 요소 예측 방법 및 장치, 컴퓨터 저장 매체 |
| EP19870849.7A EP3843399B1 (en) | 2018-10-12 | 2019-10-11 | Video image component prediction method and apparatus, and computer storage medium |
| CN201980041795.1A CN112335245A (zh) | 2018-10-12 | 2019-10-11 | 视频图像分量预测方法及装置、计算机存储介质 |
| MX2021004090A MX2021004090A (es) | 2018-10-12 | 2019-10-11 | Metodo y aparato de prediccion de componente de imagen de video, y medio de almacenamiento por computadora. |
| JP2021517944A JP7518065B2 (ja) | 2018-10-12 | 2019-10-11 | ビデオ画像成分予測方法および装置、コンピュータ記憶媒体 |
| IL281832A IL281832B1 (en) | 2018-10-12 | 2019-10-11 | Method and apparatus for predicting video image components and computer storage medium |
| KR1020257000327A KR20250008806A (ko) | 2018-10-12 | 2019-10-11 | 비디오 이미지 요소 예측 방법 및 장치, 컴퓨터 저장 매체 |
| CA3114816A CA3114816C (en) | 2018-10-12 | 2019-10-11 | Video image component prediction method and apparatus, and computer storage medium |
| KR1020257000316A KR20250011712A (ko) | 2018-10-12 | 2019-10-11 | 비디오 이미지 요소 예측 방법 및 장치, 컴퓨터 저장 매체 |
| CN202511250548.6A CN121099035A (zh) | 2018-10-12 | 2019-10-11 | 视频图像分量预测方法及装置、计算机存储介质 |
| CN202510071832.0A CN119835417A (zh) | 2018-10-12 | 2019-10-11 | 视频图像分量预测方法及装置、计算机存储介质 |
| SG11202103312YA SG11202103312YA (en) | 2018-10-12 | 2019-10-11 | Video image component prediction method and apparatus, and computer storage medium |
| CN202510071803.4A CN119835416A (zh) | 2018-10-12 | 2019-10-11 | 视频图像分量预测方法及装置、计算机存储介质 |
| PH12021550708A PH12021550708A1 (en) | 2018-10-12 | 2021-03-30 | Video image component prediction method and apparatus, and computer storage medium |
| ZA2021/02207A ZA202102207B (en) | 2018-10-12 | 2021-03-31 | Video image component prediction method and apparatus, and computer storage medium |
| US17/220,007 US11388397B2 (en) | 2018-10-12 | 2021-04-01 | Video picture component prediction method and apparatus, and computer storage medium |
| MX2025000211A MX2025000211A (es) | 2018-10-12 | 2021-04-08 | Metodo y aparato de prediccion de componente de imagen de video, y medio de almacenamiento por computadora |
| MX2025000213A MX2025000213A (es) | 2018-10-12 | 2021-04-08 | Metodo y aparato de prediccion de componente de imagen de video, y medio de almacenamiento por computadora |
| MX2025000212A MX2025000212A (es) | 2018-10-12 | 2021-04-08 | Metodo y aparato de prediccion de componente de imagen de video, y medio de almacenamiento por computadora |
| MX2025000210A MX2025000210A (es) | 2018-10-12 | 2021-04-08 | Metodo y aparato de prediccion de componente de imagen de video, y medio de almacenamiento por computadora |
| US17/658,787 US11876958B2 (en) | 2018-10-12 | 2022-04-11 | Video picture component prediction method and apparatus, and computer storage medium |
| US18/523,440 US12323584B2 (en) | 2018-10-12 | 2023-11-29 | Video picture component prediction method and apparatus, and computer storage medium |
| JP2024108422A JP2024129129A (ja) | 2018-10-12 | 2024-07-04 | ビデオ画像成分予測方法および装置、コンピュータ記憶媒体 |
| AU2024287236A AU2024287236A1 (en) | 2018-10-12 | 2024-12-27 | Video image component prediction method and apparatus, and computer storage medium |
| AU2024287239A AU2024287239A1 (en) | 2018-10-12 | 2024-12-27 | Video image component prediction method and apparatus, and computer storage medium |
| AU2024287238A AU2024287238A1 (en) | 2018-10-12 | 2024-12-27 | Video image component prediction method and apparatus, and computer storage medium |
| AU2024287237A AU2024287237A1 (en) | 2018-10-12 | 2024-12-27 | Video image component prediction method and apparatus, and computer storage medium |
| US19/073,537 US20250211729A1 (en) | 2018-10-12 | 2025-03-07 | Video picture component prediction method and apparatus, and computer storage medium |
| JP2025146779A JP2025172916A (ja) | 2018-10-12 | 2025-09-04 | ビデオ画像成分予測方法および装置、コンピュータ記憶媒体 |
| IL323377A IL323377A (en) | 2018-10-12 | 2025-09-15 | Method and apparatus for predicting video image components and computer storage medium |
| IL323378A IL323378A (en) | 2018-10-12 | 2025-09-15 | Method and apparatus for predicting video image components and computer storage medium |
| IL323383A IL323383A (en) | 2018-10-12 | 2025-09-15 | Method and apparatus for predicting video image components and computer storage medium |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201862744747P | 2018-10-12 | 2018-10-12 | |
| US62/744,747 | 2018-10-12 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/220,007 Continuation US11388397B2 (en) | 2018-10-12 | 2021-04-01 | Video picture component prediction method and apparatus, and computer storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2020073990A1 true WO2020073990A1 (zh) | 2020-04-16 |
Family
ID=70164470
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2019/110633 Ceased WO2020073990A1 (zh) | 2018-10-12 | 2019-10-11 | 视频图像分量预测方法及装置、计算机存储介质 |
Country Status (15)
| Country | Link |
|---|---|
| US (4) | US11388397B2 (zh) |
| EP (1) | EP3843399B1 (zh) |
| JP (3) | JP7518065B2 (zh) |
| KR (3) | KR20250011712A (zh) |
| CN (5) | CN121099035A (zh) |
| AU (5) | AU2019357929B2 (zh) |
| BR (1) | BR112021006138A2 (zh) |
| CA (1) | CA3114816C (zh) |
| IL (4) | IL281832B1 (zh) |
| MX (5) | MX2021004090A (zh) |
| MY (1) | MY208324A (zh) |
| PH (1) | PH12021550708A1 (zh) |
| SG (1) | SG11202103312YA (zh) |
| WO (1) | WO2020073990A1 (zh) |
| ZA (1) | ZA202102207B (zh) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP7518065B2 (ja) * | 2018-10-12 | 2024-07-17 | オッポ広東移動通信有限公司 | ビデオ画像成分予測方法および装置、コンピュータ記憶媒体 |
| WO2023039859A1 (zh) * | 2021-09-17 | 2023-03-23 | Oppo广东移动通信有限公司 | 视频编解码方法、设备、系统、及存储介质 |
| WO2023197190A1 (zh) * | 2022-04-12 | 2023-10-19 | Oppo广东移动通信有限公司 | 编解码方法、装置、编码设备、解码设备以及存储介质 |
| WO2025007276A1 (zh) * | 2023-07-04 | 2025-01-09 | Oppo广东移动通信有限公司 | 编解码方法、码流、编码器、解码器以及存储介质 |
| WO2025063778A1 (ko) * | 2023-09-21 | 2025-03-27 | 엘지전자 주식회사 | 영상 인코딩/디코딩 방법 및 장치, 그리고 비트스트림을 저장한 기록 매체 |
| WO2025147830A1 (zh) * | 2024-01-08 | 2025-07-17 | Oppo广东移动通信有限公司 | 编解码方法、码流、编码器、解码器以及存储介质 |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2018039596A1 (en) * | 2016-08-26 | 2018-03-01 | Qualcomm Incorporated | Unification of parameters derivation procedures for local illumination compensation and cross-component linear model prediction |
| WO2018045207A1 (en) * | 2016-08-31 | 2018-03-08 | Qualcomm Incorporated | Cross-component filter |
| WO2018061588A1 (ja) * | 2016-09-27 | 2018-04-05 | 株式会社ドワンゴ | 画像符号化装置、画像符号化方法、及び画像符号化プログラム、並びに、画像復号装置、画像復号方法、及び画像復号プログラム |
| WO2018132710A1 (en) * | 2017-01-13 | 2018-07-19 | Qualcomm Incorporated | Coding video data using derived chroma mode |
Family Cites Families (27)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2013110502A (ja) * | 2011-11-18 | 2013-06-06 | Sony Corp | 画像処理装置及び画像処理方法 |
| EP2805496B1 (en) * | 2012-01-19 | 2016-12-21 | Huawei Technologies Co., Ltd. | Reference pixel reduction for intra lm prediction |
| CN103379321B (zh) * | 2012-04-16 | 2017-02-01 | 华为技术有限公司 | 视频图像分量的预测方法和装置 |
| RU2654129C2 (ru) | 2013-10-14 | 2018-05-16 | МАЙКРОСОФТ ТЕКНОЛОДЖИ ЛАЙСЕНСИНГ, ЭлЭлСи | Функциональные возможности режима внутреннего предсказания с блочным копированием для кодирования и декодирования видео и изображений |
| US10425648B2 (en) * | 2015-09-29 | 2019-09-24 | Qualcomm Incorporated | Video intra-prediction using position-dependent prediction combination for video coding |
| CN117201781A (zh) * | 2015-10-16 | 2023-12-08 | 中兴通讯股份有限公司 | 编码处理、解码处理方法及装置、存储介质 |
| RU2696552C1 (ru) * | 2015-11-17 | 2019-08-02 | Хуавей Текнолоджиз Ко., Лтд. | Способ и устройство для видеокодирования |
| US20170150176A1 (en) | 2015-11-25 | 2017-05-25 | Qualcomm Incorporated | Linear-model prediction with non-square prediction units in video coding |
| US10652575B2 (en) * | 2016-09-15 | 2020-05-12 | Qualcomm Incorporated | Linear model chroma intra prediction for video coding |
| CN114430486A (zh) * | 2017-04-28 | 2022-05-03 | 夏普株式会社 | 图像解码装置以及图像编码装置 |
| US11190799B2 (en) * | 2017-06-21 | 2021-11-30 | Lg Electronics Inc. | Intra-prediction mode-based image processing method and apparatus therefor |
| WO2019004283A1 (ja) * | 2017-06-28 | 2019-01-03 | シャープ株式会社 | 動画像符号化装置及び動画像復号装置 |
| CN107580222B (zh) | 2017-08-01 | 2020-02-14 | 北京交通大学 | 一种基于线性模型预测的图像或视频编码方法 |
| JP2021010046A (ja) * | 2017-10-06 | 2021-01-28 | シャープ株式会社 | 画像符号化装置及び画像復号装置 |
| KR20190083956A (ko) * | 2018-01-05 | 2019-07-15 | 에스케이텔레콤 주식회사 | YCbCr간의 상관 관계를 이용한 영상 부호화/복호화 방법 및 장치 |
| GB2571313B (en) * | 2018-02-23 | 2022-09-21 | Canon Kk | New sample sets and new down-sampling schemes for linear component sample prediction |
| GB2571312B (en) * | 2018-02-23 | 2020-05-27 | Canon Kk | New sample sets and new down-sampling schemes for linear component sample prediction |
| WO2019201232A1 (en) * | 2018-04-16 | 2019-10-24 | Huawei Technologies Co., Ltd. | Intra-prediction using cross-component linear model |
| CN116405687A (zh) * | 2018-07-12 | 2023-07-07 | 华为技术有限公司 | 视频译码中使用交叉分量线性模型进行帧内预测 |
| WO2020015433A1 (en) * | 2018-07-15 | 2020-01-23 | Huawei Technologies Co., Ltd. | Method and apparatus for intra prediction using cross-component linear model |
| SG11202100412SA (en) * | 2018-07-16 | 2021-02-25 | Huawei Tech Co Ltd | Video encoder, video decoder, and corresponding encoding and decoding methods |
| WO2020031902A1 (ja) * | 2018-08-06 | 2020-02-13 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 符号化装置、復号装置、符号化方法および復号方法 |
| JP7424982B2 (ja) * | 2018-08-15 | 2024-01-30 | 日本放送協会 | 画像符号化装置、画像復号装置、及びプログラム |
| CN117896531A (zh) * | 2018-09-05 | 2024-04-16 | 华为技术有限公司 | 色度块预测方法以及设备 |
| CN110896480B (zh) * | 2018-09-12 | 2024-09-10 | 北京字节跳动网络技术有限公司 | 交叉分量线性模型中的尺寸相关的下采样 |
| EP4228262A1 (en) * | 2018-10-08 | 2023-08-16 | Beijing Dajia Internet Information Technology Co., Ltd. | Simplifications of cross-component linear model |
| JP7518065B2 (ja) * | 2018-10-12 | 2024-07-17 | オッポ広東移動通信有限公司 | ビデオ画像成分予測方法および装置、コンピュータ記憶媒体 |
-
2019
- 2019-10-11 JP JP2021517944A patent/JP7518065B2/ja active Active
- 2019-10-11 EP EP19870849.7A patent/EP3843399B1/en active Active
- 2019-10-11 MY MYPI2021001809A patent/MY208324A/en unknown
- 2019-10-11 CN CN202511250548.6A patent/CN121099035A/zh active Pending
- 2019-10-11 SG SG11202103312YA patent/SG11202103312YA/en unknown
- 2019-10-11 CN CN202510071832.0A patent/CN119835417A/zh active Pending
- 2019-10-11 MX MX2021004090A patent/MX2021004090A/es unknown
- 2019-10-11 CA CA3114816A patent/CA3114816C/en active Active
- 2019-10-11 IL IL281832A patent/IL281832B1/en unknown
- 2019-10-11 CN CN201980041795.1A patent/CN112335245A/zh active Pending
- 2019-10-11 KR KR1020257000316A patent/KR20250011712A/ko active Pending
- 2019-10-11 BR BR112021006138-0A patent/BR112021006138A2/pt unknown
- 2019-10-11 CN CN202510071803.4A patent/CN119835416A/zh active Pending
- 2019-10-11 AU AU2019357929A patent/AU2019357929B2/en active Active
- 2019-10-11 CN CN202110236395.5A patent/CN113068030B/zh active Active
- 2019-10-11 WO PCT/CN2019/110633 patent/WO2020073990A1/zh not_active Ceased
- 2019-10-11 KR KR1020217014094A patent/KR20210070368A/ko not_active Ceased
- 2019-10-11 KR KR1020257000327A patent/KR20250008806A/ko active Pending
-
2021
- 2021-03-30 PH PH12021550708A patent/PH12021550708A1/en unknown
- 2021-03-31 ZA ZA2021/02207A patent/ZA202102207B/en unknown
- 2021-04-01 US US17/220,007 patent/US11388397B2/en active Active
- 2021-04-08 MX MX2025000211A patent/MX2025000211A/es unknown
- 2021-04-08 MX MX2025000212A patent/MX2025000212A/es unknown
- 2021-04-08 MX MX2025000210A patent/MX2025000210A/es unknown
- 2021-04-08 MX MX2025000213A patent/MX2025000213A/es unknown
-
2022
- 2022-04-11 US US17/658,787 patent/US11876958B2/en active Active
-
2023
- 2023-11-29 US US18/523,440 patent/US12323584B2/en active Active
-
2024
- 2024-07-04 JP JP2024108422A patent/JP2024129129A/ja active Pending
- 2024-12-27 AU AU2024287236A patent/AU2024287236A1/en active Pending
- 2024-12-27 AU AU2024287237A patent/AU2024287237A1/en active Pending
- 2024-12-27 AU AU2024287239A patent/AU2024287239A1/en active Pending
- 2024-12-27 AU AU2024287238A patent/AU2024287238A1/en active Pending
-
2025
- 2025-03-07 US US19/073,537 patent/US20250211729A1/en active Pending
- 2025-09-04 JP JP2025146779A patent/JP2025172916A/ja active Pending
- 2025-09-15 IL IL323383A patent/IL323383A/en unknown
- 2025-09-15 IL IL323378A patent/IL323378A/en unknown
- 2025-09-15 IL IL323377A patent/IL323377A/en unknown
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2018039596A1 (en) * | 2016-08-26 | 2018-03-01 | Qualcomm Incorporated | Unification of parameters derivation procedures for local illumination compensation and cross-component linear model prediction |
| WO2018045207A1 (en) * | 2016-08-31 | 2018-03-08 | Qualcomm Incorporated | Cross-component filter |
| WO2018061588A1 (ja) * | 2016-09-27 | 2018-04-05 | 株式会社ドワンゴ | 画像符号化装置、画像符号化方法、及び画像符号化プログラム、並びに、画像復号装置、画像復号方法、及び画像復号プログラム |
| WO2018132710A1 (en) * | 2017-01-13 | 2018-07-19 | Qualcomm Incorporated | Coding video data using derived chroma mode |
Also Published As
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN113068030B (zh) | 视频图像分量预测方法及装置、计算机存储介质 | |
| JP2025066835A (ja) | 情報処理方法および装置、機器、記憶媒体 | |
| RU2800683C2 (ru) | Способ и устройство предсказывания компонента видеоизображения и компьютерный носитель данных | |
| JP2023522845A (ja) | 参照領域を使用する映像符号化の方法及びシステム | |
| CN116546216A (zh) | 解码预测方法、装置及计算机存储介质 | |
| WO2020181503A1 (zh) | 帧内预测方法及装置、计算机可读存储介质 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19870849 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 3114816 Country of ref document: CA |
|
| ENP | Entry into the national phase |
Ref document number: 2021517944 Country of ref document: JP Kind code of ref document: A |
|
| ENP | Entry into the national phase |
Ref document number: 2019870849 Country of ref document: EP Effective date: 20210325 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112021006138 Country of ref document: BR |
|
| ENP | Entry into the national phase |
Ref document number: 2019357929 Country of ref document: AU Date of ref document: 20191011 Kind code of ref document: A |
|
| ENP | Entry into the national phase |
Ref document number: 20217014094 Country of ref document: KR Kind code of ref document: A |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2021111917 Country of ref document: RU |
|
| ENP | Entry into the national phase |
Ref document number: 112021006138 Country of ref document: BR Kind code of ref document: A2 Effective date: 20210330 |
|
| WWG | Wipo information: grant in national office |
Ref document number: MX/A/2021/004090 Country of ref document: MX |
|
| WWD | Wipo information: divisional of initial pct application |
Ref document number: 323383 Country of ref document: IL |