WO2023201450A1 - Encoding and decoding method, code stream, encoder, decoder, and storage medium
- Publication number
- WO2023201450A1 (PCT/CN2022/087255)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- reconstructed
- point cloud
- identification information
- slice
- filter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
Definitions
- the embodiments of the present application relate to the field of video coding and decoding technology, and in particular, to a coding and decoding method, a code stream, an encoder, a decoder, and a storage medium.
- the encoding of point cloud attribute information is mainly aimed at encoding color information.
- the color information is converted from RGB color space to YUV color space.
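The color-space conversion step above can be sketched as follows; the full-range BT.601-style coefficients are illustrative only, and a G-PCC implementation's actual conversion matrix may differ:

```python
def rgb_to_yuv(r, g, b):
    """Convert one full-range RGB sample (0-255) to YUV (YCbCr).

    BT.601-style coefficients; illustrative only, the codec's own
    conversion matrix may differ.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    return y, u, v
```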
- the point cloud is recolored using the reconstructed geometric information so that the unencoded attribute information corresponds to the reconstructed geometric information.
- three prediction/transform methods are mainly used: Predicting Transform, Lifting Transform, and Region Adaptive Hierarchical Transform (RAHT), which finally generate a binary code stream.
- the existing G-PCC encoding and decoding framework only performs basic reconstruction on the initial point cloud; in the case of attribute lossy coding, the difference between the reconstructed point cloud and the initial point cloud may be relatively large and the distortion more serious, thus affecting the quality of the entire point cloud.
- Embodiments of the present application provide a coding and decoding method, a code stream, an encoder, a decoder, and a storage medium, which can not only improve the quality of point clouds, but also have universal applicability, save bit rates, and thereby improve coding and decoding performance.
- inventions of the present application provide an encoding method, which is applied to an encoder.
- the method includes:
- if the first filter identification information indicates filtering the reconstructed slice, the first filter coefficient is encoded.
- embodiments of the present application provide a code stream that includes parameter information for determining the decoded point cloud; wherein the parameter information includes at least one of the following: residual values of the attribute information of points in the initial slice, the first filter identification information, and the first filter coefficient.
- embodiments of the present application provide a decoding method, applied to a decoder, and the method includes:
- the code stream is parsed to determine the first filter coefficient
- embodiments of the present application provide an encoder, which includes a first determination unit, a first filtering unit and a coding unit; wherein,
- the first determination unit is configured to determine the reconstructed slice of the reconstructed point cloud; and when the reconstructed point cloud meets the preset conditions, determine the first filter coefficient according to the reconstructed slice and the initial slice corresponding to the reconstructed slice;
- a first filtering unit configured to filter the reconstructed slice according to the first filter coefficient and determine the filtered slice corresponding to the reconstructed slice;
- the first determining unit is further configured to determine the first filtering identification information according to the filtered slice; wherein the first filtering identification information indicates whether to perform filtering processing on the reconstructed slice;
- An encoding unit configured to encode the first filter identification information; and if the first filter identification information indicates filtering the reconstructed slice, encode the first filter coefficient
- the encoding unit is also configured to write the obtained encoded bits into the code stream.
- embodiments of the present application provide an encoder, which includes a first memory and a first processor; wherein,
- a first memory for storing a computer program capable of running on the first processor
- the first processor is used to execute the method of the first aspect when running the computer program.
- embodiments of the present application provide a decoder, which includes a decoding unit and a second filtering unit; wherein,
- the decoding unit is configured to parse the code stream and determine the first filter identification information; and if the first filter identification information indicates that the reconstructed slice of the reconstructed point cloud is filtered, parse the code stream and determine the first filter coefficient;
- the second filtering unit is configured to filter the reconstructed slice according to the first filter coefficient and determine the filtered slice corresponding to the reconstructed slice.
- embodiments of the present application provide a decoder, which includes a second memory and a second processor; wherein,
- a second memory for storing a computer program capable of running on the second processor
- the second processor is configured to perform the method described in the third aspect when running the computer program.
- embodiments of the present application provide a computer storage medium that stores a computer program.
- the computer program implements the method described in the first aspect when executed by a first processor, or implements the method described in the third aspect when executed by a second processor.
- Embodiments of the present application provide a coding and decoding method, a code stream, an encoder, a decoder, and a storage medium.
- the reconstructed slice of the reconstructed point cloud is determined; when the reconstructed point cloud meets the preset conditions, the first filter coefficient is determined according to the reconstructed slice and the initial slice corresponding to the reconstructed slice; the reconstructed slice is filtered according to the first filter coefficient to determine the filtered slice corresponding to the reconstructed slice; the first filter identification information, which indicates whether to perform filtering processing on the reconstructed slice, is determined according to the filtered slice; the first filter identification information is encoded; if the first filter identification information indicates filtering the reconstructed slice, the first filter coefficient is encoded; and the obtained encoded bits are written into the code stream.
- the code stream is parsed to determine the first filter identification information; if the first filter identification information indicates that the reconstructed slice of the reconstructed point cloud is filtered, the code stream is parsed to determine the first filter coefficient; according to the first filter coefficient, the The reconstructed slice is filtered, and the filtered slice corresponding to the reconstructed slice is determined.
- the encoding end performs filtering based on the divided reconstructed slices, and passes the corresponding filter coefficients to the decoder only after determining that a reconstructed slice needs to be filtered; correspondingly, the decoder can directly decode the filter coefficients and then use them to filter the reconstructed slices. In this way, filtering based on reconstructed slices not only avoids memory overflow due to limited memory resources when processing large point clouds, making the method highly universal; it can also optimize the reconstructed point cloud, which improves point cloud quality, saves bit rate, and improves encoding and decoding efficiency.
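The encode-side decision summarized above (signal a flag, and emit coefficients only when filtering helps) can be sketched as follows. Here `derive_coeffs` and `apply_filter` are hypothetical callables standing in for the filter-training and filtering steps, and the MSE comparison is an assumed selection criterion, not one stated in the text:

```python
def mse(a, b):
    """Mean squared error between two equal-length attribute lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def slice_filter_decision(rec_slice, init_slice, derive_coeffs, apply_filter):
    """Return (first_filter_flag, coeffs).

    Coefficients are derived from the reconstructed slice and its initial
    slice, and are signalled only when the filtered slice is closer to the
    initial slice than the unfiltered one (assumed selection criterion).
    """
    coeffs = derive_coeffs(rec_slice, init_slice)
    filtered = apply_filter(rec_slice, coeffs)
    if mse(filtered, init_slice) < mse(rec_slice, init_slice):
        return True, coeffs          # flag=1: also encode the coefficients
    return False, None               # flag=0: coefficients are not signalled
```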
- Figure 1 is a schematic diagram of the composition framework of a G-PCC encoder
- Figure 2 is a schematic diagram of the composition framework of a G-PCC decoder
- Figure 3 is a schematic structural diagram of a zero-run encoding
- Figure 4 is a schematic flow chart 1 of an encoding method provided by an embodiment of the present application.
- Figure 5 is a schematic flow chart 2 of an encoding method provided by an embodiment of the present application.
- Figure 6 is a schematic flow chart 3 of an encoding method provided by an embodiment of the present application.
- Figure 7 is a schematic flowchart 1 of a decoding method provided by an embodiment of the present application.
- Figure 8 is a schematic flow chart 2 of a decoding method provided by an embodiment of the present application.
- Figure 9 is a schematic flow chart 1 of a coding end filtering process provided by an embodiment of the present application.
- Figure 10 is a schematic flow chart 2 of an encoding-side filtering process provided by an embodiment of the present application.
- Figure 11 is a schematic flow chart of a decoding end filtering process provided by an embodiment of the present application.
- Figure 12 is a schematic diagram of test results of predictive transformation under CY test conditions provided by the embodiment of the present application.
- Figure 13 is a schematic diagram of the test results of lifting transformation under C1 test conditions provided by the embodiment of the present application.
- Figure 14 is a schematic diagram of the test results of lifting transformation under C2 test conditions provided by the embodiment of the present application.
- Figure 15 is a schematic diagram of the test results of RAHT transformation under C1 test conditions provided by the embodiment of the present application.
- Figure 16 is a schematic diagram of the test results of RAHT transformation under C2 test conditions provided by the embodiment of the present application.
- Figure 17 is a schematic structural diagram of an encoder provided by an embodiment of the present application.
- Figure 18 is a schematic diagram of the specific hardware structure of an encoder provided by an embodiment of the present application.
- Figure 19 is a schematic structural diagram of a decoder provided by an embodiment of the present application.
- Figure 20 is a schematic diagram of the specific hardware structure of a decoder provided by an embodiment of the present application.
- Figure 21 is a schematic structural diagram of a coding and decoding system provided by an embodiment of the present application.
- G-PCC Geometry-based Point Cloud Compression
- V-PCC Video-based Point Cloud Compression
- RAHT Region Adaptive Hierarchical Transform
- PSNR Peak Signal to Noise Ratio
- MMSE Minimum Mean Squared Error
- Luminance component (Luma, Y)
- Red chroma component (Chroma red, Cr)
- Point cloud is a three-dimensional representation of the surface of an object.
- collection equipment such as photoelectric radar, lidar, laser scanner, and multi-view camera, the point cloud (data) of the surface of the object can be collected.
- Point Cloud refers to a collection of massive three-dimensional points.
- the points in the point cloud can include point location information and point attribute information.
- the position information of the point may be the three-dimensional coordinate information of the point.
- the position information of a point can also be called the geometric information of the point.
- the point attribute information may include color information and/or reflectivity, etc.
- color information can be information on any color space.
- the color information may be RGB information, where R represents red, G represents green, and B represents blue.
- the color information may be luminance-chrominance (YCbCr, YUV) information, where Y represents luminance, Cb (U) represents blue chroma, and Cr (V) represents red chroma.
- the points in the point cloud can include the three-dimensional coordinate information of the point and the laser reflection intensity (reflectance) of the point.
- the points in the point cloud may include the three-dimensional coordinate information of the point and the color information of the point.
- a point cloud is obtained by combining the principles of laser measurement and photogrammetry.
- the points in the point cloud may include the three-dimensional coordinate information of the point, the laser reflection intensity (reflectance) of the point, and the color information of the point.
- Point clouds can be divided into:
- the first type, static point cloud: the object is stationary and the device that acquires the point cloud is also stationary;
- the second type, dynamic point cloud: the object is moving, but the device that acquires the point cloud is stationary;
- the third type, dynamically acquired point cloud: the device that acquires the point cloud is in motion.
- point clouds are divided into two categories according to their uses:
- Category 1: machine-perception point clouds, which can be used in scenarios such as autonomous navigation systems, real-time inspection systems, geographic information systems, visual sorting robots, and rescue and disaster-relief robots;
- Category 2: human-eye-perception point clouds, which can be used in application scenarios such as digital cultural heritage, free-viewpoint broadcasting, three-dimensional immersive communication, and three-dimensional immersive interaction.
- since the point cloud is a collection of massive points, storing the point cloud not only consumes a lot of memory but is also not conducive to transmission; there is no bandwidth large enough to support direct transmission of the uncompressed point cloud at the network layer, so the point cloud must be compressed.
- the point cloud coding framework that compresses point clouds can be the G-PCC codec framework or the V-PCC codec framework provided by the Moving Picture Experts Group (MPEG), or a framework provided by another audio/video coding standard.
- the G-PCC encoding and decoding framework can be used to compress the first type of static point cloud and the third type of dynamic point cloud
- the V-PCC encoding and decoding framework can be used to compress the second type of dynamic point cloud.
- the description here mainly focuses on the G-PCC encoding and decoding framework.
- each slice can be independently encoded.
- FIG. 1 is a schematic diagram of the composition framework of a G-PCC encoder. As shown in Figure 1, this G-PCC encoder is applied to the point cloud encoder.
- in this G-PCC coding framework, the point cloud data to be encoded is first divided into multiple slices through slice division. In each slice, the geometric information of the point cloud and the attribute information corresponding to each point are encoded separately. In the process of geometric encoding, the geometric information is coordinate-transformed so that all points are contained in a bounding box, and then quantized; this quantization step mainly plays a scaling role. Because quantization rounding makes the geometric information of some points identical, whether to remove duplicate points is decided based on parameters.
- the process of quantifying and removing duplicate points is also called the voxelization process.
- the bounding box is divided into eight equal sub-cubes, and the non-empty sub-cubes (those containing points of the point cloud) continue to be divided into eight equal parts until the leaf nodes are obtained.
- the division stops when the leaf node is a 1×1×1 unit cube, and the points in the leaf nodes are arithmetically encoded to generate a binary geometric bit stream, that is, a geometry code stream.
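The recursive octree subdivision described above can be sketched as follows; this is a simplified model that works on integer coordinates and power-of-two box sizes, and the arithmetic-coding stage is omitted:

```python
def octree_leaves(points, origin, size, min_size=1):
    """Recursively split a cubic bounding box into eight children, keeping
    only non-empty sub-cubes, until each leaf is a min_size unit cube.

    Yields (leaf_origin, leaf_size) for every occupied leaf. Integer
    coordinates assumed; size must be a power of two.
    """
    if not points:
        return                      # empty sub-cube: pruned from the octree
    if size <= min_size:
        yield origin, size          # occupied 1x1x1 leaf node
        return
    half = size // 2
    for dx in (0, half):
        for dy in (0, half):
            for dz in (0, half):
                child = (origin[0] + dx, origin[1] + dy, origin[2] + dz)
                inside = [p for p in points
                          if all(child[i] <= p[i] < child[i] + half
                                 for i in range(3))]
                yield from octree_leaves(inside, child, half, min_size)
```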
- Trisoup does not need to divide the point cloud step by step down to 1×1×1 unit cubes; instead, division stops at sub-blocks (Blocks) with side length W. Based on the surface formed by the distribution of points in each block, at most twelve intersection points (Vertex) generated between the surface and the twelve edges of the block are obtained; the Vertex values are arithmetically encoded (surface fitting based on the intersection points) to generate a binary geometric bit stream, that is, the geometry code stream. Vertex is also used in the geometric reconstruction process, and the reconstructed geometric information is used when encoding the attributes of the point cloud.
- after geometric encoding is completed and the geometric information is reconstructed, color conversion is performed to convert the color information (i.e., attribute information) from the RGB color space to the YUV color space. Then the point cloud is recolored using the reconstructed geometric information so that the unencoded attribute information corresponds to the reconstructed geometric information. Attribute encoding is mainly carried out for color information. In the process of color information encoding, there are two main transformation methods: one is the distance-based lifting transform that relies on LOD division, and the other is the direct RAHT transform; both transform the color information from the spatial domain to the frequency domain and quantize the resulting coefficients.
- after the geometry-encoded data produced by octree division and surface fitting and the attribute-encoded data produced by coefficient quantization are slice-synthesized, the Vertex coordinates of each block are sequentially encoded (that is, arithmetic coding) to generate a binary attribute bit stream, that is, an attribute code stream.
- FIG. 2 is a schematic diagram of the composition framework of a G-PCC decoder. As shown in Figure 2, this G-PCC decoder is applied to the point cloud decoder. In this G-PCC decoding framework, for the obtained binary code stream, the geometry bit stream and the attribute bit stream in the binary code stream are first decoded independently.
- when decoding the geometry bit stream, the geometric information of the point cloud is obtained through arithmetic decoding, octree synthesis, surface fitting, geometry reconstruction, and inverse coordinate conversion; when decoding the attribute bit stream, the attribute information of the point cloud is obtained through arithmetic decoding, inverse quantization, LOD-based inverse lifting transform or RAHT-based inverse transform, and inverse color conversion; the three-dimensional image model of the point cloud data to be encoded is then restored based on the geometric information and attribute information.
- LOD division is mainly used for two methods: Predicting Transform and Lifting Transform in point cloud attribute transformation.
- the process of LOD division is after the geometric reconstruction of the point cloud.
- the geometric coordinate information of the point cloud can be directly obtained.
- the decoding operation mirrors the zero-run length encoding method used at the encoder.
- the value of the first zero_cnt in the code stream is parsed. If it is greater than 0, the next zero_cnt consecutive residuals are 0; if zero_cnt is equal to 0, the attribute residual at the current point is not 0, so the corresponding residual value is decoded, inverse-quantized, and added to the color prediction value of the current point to obtain the reconstructed value of the point. This operation continues until all points in the point cloud are decoded.
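The zero_cnt parsing logic above can be sketched as follows, with the symbol stream modeled as a plain list; arithmetic decoding and the exact inverse-quantization rule are abstracted away (the `dequant` stub is a placeholder):

```python
def zero_run_decode(symbols, predictions, dequant=lambda r: r):
    """Reconstruct attribute values from a zero-run coded residual stream.

    `symbols` alternates zero_cnt values and non-zero residuals: a
    zero_cnt > 0 means that many consecutive residuals are 0; otherwise
    the next symbol is the residual of the current point, which is
    inverse-quantized (stubbed by `dequant`) and added to the prediction.
    """
    recon, i = [], 0
    preds = iter(predictions)
    while len(recon) < len(predictions):
        zero_cnt = symbols[i]; i += 1
        for _ in range(zero_cnt):
            recon.append(next(preds))              # residual 0: recon = pred
        if len(recon) < len(predictions):
            residual = symbols[i]; i += 1
            recon.append(next(preds) + dequant(residual))
    return recon
```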
- FIG. 3 is a schematic structural diagram of a zero-run encoding.
- the current point will be used as a nearest neighbor of points in subsequent LODs, and the color reconstruction value of the current point will be used to predict the attributes of subsequent points.
- the existing G-PCC encoding and decoding framework only performs basic reconstruction of point cloud sequences; for attribute lossy (or near-lossless) coding methods, no further processing is performed after reconstruction to improve the quality of the color attributes of the reconstructed point cloud. This may cause a large difference between the reconstructed point cloud and the initial point cloud, causing serious distortion that affects the quality and visual effect of the entire point cloud.
- in the G-PCC point cloud standard test data set, there are a total of 46 point cloud sequences (only single-frame point clouds are considered here), belonging to Cat1-A and Cat1-B respectively. Cat1-B contains many large point clouds, with more than 10 million points and sizes exceeding 2 GB. For these point cloud sequences, memory overflow may occur when using the related technology for point cloud processing, causing the program to crash. Therefore, the current related technology still has certain limitations, cannot effectively process all point clouds, and has poor universality.
- the embodiment of the present application proposes a coding and decoding method, which can affect the arithmetic coding and subsequent parts in the G-PCC coding framework, and can also affect the part after attribute reconstruction in the G-PCC decoding framework.
- the embodiment of the present application proposes a coding method, which can be applied to the arithmetic coding marked by the dotted box in Figure 1 and subsequent parts.
- the embodiment of the present application also proposes a decoding method, which can be applied to the part after attribute reconstruction marked with a dotted box in Figure 2.
- the encoding end performs filtering based on the divided reconstructed slices, and passes the corresponding filter coefficients to the decoder only after determining that a reconstructed slice needs to be filtered; correspondingly, the decoder can directly decode the filter coefficients and then use them to filter the reconstructed slices.
- filtering based on reconstructed slices can not only avoid memory overflow due to limited memory resources when processing large point clouds, making the method highly universal; it can also optimize the reconstructed point cloud, improving point cloud quality, saving bit rate, and improving encoding and decoding performance.
- FIG. 4 shows a schematic flowchart 1 of an encoding method provided by an embodiment of the present application.
- the method may include:
- the encoding method described in the embodiment of the present application specifically refers to the point cloud encoding method, which can be applied to a point cloud encoder (in the embodiment of the present application, it may be referred to as "encoder" for short).
- the method may further include: slicing the initial point cloud to obtain at least one initial slice; sequentially encoding and reconstructing the at least one initial slice to obtain at least one reconstructed slice; and aggregating the reconstructed slices to obtain a reconstructed point cloud.
- the initial point cloud first needs to be sliced, that is, a point cloud with a large number of points is divided into multiple slices, each containing roughly 800,000 to 1,000,000 points; encoding and reconstruction are then performed on each slice to obtain reconstructed slices; by aggregating all the reconstructed slices, the reconstructed point cloud can be obtained.
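A minimal sketch of this slicing step, using a simple sequential split (a real encoder may partition spatially rather than by point index; the default slice size mirrors the figure in the text):

```python
def slice_point_cloud(points, max_points=1_000_000):
    """Split the initial point cloud into slices of at most max_points each.

    Sequential split for illustration; a real encoder may partition
    spatially. The ~1,000,000 default mirrors the slice size in the text.
    """
    return [points[i:i + max_points] for i in range(0, len(points), max_points)]
```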
- slicing processing is usually required for the initial point cloud.
- the related technology aggregates all reconstructed slices into a reconstructed point cloud and then considers whether to perform filtering based on the entire reconstructed point cloud.
- for some large point clouds, for example those with more than 10 million points, a large amount of memory space is occupied, resulting in memory overflow or even program crashes when processing them.
- in this way, point cloud compression efficiency can be ensured to the greatest extent, while avoiding the memory overflow caused by occupying a large amount of memory space when processing large point clouds, making the method of the embodiment of the present application more universal.
- for a point in an initial slice of the initial point cloud, when encoding that point it can be regarded as a point to be encoded, and the initial slice contains multiple points to be encoded.
- each point in the initial slice corresponds to a piece of geometric information and a piece of attribute information, where the geometric information represents the spatial position of the point and the attribute information represents the attribute value (such as a color component value) of the point.
- the attribute information may include color components, specifically color information in any color space.
- the attribute information may be color information in RGB space, color information in YUV space, color information in YCbCr space, etc., which are not limited in the embodiments of this application.
- the color component may include at least one of the following: a first color component, a second color component, and a third color component.
- if the color component conforms to the RGB color space, it can be determined that the first, second, and third color components are the R, G, and B components; if the color component conforms to the YUV color space, they are the Y, U, and V components; if the color component conforms to the YCbCr color space, they are the Y, Cb, and Cr components.
- the attribute information of the point may be a color component, or it may be reflectivity, refractive index, or another attribute; the embodiments of this application do not impose any limitation on this.
- in constructing the reconstructed slice, the predicted value and residual value of the attribute information of a point can be determined first, and then the predicted value and residual value are used to calculate the reconstructed value of the point's attribute information.
- the geometric information and attribute information of multiple target neighbor points of the point, combined with the geometric information of the point itself, can be used to predict the attribute information of the point to obtain the corresponding predicted value, after which the corresponding reconstructed value can be determined.
- after the reconstructed value of a point's attribute information is determined, the point can serve as a nearest neighbor of points in subsequent LODs, so that the reconstructed value of its attribute information can be used to predict the attributes of subsequent points; in this way, reconstructed slices can be constructed, and all reconstructed slices are then aggregated to obtain the reconstructed point cloud.
- the initial point cloud can be obtained directly through the point cloud reading function of the encoding and decoding program, and the reconstructed point cloud is obtained after all encoding operations are completed.
- the reconstructed point cloud in the embodiment of the present application can be the reconstructed point cloud output after decoding, or can serve as a reference for decoding subsequent point clouds; in addition, the reconstructed point cloud here can be used within the prediction loop, that is, with the filter acting as an in-loop filter, in which case it serves as a reference for decoding subsequent point clouds; it can also be used outside the prediction loop, that is, with the filter acting as a post-filter, in which case it is not used as a reference for decoding subsequent point clouds; the embodiments of this application do not specifically limit this.
- the method may further include:
- based on the parameter information of the reconstructed point cloud, it is determined whether the reconstructed point cloud meets the preset conditions.
- determining the parameter information of the reconstructed point cloud may include: determining the number of points in the reconstructed point cloud. That is, the embodiment of the present application can determine whether the reconstructed point cloud meets the preset conditions based on the number of points in the reconstructed point cloud, and then decide whether to filter the entire reconstructed point cloud or the divided reconstructed slices.
- the method may also include:
- if the number of points in the reconstructed point cloud is less than the preset threshold, it is determined that the reconstructed point cloud does not meet the preset conditions.
- the preset threshold here can be used to measure whether the reconstructed point cloud is a large point cloud, that is, whether the reconstructed point cloud should be filtered as a whole or filtered according to the divided reconstructed slices. For example, if the number of points in the reconstructed point cloud is greater than or equal to the preset threshold, the reconstructed point cloud contains too many points; to avoid memory overflow when filtering it, it can be determined that the reconstructed point cloud meets the preset conditions, and filtering is performed on the divided reconstructed slices.
- if the number of points in the reconstructed point cloud is less than the preset threshold, the number of points will not cause problems during filtering, so the reconstructed point cloud does not meet the preset conditions.
- in that case the reconstructed point cloud is filtered as a whole; in this way, point cloud compression efficiency can be ensured to the greatest extent, and the method is more universal.
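The decision rule above reduces to a point-count comparison; the threshold value below is purely illustrative and not taken from the specification:

```python
def filtering_granularity(num_points, threshold=2_000_000):
    """Preset-condition check: large clouds are filtered per reconstructed
    slice to avoid memory overflow, small ones as a whole. The threshold
    value is purely illustrative, not from the specification.
    """
    return "per_slice" if num_points >= threshold else "whole_cloud"
```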
- determining the number of points in the reconstructed point cloud may include: performing geometric quantization on the initial point cloud to obtain a quantized point cloud; and determining the number of points in the reconstructed point cloud based on the number of points in the quantized point cloud.
- the method of the embodiment of the present application can be applied to all encoding methods except attribute-lossless ones, such as geometry-lossless with attribute-lossy encoding, geometry-lossy with attribute-lossy encoding, geometry-lossless with attribute near-lossless encoding, etc.
- geometric losslessness after the initial point cloud is subjected to geometric quantification processing, the number and geometric coordinates of the points in the quantized point cloud have not changed, making the number of points in the initial point cloud different from the reconstructed point cloud.
- the data of the midpoints are the same; but in the case of geometric loss, after the initial point cloud is subjected to geometric quantization processing, depending on the set code rate, the number of midpoints and geometric coordinates of the obtained quantized point cloud will be larger. Changes make the number of midpoints in the initial point cloud and the number of midpoints in the reconstructed point cloud different, resulting in the inability to directly determine the number of midpoints in the reconstructed point cloud based on the number of midpoints in the initial point cloud.
- after geometric quantization processing of the initial point cloud, the quantized point cloud can be obtained. Since the number of points in the quantized point cloud is the same as the number of points in the reconstructed point cloud (and, if the geometry is lossless, also the same as the number of points in the initial point cloud), in the embodiment of the present application the encoding end can determine the number of points in the reconstructed point cloud based on the number of points in the quantized point cloud; then, based on the relationship between the number of points in the reconstructed point cloud and the preset threshold, it can determine whether the reconstructed point cloud meets the preset conditions, and thus whether to filter the entire reconstructed point cloud or to filter the divided reconstructed slices separately.
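The threshold decision above can be sketched in a few lines. This is a minimal illustration only; the threshold value and the function and mode names are assumptions made for this sketch, not values mandated by the embodiment:

```python
# Minimal sketch of the decision described above: the point count of the
# quantized point cloud stands in for the point count of the reconstructed
# point cloud. The threshold value is a hypothetical example.
def choose_filtering_mode(num_quantized_points: int,
                          preset_threshold: int = 2_000_000) -> str:
    """Return how the reconstructed point cloud should be filtered."""
    if num_quantized_points >= preset_threshold:
        # Too many points: filter each reconstructed slice separately
        # to avoid memory overflow.
        return "per_slice"
    # Small enough: filter the reconstructed point cloud as a whole.
    return "whole_cloud"
```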
- the first filter coefficient can be determined based on the reconstructed slice and the initial slice corresponding to the reconstructed slice. It should be noted that there is a corresponding relationship between the points in the initial slice and the points in the reconstructed slice.
- the K 1 target points corresponding to a point in the reconstructed slice can include the current point and the (K 1 -1) neighbor points adjacent to the current point, where K 1 is an integer greater than 1.
- determining the first filter coefficient according to the reconstructed slice and the initial slice corresponding to the reconstructed slice may include: determining the first filter coefficient according to the points in the initial slice and the K 1 target points corresponding to the points in the reconstructed slice.
- determining the K 1 target points corresponding to a point in the reconstructed slice may include:
- the first point can be a point in the reconstructed slice.
- the filter here may be an adaptive filter, for example, it may be a filter based on a neural network, a Wiener filter, etc., which is not specifically limited here.
- the Wiener filter is taken as an example.
- the main function of the Wiener filter is to calculate the filter coefficient and at the same time determine whether the quality of the point cloud after Wiener filtering has been improved. That is to say, the filter coefficients (such as the first filter coefficient, the second filter coefficient, etc.) described in the embodiments of the present application can be coefficients used for Wiener filtering, that is, the filter coefficients can also be called Wiener filter output coefficients.
- Wiener Filter is a linear filter with minimization of the mean square error as the optimal criterion.
- its criterion is that the mean square of the difference between its output and a given function (often called the expected output) is minimized, and through mathematical derivation it can ultimately be reduced to the problem of solving a system of Toeplitz equations.
- the Wiener filter is also called the least squares filter or the minimum mean square error filter.
- Wiener filtering is a method that uses the correlation characteristics and spectral characteristics of stationary random processes to filter signals mixed with noise. It is currently one of the basic filtering methods.
- the specific algorithm of Wiener filtering is as follows:
- the output when the filter length or order is M is as follows:
  y(n) = Σ_{k=0}^{M-1} h(k)·x(n-k)
- M is the length or order of the filter
- y(n) is the output signal
- x(n) is the input signal sequence (mixed with noise).
- the Wiener filter takes the minimum mean square error as the objective function, so the objective function is expressed as follows, where d(n) is the expected signal and e(n) = d(n) - y(n) is the estimation error:
  J = E[e²(n)] = E[(d(n) - y(n))²]
- to minimize the objective function, the derivative of the objective function with respect to each filter coefficient should be 0, that is:
  ∂J/∂h(k) = 0, k = 0, 1, …, M-1
  which leads to the Wiener-Hopf equations Rxx·H = Rxd.
- Rxd and Rxx are the cross-correlation matrix of the input signal and the expected signal and the autocorrelation matrix of the input signal, respectively. Therefore, by calculating the optimal solution of the Wiener-Hopf equations, the filter coefficient H can be obtained:
  H = Rxx⁻¹·Rxd
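As an illustration of the Wiener-Hopf solution above, the following is a minimal pure-Python sketch, not the reference implementation: it builds the autocorrelation matrix and cross-correlation vector from an input matrix P (each row holds the reconstructed values of one point's target points) and an expected vector S (original values), then solves for H. The helper `solve_linear` is written out only so the sketch is self-contained; a real implementation would use a linear algebra library.

```python
def solve_linear(A, b):
    """Solve A·h = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * h[c] for c in range(r + 1, n))
        h[r] = (M[r][n] - s) / M[r][r]
    return h

def wiener_coefficients(P, S):
    """P: one row per point with the reconstructed values of its K target
    points; S: the expected (original) values. Returns the coefficient
    vector H minimizing the mean squared error of P @ H against S."""
    K = len(P[0])
    # Autocorrelation matrix A = P^T P and cross-correlation vector B = P^T S.
    A = [[sum(row[i] * row[j] for row in P) for j in range(K)]
         for i in range(K)]
    B = [sum(row[i] * s for row, s in zip(P, S)) for i in range(K)]
    return solve_linear(A, B)
```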
- determining the first filter coefficient according to the reconstructed slice and the initial slice corresponding to the reconstructed slice may include: inputting the reconstructed slice and the initial slice corresponding to the reconstructed slice into the Wiener filter for calculation, and outputting The first filter coefficient.
- inputting the reconstructed slice and the initial slice corresponding to the reconstructed slice into the Wiener filter for calculation and outputting the first filter coefficient may include: determining a first attribute parameter from the initial slice and a second attribute parameter from the reconstructed slice, and determining the first filter coefficient based on the first attribute parameter and the second attribute parameter.
- determining the first filter coefficient based on the first attribute parameter and the second attribute parameter may include: determining cross-correlation parameters based on the first attribute parameter and the second attribute parameter, and auto-correlation parameters based on the second attribute parameter; coefficient calculation is then performed based on the cross-correlation parameters and auto-correlation parameters to obtain the first filter coefficient.
- specifically, for a color component (such as the Y component, U component, or V component), the first attribute parameter and the second attribute parameter of the color component are determined first, and then the first filter coefficient used for filtering can be further determined based on the first attribute parameter and the second attribute parameter.
- the first attribute parameter is determined based on the original value of the attribute information of at least one point in the initial slice; the second attribute parameter is determined based on the reconstructed value of the attribute information of K 1 target points corresponding to at least one point in the reconstructed slice.
- the K 1 target points here include the current point and (K 1 -1) neighbor points adjacent to the current point.
- the order of the Wiener filter is also involved.
- the order of the Wiener filter can be set equal to M.
- the values of M and K 1 may be the same or different, and are not specifically limited here.
- the filter type can be used to indicate the filter order, and/or filter shape, and/or filter dimension.
- filter shapes include diamonds, rectangles, etc.
- filter dimensions include one-dimensional, two-dimensional or even more dimensions.
- different filter types can correspond to Wiener filters of different orders.
- for example, the order values can be 12, 32, or 128; different filter types can also correspond to filters of different dimensions, such as one-dimensional filters, two-dimensional filters, etc., which are not specifically limited here. That is to say, if a 16th-order filter needs to be determined, 16 points can be used to determine a 16th-order asymmetric filter, or an 8th-order one-dimensional symmetric filter, or other quantities can be used (such as more special two-dimensional or three-dimensional filters, etc.); the filter is not specifically limited here.
- in the process of determining the first filter coefficient based on the initial slice and the reconstructed slice, the first attribute parameter of a color component can first be determined based on the original values of the color component at the points in the initial slice; at the same time, the second attribute parameter of the color component can be determined based on the reconstructed values of the color component at the K 1 target points corresponding to the points in the reconstructed slice; finally, the first filter coefficient vector corresponding to the color component can be determined based on the first attribute parameter and the second attribute parameter.
- the corresponding first attribute parameter and second attribute parameter are calculated for a color component, and the first filter coefficient vector corresponding to the color component is determined using the first attribute parameter and the second attribute parameter.
- after traversing all color components, the first filter coefficient vector of each color component can be obtained, so that the first filter coefficient can be determined based on the first filter coefficient vectors of all color components.
- specifically, the cross-correlation parameter corresponding to the color component can be determined based on the first attribute parameter and the second attribute parameter corresponding to the color component; at the same time, the autocorrelation parameter corresponding to the color component can be determined based on the second attribute parameter; then the first filter coefficient vector corresponding to the color component can be determined based on the cross-correlation parameter and the autocorrelation parameter; finally, all color components can be traversed, and the first filter coefficient vectors corresponding to all color components can be used to determine the first filter coefficient.
- K 1 target points may include the point itself and (K 1 -1) neighboring points adjacent to it.
- assume that n indexes the points in the point cloud sequence, and use the vector S(n) to represent the original values of all points in the initial slice under a certain color component (such as the Y component); that is, S(n) is the original value of the Y component of each point in the initial slice.
- the cross-correlation parameter B(k) is calculated based on the first attribute parameter S(n) and the second attribute parameter P(n,k), as shown below:
  B(k) = Σ_n P(n,k)·S(n)
- the autocorrelation parameter A(k1,k2) is calculated according to the second attribute parameter P(n,k), as shown below:
  A(k1,k2) = Σ_n P(n,k1)·P(n,k2)
- in this way, the optimal coefficient H(k) under the Y component, that is, the first filter coefficient vector H(k) under the Y component, is obtained by solving A·H = B, as follows:
  H = A⁻¹·B
- the U component and the V component can be traversed according to the above method, and finally the first filter coefficient vector under the U component and the first filter coefficient vector under the V component can be determined; then the first filter coefficient vectors under all color components can be used to determine the first filter coefficient.
- it should be noted that the process of determining the first filter coefficient described above is based on the YUV color space; if the initial slice or the reconstructed slice does not conform to the YUV color space (for example, it is in the RGB color space), color space conversion is also required to make it conform to the YUV color space.
- in some embodiments, the method may also include: if the color components of the points in the initial slice conform to the RGB color space, performing color space conversion on the initial slice so that the color components of the points in the initial slice conform to the YUV color space; if the color components of the points in the reconstructed slice conform to the RGB color space, performing color space conversion on the reconstructed slice so that the color components of the points in the reconstructed slice conform to the YUV color space.
- in this way, the first attribute parameter and the second attribute parameter corresponding to each color component can be determined based on the color components of the points in the initial slice and the reconstructed slice; then the first filter coefficient vector corresponding to each color component can be determined; finally, the first filter coefficient vectors corresponding to all color components can be used to determine the first filter coefficient of the reconstructed slice.
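As an illustration of the color space conversion step, the sketch below uses one common full-range BT.601-style RGB-to-YUV matrix; the embodiment does not mandate specific conversion coefficients, so treat these values as an assumption made for this sketch:

```python
# Hypothetical RGB -> YUV conversion (full-range BT.601-style coefficients),
# applied per point before filter coefficient derivation.
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.1687 * r - 0.3313 * g + 0.5 * b + 128.0
    v = 0.5 * r - 0.4187 * g - 0.0813 * b + 128.0
    return y, u, v
```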
- S403 Use the first filter coefficient to filter the reconstructed slice, and determine the filtered slice corresponding to the reconstructed slice.
- the encoder may further use the first filter coefficient to determine the filtered slice corresponding to the reconstructed slice.
- filtering the reconstructed slice according to the first filter coefficient and determining the filtered slice corresponding to the reconstructed slice may include:
- the first point represents a point in the reconstructed slice.
- the K 1 target points corresponding to the first point include the first point and the (K 1 -1) neighbor points adjacent to the first point in the reconstructed slice.
- K 1 is an integer greater than 1.
- the (K 1 -1) neighbor points here specifically refer to the (K 1 -1) neighbor points with the closest geometric distance to the first point.
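The selection of the K 1 target points described above (the first point plus its geometrically nearest neighbors) can be sketched as follows; brute-force search is shown only for clarity, and a practical codec would use a spatial index such as a KD-tree (the function name is an assumption for this sketch):

```python
# Illustrative sketch: the K1 target points of a "first point" are the point
# itself plus its (K1 - 1) geometrically nearest neighbours.
def k1_target_points(points, first_index, k1):
    """points: list of (x, y, z); returns indices of the K1 target points."""
    px, py, pz = points[first_index]

    def sq_dist(i):
        x, y, z = points[i]
        return (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2

    others = sorted((i for i in range(len(points)) if i != first_index),
                    key=sq_dist)
    return [first_index] + others[:k1 - 1]
```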
- determining the K 1 target points corresponding to the first point in the reconstructed slice may include:
- filtering the K 1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient and determining the filtered slice may include:
- the filtered slice is determined according to the filter value of the attribute information of at least one point in the reconstructed slice.
- specifically, when filtering the K 1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient, the method may include: performing filtering processing on the K 1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient vector corresponding to a color component, and obtaining the filtered value of the color component of the first point in the reconstructed slice. In this way, after determining the filtered value of the color component of at least one point in the reconstructed slice, the filtered slice can be determined based on the filtered value of the color component of the at least one point in the reconstructed slice.
- that is, the filtered value of a color component of each point in the reconstructed slice can be determined based on the first filter coefficient vector and the second attribute parameter corresponding to the color component; the filtered slice is then obtained based on the filtered values of the color component of each point in the reconstructed slice.
- it should be noted that when using the Wiener filter to filter the reconstructed slice, a noisy signal and an expected signal are required. Here, the reconstructed slice can be used as the noisy signal, and the initial slice corresponding to the reconstructed slice as the expected signal; therefore, the initial slice and the reconstructed slice can be input into the Wiener filter at the same time, that is, the input of the Wiener filter is the initial slice and the reconstructed slice, and the output of the Wiener filter is the first filter coefficient. After obtaining the first filter coefficient, the filtering of the reconstructed slice can be completed based on the first filter coefficient to obtain the corresponding filtered slice.
- the first filter coefficient is obtained based on the initial slice and the reconstructed slice; therefore, by applying the first filter coefficient to the reconstructed slice, the corresponding initial slice can be restored to the maximum extent.
- specifically, the filtered value corresponding to a color component can be determined based on the first filter coefficient vector corresponding to the color component and the second attribute parameter under the color component.
- for example, the first filter coefficient vector H(k) under the Y component is applied to the reconstructed slice, that is, to the second attribute parameter P(n,k), to obtain the filtered value R(n) of the Y component, as shown below:
  R(n) = Σ_k P(n,k)·H(k)
- the U component and the V component can be traversed according to the above method, and finally the filtered value under the U component and the filtered value under the V component can be determined; then the filtered values under all color components can be used to determine the filtered slice corresponding to the reconstructed slice.
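Applying the first filter coefficient vector of one color component to the reconstructed slice, as described above, is a per-point dot product of the coefficient vector H with that point's row of reconstructed target-point values P(n,k). A minimal sketch (function name assumed for illustration):

```python
# Sketch of the filtering step: the filtered value of each point under one
# colour component is the dot product of the coefficient vector H with the
# point's row of K1 reconstructed target-point values.
def apply_filter(P, H):
    """P: per-point rows of K1 reconstructed values; H: coefficient vector."""
    return [sum(h * p for h, p in zip(H, row)) for row in P]
```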
- S404 Determine the first filter identification information according to the filtered slice; wherein the first filter identification information indicates whether to perform filtering processing on the reconstructed slice.
- the encoder may further determine the first filter identification information based on the filtered slice.
- the first filtering identification information may indicate whether to perform filtering processing on the reconstructed slice; further, the first filtering identification information may also indicate which one or more color components in the reconstructed slice are to be filtered.
- the color component may include at least one of the following: a first color component, a second color component, and a third color component; wherein the component to be processed of the attribute information may be any one of the first color component, the second color component, and the third color component.
- a second schematic flowchart of an encoding method provided by an embodiment of the present application is shown.
- the determination of the first filter identification information may include the following steps S501 to S503:
- S501 Determine the first generation value of the to-be-processed component of the attribute information of the reconstructed slice, and determine the second-generation value of the to-be-processed component of the attribute information of the filtered slice.
- in some embodiments, determining the first generation value of the to-be-processed component of the attribute information of the reconstructed slice may include: using the rate-distortion cost method to calculate the cost value of the to-be-processed component of the attribute information of the reconstructed slice, and using the obtained first rate distortion value as the first generation value;
- determining the second generation value of the to-be-processed component of the attribute information of the filtered slice may include: using the rate-distortion cost method to calculate the cost value of the to-be-processed component of the attribute information of the filtered slice, and using the obtained second rate distortion value as the second generation value.
- the first filter identification information can be determined using a rate-distortion cost method.
- that is, the rate-distortion cost method is used to determine the first generation value corresponding to the to-be-processed component of the attribute information of the reconstructed slice and the second generation value corresponding to the to-be-processed component of the attribute information of the filtered slice; then, based on the comparison result of the first generation value and the second generation value, the first filter identification information of the component to be processed is determined.
- the cost value here may be a distortion value used for distortion measurement, or may be a rate-distortion cost result, etc., which is not specifically limited in the embodiment of the present application.
- embodiments of this application can also perform rate-distortion trade-offs on filtered slices and reconstructed slices at the same time.
- that is, the rate-distortion cost method can be used to calculate a rate distortion value that weighs the overall quality improvement against the increase in the code stream.
- the first rate distortion value and the second rate distortion value can respectively represent the rate distortion cost results of the reconstructed slice and the filtered slice under the same color component, and are used to represent the compression efficiency of the point cloud before and after filtering.
- the specific calculation formula is as follows:
  J = D + λ·R_i
- J is the rate distortion value
- D is the SSE of the initial slice and the reconstructed slice or the filtered slice, that is, the sum of squares of the corresponding point errors
- λ is a quantity related to the quantization parameter QP, whose value can be selected in the embodiment of this application
- R i is the code stream size of the color component, expressed in bits.
- in this way, after obtaining the reconstructed slice and the filtered slice, the first rate distortion value of the component to be processed of the reconstructed slice and the second rate distortion value of the component to be processed of the filtered slice can be calculated according to the above method and used as the first generation value and the second generation value; the first filter identification information of the component to be processed is then determined based on the comparison result between the two.
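The rate-distortion comparison above can be sketched as follows. The value of λ here is an illustrative assumption (the text only states that λ is related to QP), and the function names are invented for this sketch:

```python
# Sketch of the rate-distortion decision: J = D + lambda * R is computed for
# the reconstructed slice and for the filtered slice; the filter flag of a
# component is set only if filtering lowers the cost.
def rd_cost(sse, bits, lam):
    """J = D + lambda * R, with D the SSE against the initial slice."""
    return sse + lam * bits

def filter_flag(sse_rec, bits_rec, sse_flt, bits_flt, lam=0.85):
    j_rec = rd_cost(sse_rec, bits_rec, lam)   # first rate distortion value
    j_flt = rd_cost(sse_flt, bits_flt, lam)   # second rate distortion value
    return 1 if j_flt < j_rec else 0          # 1: filter, 0: do not filter
```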
- in some embodiments, determining the first generation value of the to-be-processed component of the attribute information of the reconstructed slice may include: using the rate-distortion cost method to calculate the cost value of the to-be-processed component of the attribute information of the reconstructed slice to obtain the first rate distortion value;
- the preset performance measurement index is used to calculate the performance value of the component to be processed of the attribute information of the reconstructed slice, and the first performance value is obtained; according to the first rate distortion value and the first performance value, determine the first generation value.
- Determining the second generation value of the to-be-processed component of the attribute information of the filtered slice may include: using a rate-distortion cost method to calculate the cost value of the to-be-processed component of the filtered slice's attribute information to obtain the second rate-distortion value;
- the preset performance measurement index is used to calculate the performance value of the component to be processed of the attribute information of the filtered slice to obtain the second performance value; the second generation value is determined based on the second rate distortion value and the second performance value.
- the first performance value and the second performance value can respectively represent the encoding and decoding performance of the reconstructed slice and the filtered slice under the same color component.
- for example, the first performance value may be the PSNR value of a color component of the points in the reconstructed slice, and the second performance value may be the PSNR value of a color component of the points in the filtered slice.
- that is to say, the embodiments of this application can not only consider the PSNR value to decide whether to perform filtering at the decoding end, but can also use the rate-distortion cost method to perform rate-distortion trade-offs before and after filtering. After obtaining the reconstructed slice and the filtered slice, the first performance value and the first rate distortion value of the to-be-processed component of the reconstructed slice can be calculated according to the above method to obtain the first generation value; the second performance value and the second rate distortion value of the to-be-processed component of the filtered slice can likewise be calculated to obtain the second generation value; the first filter identification information of the component to be processed is then determined based on the comparison result of the first generation value and the second generation value.
- S502 Determine the first filter identification information of the component to be processed based on the first-generation value and the second-generation value.
- determining the first filter identification information of the component to be processed based on the first generation value and the second generation value may include:
- that is, based on the comparison result of the first generation value and the second generation value, the value of the first filter identification information of the component to be processed can be determined to be the first value; or, the value of the first filter identification information of the component to be processed can be determined to be the second value.
- in this way, if the value of the first filter identification information of the component to be processed is the first value, it can be determined that the first filter identification information of the component to be processed indicates that the component to be processed of the attribute information of the reconstructed slice is to be filtered; or, if the value of the first filter identification information of the component to be processed is the second value, it can be determined that the first filter identification information of the component to be processed indicates that the component to be processed of the attribute information of the reconstructed slice is not to be filtered.
- determining the first filter identification information of the component to be processed based on the first generation value and the second generation value may include:
- if the second performance value is greater than the first performance value and the second rate distortion value is less than the first rate distortion value, it is determined that the value of the first filter identification information of the component to be processed is the first value; if the second performance value is less than the first performance value, it is determined that the value of the first filter identification information of the component to be processed is the second value.
- determining the first filter identification information of the component to be processed based on the first generation value and the second generation value may include:
- if the second performance value is greater than the first performance value and the second rate distortion value is less than the first rate distortion value, it is determined that the value of the first filter identification information of the component to be processed is the first value; if the second rate distortion value is greater than the first rate distortion value, it is determined that the value of the first filter identification information of the component to be processed is the second value.
- that is, it may be determined that the value of the first filter identification information of the component to be processed is the first value; alternatively, it may also be determined that the value of the first filter identification information of the component to be processed is the second value.
- in this way, if the value of the first filter identification information of the component to be processed is the first value, it can be determined that the first filter identification information of the component to be processed indicates that the component to be processed of the attribute information of the reconstructed slice is to be filtered; or, if the value of the first filter identification information of the component to be processed is the second value, it can be determined that the first filter identification information of the component to be processed indicates that the component to be processed of the attribute information of the reconstructed slice is not to be filtered.
- when the component to be processed is a color component, it can be determined based on the first rate distortion value and the second rate distortion value whether the rate distortion cost (RDCost) of one or more of the color components of the filtered slice has decreased compared with the reconstructed slice; alternatively, based on the first performance value, the first rate distortion value, the second performance value and the second rate distortion value, it can be measured whether the performance value of one or more of these color components of the filtered slice increases while the rate distortion cost decreases compared with the reconstructed slice, and it is then determined which one or more of these color components need to be filtered. The following uses RDCost to determine whether one or more of these color components need to be filtered as an example for a detailed introduction.
- in some embodiments, when the component to be processed is the first color component, determining the first filter identification information of the component to be processed based on the first generation value and the second generation value may include: if the rate distortion cost of the first color component increases after filtering, the value of the first filter identification information of the first color component is determined to be the second value.
- in some embodiments, when the component to be processed is the second color component, determining the first filter identification information of the component to be processed based on the first generation value and the second generation value may include: if the rate distortion cost of the second color component increases after filtering, the value of the first filter identification information of the second color component is determined to be the second value.
- in some embodiments, when the component to be processed is the third color component, determining the first filter identification information of the component to be processed based on the first generation value and the second generation value may include: if the rate distortion cost of the third color component increases after filtering, the value of the first filter identification information of the third color component is determined to be the second value.
- for example, when determining the first filter identification information of the Y component based on the first rate distortion value and the second rate distortion value, if the rate distortion cost result (RDCost) RDCost1 corresponding to the reconstructed slice is smaller than the RDCost2 corresponding to the filtered slice, that is, the rate distortion cost of the filtered Y component increases, then the filtering effect can be considered poor; at this time, the first filter identification information of the Y component will be set to indicate that the Y component is not to be filtered. Otherwise, the first filter identification information indicates filtering of the Y component.
- embodiments of the present application can traverse the U component and the V component according to the above method, and finally determine the first filter identification information of the U component and the first filter identification information of the V component, so as to use the first filter identification information of all color components to determine the final first filter identification information.
- S503 Obtain the first filter identification information according to the first filter identification information of the component to be processed.
- in some embodiments, determining the first filter identification information of the reconstructed slice according to the first filter identification information of the component to be processed may include: when the component to be processed is a color component, determining the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component; and obtaining the first filter identification information according to the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component.
- the filter identification information here can be in the form of an array, specifically a 1×3 array, composed of the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component.
- the first filter identification information can be represented by [y u v], where y represents the first filter identification information of the first color component, u represents the first filter identification information of the second color component, and v represents the third color component The first filter identification information.
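The [y u v] array and the whole-slice decision derived from it (filter the reconstructed slice if at least one component flag is the first value) can be sketched as follows; the function name and the use of 1/0 for the first and second values are assumptions for this sketch:

```python
# Sketch of assembling the first filter identification information as the
# 1x3 array [y, u, v] and of the whole-slice decision: the reconstructed
# slice is filtered if at least one component flag is set.
def first_filter_id(flag_y, flag_u, flag_v):
    flags = [flag_y, flag_u, flag_v]          # [y u v]
    slice_filtered = any(f == 1 for f in flags)
    return flags, slice_filtered
```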
- in some embodiments, the method may further include: if the value of the first filter identification information of the component to be processed is the first value, determining to perform filtering processing on the component to be processed of the attribute information of the reconstructed slice; if the value of the first filter identification information of the component to be processed is the second value, determining not to perform filtering processing on the component to be processed of the attribute information of the reconstructed slice.
- the attribute information takes color components as an example, and the components to be processed may include first color components, second color components, and third color components.
- the method may further include:
- if the value of the first filter identification information of the first color component is the first value, it is determined to perform filtering processing on the first color component of the reconstructed slice; if the value of the first filter identification information of the first color component is the second value, it is determined not to filter the first color component of the reconstructed slice; or,
- if the value of the first filter identification information of the second color component is the first value, it is determined to perform filtering processing on the second color component of the reconstructed slice; if the value of the first filter identification information of the second color component is the second value, it is determined not to filter the second color component of the reconstructed slice; or,
- if the value of the first filter identification information of the third color component is the first value, it is determined to perform filtering processing on the third color component of the reconstructed slice; if the value of the first filter identification information of the third color component is the second value, it is determined not to filter the third color component of the reconstructed slice.
- that is to say, if the first filter identification information corresponding to a certain color component is the first value, it can indicate that the color component is to be filtered; if the first filter identification information corresponding to a certain color component is the second value, it can indicate that the color component is not filtered.
- the method may also include:
- if at least one of the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component is the first value, it is determined that the first filter identification information indicates that the reconstructed slice is to be filtered;
- if all of the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component are the second value, it is determined that the first filter identification information indicates that the reconstructed slice is not to be filtered.
- that is to say, if the first filter identification information corresponding to all of these color components is the second value, it can be indicated that none of these color components is filtered, and it can be determined that the first filter identification information indicates that the reconstructed slice is not filtered; accordingly, if at least one of the first filter identification information of these color components is the first value, it can be indicated that at least one color component is filtered, and it can be determined that the first filter identification information indicates that the reconstructed slice is filtered.
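The [y u v] flag logic above can be sketched as follows. This is a hypothetical illustration only; the constant and function names are invented and not part of the described scheme.

```python
# Hypothetical sketch of the first filter identification information:
# a 1x3 array [y, u, v] of per-component flags, where the first value (1)
# means "filter this component" and the second value (0) means "do not".
# The reconstructed slice is filtered iff at least one flag is 1.

FIRST_VALUE, SECOND_VALUE = 1, 0

def slice_filter_flags(filter_y, filter_u, filter_v):
    """Pack the per-component decisions into the [y u v] array form."""
    return [FIRST_VALUE if f else SECOND_VALUE
            for f in (filter_y, filter_u, filter_v)]

def slice_is_filtered(flags):
    """The slice-level indication: filter iff any component flag is 1."""
    return any(f == FIRST_VALUE for f in flags)

flags = slice_filter_flags(True, False, True)  # the [1 0 1] example below
```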
- the first value and the second value are different, and the first value and the second value may be in parameter form or in numerical form.
- the first filter identification information corresponding to these color components may be parameters written in the profile, or may be the value of a flag, which is not specifically limited here.
- exemplarily, the first value can be set to 1 and the second value can be set to 0; or, the first value can be set to true and the second value to false; however, this is not specifically limited here.
- the first filter identification information may be encoded; if the first filter identification information indicates that the reconstructed slice is to be filtered, then the first filter coefficient also needs to be encoded, and the resulting encoded bits are written into the code stream.
- that is to say, after the encoder obtains the first filter identification information, if the first filter identification information indicates that the reconstructed slice is to be filtered, the first filter coefficient can also be selectively written into the code stream.
- exemplarily, the first filter identification information can be [1 0 1]. Here, the value of the first filter identification information of the first color component is 1, that is, the first color component of the reconstructed slice needs to be filtered; the value of the first filter identification information of the second color component is 0, that is, the second color component of the reconstructed slice does not need to be filtered; and the value of the first filter identification information of the third color component is 1, that is, the third color component of the reconstructed slice needs to be filtered. At this time, only the first filter coefficient vector corresponding to the first color component and the first filter coefficient vector corresponding to the third color component need to be written into the code stream; the first filter coefficient vector corresponding to the second color component does not need to be written into the code stream.
- the method may also include: if the first filter identification information indicates that the reconstructed slice is not to be filtered, the first filter coefficient is not encoded; at this time, only the first filter identification information is encoded, and the resulting encoded bits are written into the code stream.
- that is to say, the encoder may further determine the first filter identification information based on the reconstructed slice and the filtered slice. If the first filter identification information indicates that the reconstructed slice is filtered, both the first filter identification information and the first filter coefficient need to be written into the code stream; if the first filter identification information indicates that the reconstructed slice is not filtered, the first filter coefficient is not written into the code stream, and only the first filter identification information needs to be written into the code stream, for subsequent transmission to the decoder through the code stream.
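The encoder-side writing rule just described can be sketched as follows. This is a minimal illustrative sketch, not the actual bitstream syntax; the function name, the list-based "bitstream", and the tuple records are all invented for illustration.

```python
# Hypothetical sketch: the flag array is always written; a coefficient
# vector is written only for components whose flag is 1 (the first value).

def write_slice_filter_info(bitstream, flags, coeff_vectors):
    """flags: [y, u, v] flag array; coeff_vectors: one coefficient
    vector per color component, in the same Y/U/V order."""
    bitstream.append(("flags", list(flags)))
    for flag, coeffs in zip(flags, coeff_vectors):
        if flag == 1:  # this component is filtered: transmit its coefficients
            bitstream.append(("coeffs", list(coeffs)))
    return bitstream

# With flags [1 0 1], only the Y and V coefficient vectors are written.
stream = write_slice_filter_info([], [1, 0, 1], [[0.5], [0.9], [0.7]])
```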
- the method may also include: determining the predicted value of the attribute information of a point in the initial slice; determining the residual value of the attribute information of the point in the initial slice according to the original value and the predicted value of the attribute information of the point in the initial slice; and encoding the residual value of the attribute information of the point in the initial slice, writing the resulting encoded bits into the code stream. That is to say, after the encoder determines the residual value of the attribute information, it also needs to write the residual value of the attribute information into the code stream for subsequent transmission to the decoder through the code stream.
- in this way, for each reconstructed slice, a Wiener filtering operation can be performed, specifically as the operations shown in steps S401 to S407, until the operations on all of these multiple slices are completed.
- referring to FIG. 6, a third schematic flowchart of an encoding method provided by an embodiment of the present application is shown. As shown in FIG. 6, for the filtering processing of the entire reconstructed point cloud, the method may also include:
- S602 Filter the reconstructed point cloud according to the second filter coefficient, and determine the filtered point cloud corresponding to the reconstructed point cloud.
- S603 Determine the second filtering identification information according to the filtered point cloud; wherein the second filtering identification information indicates whether to perform filtering processing on the reconstructed point cloud.
- S606 Write the obtained encoded bits into the code stream.
- the second filter coefficient can be determined based on the reconstructed point cloud and the initial point cloud corresponding to the reconstructed point cloud. It should be noted that the points in the initial point cloud and the points in the reconstructed point cloud have a corresponding relationship.
- the K 2 target points corresponding to a point in the reconstructed point cloud can include the current point and the (K 2 -1) neighbor points adjacent to the current point, where K 2 is an integer greater than 1.
- determining the second filter coefficient based on the reconstructed point cloud and the initial point cloud corresponding to the reconstructed point cloud may include: determining the second filter coefficient according to the points in the initial point cloud and the K 2 target points corresponding to the points in the reconstructed point cloud.
- determining the K 2 target points corresponding to the points in the reconstructed point cloud may include:
- the first point can be a point in the reconstructed point cloud.
- that is to say, the K 2 target points include, in addition to the first point itself, the (K 2 -1) neighbor points with the closest geometric distance to the first point in the reconstructed point cloud; these altogether constitute the K 2 target points corresponding to the first point.
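The selection of the K 2 target points can be sketched as follows. This is an illustrative brute-force sketch (the function name and data layout are invented); a real encoder would typically use an accelerated nearest-neighbor structure rather than an exhaustive scan.

```python
# Hypothetical sketch: the K2 target points for a point are the point
# itself plus its (K2-1) geometrically nearest neighbors.

def target_points(points, index, k2):
    """points: list of (x, y, z) tuples; returns the indices of the K2
    target points for the point at `index`."""
    px, py, pz = points[index]

    def sqdist(j):
        # Squared Euclidean distance to the first point.
        x, y, z = points[j]
        return (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2

    # Sort all other points by geometric distance and keep the closest K2-1.
    neighbors = sorted((j for j in range(len(points)) if j != index), key=sqdist)
    return [index] + neighbors[:k2 - 1]
```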
- the filter here can also be an adaptive filter, for example, it can be a filter based on a neural network, a Wiener filter, etc., which is not specifically limited here.
- the Wiener filter is taken as an example.
- the main function of the Wiener filter is to calculate the filter coefficient and at the same time determine whether the quality of the point cloud after Wiener filtering has been improved.
- the second filter coefficient here may also be a coefficient used for Wiener filter processing, that is, it may also be called a coefficient output by the Wiener filter.
- determining the second filter coefficient according to the reconstructed point cloud and the initial point cloud corresponding to the reconstructed point cloud may include: inputting the reconstructed point cloud and the initial point cloud corresponding to the reconstructed point cloud into the Wiener filter for calculation, and outputting the second filter coefficient.
- further, inputting the reconstructed point cloud and the initial point cloud corresponding to the reconstructed point cloud into the Wiener filter for calculation, and outputting the second filter coefficient, may include: determining the third attribute parameter according to the original value of the attribute information of at least one point in the initial point cloud; determining the fourth attribute parameter according to the reconstructed values of the attribute information of the K 2 target points corresponding to at least one point in the reconstructed point cloud; and determining the second filter coefficient according to the third attribute parameter and the fourth attribute parameter.
- determining the second filter coefficient based on the third attribute parameter and the fourth attribute parameter may include:
- determining the cross-correlation parameter according to the third attribute parameter and the fourth attribute parameter; determining the autocorrelation parameter according to the fourth attribute parameter; and performing coefficient calculation based on the cross-correlation parameter and the autocorrelation parameter to obtain the second filter coefficient.
- taking the color component of the attribute information as an example, if the color component conforms to the YUV space, then when determining the second filter coefficient based on the initial point cloud and the reconstructed point cloud, the third attribute parameter and the fourth attribute parameter of each color component (such as the Y component, U component, and V component) can first be determined based on the original values and reconstructed values of that color component, and then the second filter coefficient used for filtering can be further determined based on the third attribute parameter and the fourth attribute parameter.
- the third attribute parameter is determined based on the original value of the attribute information of at least one point in the initial point cloud; the fourth attribute parameter is determined based on the reconstructed values of the attribute information of the K 2 target points corresponding to at least one point in the reconstructed point cloud, where the K 2 target points include the current point and the (K 2 -1) neighbor points adjacent to the current point.
- the order of the Wiener filter is also involved.
- the order of the Wiener filter can be set equal to M.
- the values of M and K 2 may be the same or different, and are not specifically limited here.
- specifically, for a color component, the third attribute parameter of the color component can first be determined based on the original values of the color component of the points in the initial point cloud; at the same time, the fourth attribute parameter of the color component can be determined based on the reconstructed values of the color component of the K 2 target points corresponding to the points in the reconstructed point cloud; finally, the second filter coefficient vector corresponding to the color component can be determined based on the third attribute parameter and the fourth attribute parameter.
- the corresponding third attribute parameter and fourth attribute parameter are calculated for a color component, and the second filter coefficient vector corresponding to the color component is determined using the third attribute parameter and the fourth attribute parameter.
- in this way, the second filter coefficient vector of each color component can be obtained, so that the second filter coefficient can be determined based on the second filter coefficient vectors of all the color components.
- specifically, the cross-correlation parameter corresponding to the color component can first be determined based on the third attribute parameter and the fourth attribute parameter corresponding to the color component; at the same time, the autocorrelation parameter corresponding to the color component can be determined based on the fourth attribute parameter; then the second filter coefficient vector corresponding to the color component can be determined based on the cross-correlation parameter and the autocorrelation parameter; finally, all color components can be traversed, and the second filter coefficient can be determined using the second filter coefficient vectors corresponding to all the color components.
- K 2 target points may include the point itself and (K 2 -1) neighboring points adjacent to it.
- taking the Y component as an example, assuming the point cloud sequence is n, S(n) is the third attribute parameter composed of the original values of the Y components of all points in the initial point cloud, and P(n,k) is the fourth attribute parameter composed of the reconstructed values of the Y components of the K 2 target points corresponding to all points in the reconstructed point cloud.
- the cross-correlation parameter B(k) is calculated according to the third attribute parameter S(n) and the fourth attribute parameter P(n,k), as shown in the above formula (10); the autocorrelation parameter A(k,k) is calculated according to the fourth attribute parameter P(n,k), as shown in the above formula (11); according to the Wiener-Hopf equation, the optimal coefficient under the Y component, that is, the second filter coefficient vector H(k), is obtained as shown in the above formula (12).
- then the U component and V component can be traversed according to the above method, and the second filter coefficient vector under the U component and the second filter coefficient vector under the V component are finally determined; the second filter coefficient can then be determined using the second filter coefficient vectors under all the color components.
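The per-component computation of formulas (10) to (12) can be sketched as follows. This is an illustrative sketch, not the reference implementation: the function name is invented, and plain Gaussian elimination stands in for whatever linear solver an encoder would actually use to solve the Wiener-Hopf system A·H = B.

```python
# Hypothetical sketch for one color component:
#   S: length-N list of original values (third attribute parameter),
#   P: N x K2 matrix of reconstructed values of each point's K2 target
#      points (fourth attribute parameter).
# Returns the K2 optimal coefficients H(k).

def wiener_coefficients(S, P):
    n, k2 = len(P), len(P[0])
    # Autocorrelation A(k, k') = sum_n P(n,k) * P(n,k')   -- formula (11)
    A = [[sum(P[i][k] * P[i][kk] for i in range(n)) for kk in range(k2)]
         for k in range(k2)]
    # Cross-correlation B(k) = sum_n S(n) * P(n,k)        -- formula (10)
    B = [sum(S[i] * P[i][k] for i in range(n)) for k in range(k2)]
    # Solve A H = B (Wiener-Hopf equation)                -- formula (12)
    for col in range(k2):  # forward elimination with partial pivoting
        piv = max(range(col, k2), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        B[col], B[piv] = B[piv], B[col]
        for r in range(col + 1, k2):
            f = A[r][col] / A[col][col]
            for c in range(col, k2):
                A[r][c] -= f * A[col][c]
            B[r] -= f * B[col]
    H = [0.0] * k2  # back substitution
    for r in range(k2 - 1, -1, -1):
        H[r] = (B[r] - sum(A[r][c] * H[c] for c in range(r + 1, k2))) / A[r][r]
    return H
```

The same routine is run once per color component (Y, U, V) to obtain the three second filter coefficient vectors.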
- it should be noted that the process of determining the second filter coefficient is also based on the YUV color space; if the initial point cloud or the reconstructed point cloud does not conform to the YUV color space (for example, it is in the RGB color space), then color space conversion also needs to be performed so that it conforms to the YUV color space.
- the method may also include: if the color components of the points in the initial point cloud conform to the RGB color space, then performing color space conversion on the initial point cloud so that the color components of the points in the initial point cloud conform to the YUV color space; if the color components of the points in the reconstructed point cloud conform to the RGB color space, then perform color space conversion on the reconstructed point cloud so that the color components of the points in the reconstructed point cloud conform to the YUV color space.
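The RGB-to-YUV conversion mentioned above can be sketched as follows. The document does not fix a particular conversion matrix; a common BT.601 full-range mapping is used here purely for illustration, and the function name is invented.

```python
# Hypothetical sketch: convert one point's color from RGB to YUV so the
# Wiener filtering can operate in the YUV space. BT.601 full-range
# coefficients are an assumption, not mandated by the text.

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128
    return y, u, v
```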
- in this way, the third attribute parameter and the fourth attribute parameter corresponding to each color component can be determined based on the color components of the points in the initial point cloud and the reconstructed point cloud; then the second filter coefficient vector corresponding to each color component can be determined; finally, the second filter coefficient of the reconstructed point cloud can be determined using the second filter coefficient vectors corresponding to all the color components.
- the encoder may further use the second filter coefficient to determine the filtered point cloud corresponding to the reconstructed point cloud.
- filtering the reconstructed point cloud according to the second filter coefficient and determining the filtered point cloud corresponding to the reconstructed point cloud may include:
- filtering the K 2 target points corresponding to the first point in the reconstructed point cloud according to the second filter coefficient, and determining the filtered point cloud; wherein the first point represents a point in the reconstructed point cloud, the K 2 target points corresponding to the first point include the first point and the (K 2 -1) neighbor points adjacent to the first point in the reconstructed point cloud, and K 2 is an integer greater than 1.
- the (K 2 -1) neighbor points here specifically refer to the (K 2 -1) neighbor points with the closest geometric distance to the first point in the reconstructed point cloud.
- in other words, it is first necessary to determine the K 2 target points corresponding to the first point in the reconstructed point cloud. Specifically, taking the first point as an example, the K-nearest-neighbor search method can be used to search a preset number of candidate points in the reconstructed point cloud, the distance values between the first point and these candidate points are calculated, and then the (K 2 -1) candidate points with the closest geometric distance to the first point are selected from these candidate points as its neighbor points.
- filtering the K 2 target points corresponding to the first point in the reconstructed point cloud according to the second filter coefficient and determining the filtered point cloud may include:
- the filtered point cloud is determined according to the filter value of the attribute information of at least one point in the reconstructed point cloud.
- specifically, for a color component, filtering is performed on the K 2 target points corresponding to the first point in the reconstructed point cloud according to the second filter coefficient vector of that color component, and the filter value of the color component of the first point in the reconstructed point cloud is obtained.
- the filtered point cloud can be determined based on the filter value of the color component of the at least one point in the reconstructed point cloud.
- that is to say, the filter value of the color component of each point in the reconstructed point cloud can be determined based on the second filter coefficient vector and the fourth attribute parameter corresponding to the color component; the filtered point cloud is then obtained based on the filter values of the color components of each point in the reconstructed point cloud.
- it should also be noted that when using the Wiener filter to filter the reconstructed point cloud, the reconstructed point cloud can be used as the noisy signal, and the initial point cloud corresponding to the reconstructed point cloud can be used as the expected signal; therefore, the initial point cloud and the reconstructed point cloud can be input into the Wiener filter at the same time, that is, the input of the Wiener filter is the initial point cloud and the reconstructed point cloud, and the output of the Wiener filter is the second filter coefficient; after obtaining the second filter coefficient, the filtering processing of the reconstructed point cloud can be completed based on the second filter coefficient to obtain the corresponding filtered point cloud.
- in this way, since the second filter coefficient is obtained based on the initial point cloud and the reconstructed point cloud, applying the second filter coefficient to the reconstructed point cloud can restore the corresponding initial point cloud to the maximum extent.
- specifically, for each color component, the filter value corresponding to the color component can be determined based on the second filter coefficient vector corresponding to the color component and the fourth attribute parameter under the color component.
- taking the Y component as an example, the second filter coefficient vector H(k) under the Y component is applied to the reconstructed point cloud, that is, to the fourth attribute parameter P(n,k), to obtain the filter value R(n) under the Y component, as shown in the above formula (13). The U component and V component can then be traversed according to the above method, and the filter value under the U component and the filter value under the V component are finally determined; the filtered point cloud corresponding to the reconstructed point cloud can then be determined using the filter values under all the color components.
- the encoder can also determine the second filter identification information based on the filtered point cloud.
- the second filtering identification information may indicate whether to perform filtering processing on the reconstructed point cloud; further, the second filtering identification information may also indicate which color component or color components in the reconstructed point cloud are subjected to filtering processing.
- the color component may include at least one of the following: a first color component, a second color component, and a third color component; wherein the component to be processed of the attribute information may be any one of the first color component, the second color component, and the third color component.
- determining the second filter identification information based on the filtered point cloud may include:
- determining the third cost value of the component to be processed of the attribute information of the reconstructed point cloud; determining the fourth cost value of the component to be processed of the attribute information of the filtered point cloud; and obtaining the second filter identification information of the component to be processed according to the third cost value and the fourth cost value.
- in some embodiments, determining the third cost value of the component to be processed of the attribute information of the reconstructed point cloud may include: using a rate-distortion cost method to perform cost value calculation on the component to be processed of the attribute information of the reconstructed point cloud, and using the obtained third rate-distortion value as the third cost value. Determining the fourth cost value of the component to be processed of the attribute information of the filtered point cloud may include: using the rate-distortion cost method to perform cost value calculation on the component to be processed of the attribute information of the filtered point cloud, and using the obtained fourth rate-distortion value as the fourth cost value.
- that is to say, the second filter identification information can be determined using a rate-distortion cost method. Specifically, the rate-distortion cost method is used to determine the third cost value corresponding to the component to be processed of the attribute information of the reconstructed point cloud and the fourth cost value corresponding to the component to be processed of the attribute information of the filtered point cloud; then the second filter identification information of the component to be processed is determined based on the comparison result between the third cost value and the fourth cost value.
- the cost value here may be a distortion value used for distortion measurement, or may be a rate-distortion cost result, etc., which is not specifically limited in the embodiment of the present application.
- embodiments of this application can also perform rate-distortion trade-offs on the filtered point cloud and the reconstructed point cloud at the same time.
- specifically, the rate-distortion cost method can be used to calculate the rate-distortion value after the overall quality is improved and the code stream is increased.
- the third rate distortion value and the fourth rate distortion value can respectively represent the rate distortion cost results of the reconstructed point cloud and the filtered point cloud under the same color component, and are used to represent the compression efficiency of the point cloud before and after filtering.
- the specific calculation is as shown in the above equation (14), where J is the rate-distortion value; D is the SSE between the initial point cloud and the reconstructed point cloud or the filtered point cloud, that is, the sum of squares of the errors of corresponding points; λ is a quantity related to the quantization parameter QP, which can be selected in the embodiment of this application; and R i is the code stream size of the color component, expressed in bits.
- in this way, the third rate-distortion value of the component to be processed of the reconstructed point cloud and the fourth rate-distortion value of the component to be processed of the filtered point cloud can be calculated according to the above method and used as the third cost value and the fourth cost value respectively; the second filter identification information of the component to be processed is then determined based on the comparison result between the two.
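The rate-distortion comparison described by equation (14) can be sketched as follows. This is an illustrative sketch: the function names are invented, and the third/fourth values are assumed to be 1/0 as in the earlier examples.

```python
# Hypothetical sketch of equation (14): J = D + lambda * R, where D is the
# SSE against the initial point cloud and R the component's bit cost.

def rd_cost(sse, lam, bits):
    return sse + lam * bits

def component_filter_flag(j_reconstructed, j_filtered,
                          third_value=1, fourth_value=0):
    """Filter the component only if filtering lowers the RD cost."""
    return third_value if j_filtered < j_reconstructed else fourth_value
```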
- in some embodiments, determining the third cost value of the component to be processed of the attribute information of the reconstructed point cloud may include: using the rate-distortion cost method to perform cost value calculation on the component to be processed of the attribute information of the reconstructed point cloud to obtain the third rate-distortion value; using a preset performance measurement indicator to perform performance value calculation on the component to be processed of the attribute information of the reconstructed point cloud to obtain the third performance value; and determining the third cost value according to the third rate-distortion value and the third performance value. Determining the fourth cost value of the component to be processed of the attribute information of the filtered point cloud may include: using the rate-distortion cost method to perform cost value calculation on the component to be processed of the attribute information of the filtered point cloud to obtain the fourth rate-distortion value; using the preset performance measurement indicator to perform performance value calculation on the component to be processed of the attribute information of the filtered point cloud to obtain the fourth performance value; and determining the fourth cost value according to the fourth rate-distortion value and the fourth performance value.
- the third performance value and the fourth performance value can respectively represent the encoding and decoding performance of the reconstructed point cloud and the filtered point cloud under the same color component.
- the third performance value may be the PSNR value of the color component of the point in the reconstructed point cloud
- the fourth performance value may be the PSNR value of the color component of the point in the point cloud after filtering.
- the embodiments of the present application can not only consider the PSNR value to decide whether to perform filtering at the decoding end, but can also use the rate-distortion cost method to perform rate-distortion trade-offs before and after filtering.
- that is to say, the third performance value and the third rate-distortion value of the component to be processed of the reconstructed point cloud can be calculated according to the above method to obtain the third cost value; the fourth performance value and the fourth rate-distortion value of the component to be processed of the filtered point cloud are calculated to obtain the fourth cost value; and the second filter identification information of the component to be processed is then determined based on the comparison result between the third cost value and the fourth cost value. In this way, not only is the improvement in attribute value quality considered, but the cost of writing the second filter coefficient and other information into the code stream is also calculated; the two are combined to determine whether the compression performance after filtering has been improved, thereby deciding whether the encoding end should transmit the second filter coefficient.
- further, in some embodiments, determining the second filter identification information of the component to be processed based on the third cost value and the fourth cost value may include: if the fourth cost value is less than the third cost value, determining that the value of the second filter identification information of the component to be processed is the third value; if the fourth cost value is greater than the third cost value, determining that the value of the second filter identification information of the component to be processed is the fourth value.
- that is to say, based on the comparison result, the value of the second filter identification information of the component to be processed can be determined to be the third value; or, the value of the second filter identification information of the component to be processed can be determined to be the fourth value.
- specifically, if the fourth rate-distortion value is less than the third rate-distortion value, it is determined that the value of the second filter identification information of the component to be processed is the third value; if the fourth rate-distortion value is greater than the third rate-distortion value, it is determined that the value of the second filter identification information of the component to be processed is the fourth value.
- alternatively, if the fourth performance value is greater than the third performance value and the fourth rate-distortion value is less than the third rate-distortion value, it is determined that the value of the second filter identification information of the component to be processed is the third value; if the fourth performance value is less than the third performance value or the fourth rate-distortion value is greater than the third rate-distortion value, it is determined that the value of the second filter identification information of the component to be processed is the fourth value.
- in this way, the value of the second filter identification information of the component to be processed can be determined to be either the third value or the fourth value.
- here, if the value of the second filter identification information of the component to be processed is the third value, it can be determined that the second filter identification information of the component to be processed indicates that the component to be processed of the attribute information of the reconstructed point cloud is to be filtered; or, if the value of the second filter identification information of the component to be processed is the fourth value, it can be determined that the second filter identification information of the component to be processed indicates that the component to be processed of the attribute information of the reconstructed point cloud is not to be filtered.
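The combined performance-and-cost decision just described can be sketched as follows. This is an illustrative sketch with invented names; the performance value could for example be a PSNR, and the third/fourth values are assumed to be 1/0.

```python
# Hypothetical sketch: the flag is the third value (1) only when filtering
# both improves the performance value (e.g. PSNR, higher is better) AND
# lowers the rate-distortion cost; otherwise it is the fourth value (0).

def second_flag_combined(perf3, rd3, perf4, rd4,
                         third_value=1, fourth_value=0):
    """perf3/rd3: reconstructed point cloud; perf4/rd4: filtered one."""
    if perf4 > perf3 and rd4 < rd3:
        return third_value
    return fourth_value
```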
- in this way, when the component to be processed is a color component, it can be determined based on the third rate-distortion value and the fourth rate-distortion value whether the rate-distortion cost (RDCost) of one or some of these color components has decreased compared with the reconstructed point cloud; or, it can also be measured based on the third performance value, the third rate-distortion value, the fourth performance value and the fourth rate-distortion value whether the compression performance of the filtered point cloud has been improved compared with the reconstructed point cloud.
- specifically, if the component to be processed is the first color component: if the fourth rate-distortion value is less than the third rate-distortion value, it is determined that the value of the second filter identification information of the first color component is the third value; if the fourth rate-distortion value is greater than the third rate-distortion value, it is determined that the value of the second filter identification information of the first color component is the fourth value. If the component to be processed is the second color component: if the fourth rate-distortion value is less than the third rate-distortion value, it is determined that the value of the second filter identification information of the second color component is the third value; otherwise it is the fourth value. If the component to be processed is the third color component: if the fourth rate-distortion value is less than the third rate-distortion value, it is determined that the value of the second filter identification information of the third color component is the third value; otherwise it is the fourth value.
- taking the rate distortion cost (RDCost) of the Y component as an example, when determining the second filter identification information of the Y component based on the third rate distortion value and the fourth rate distortion value: if the RDCost1 corresponding to the reconstructed point cloud is less than the RDCost2 corresponding to the filtered point cloud, that is, the rate distortion cost of the filtered Y component increases, the filtering effect can be considered poor, and the second filter identification information of the Y component is set to indicate that the Y component is not to be filtered; conversely, if the RDCost1 corresponding to the reconstructed point cloud is greater than the RDCost2 corresponding to the filtered point cloud, that is, the rate distortion cost of the Y component after filtering decreases, the filtering effect can be considered good, and the second filter identification information of the Y component is set to indicate that the Y component is to be filtered. In this way, embodiments of the present application can traverse the U component and the V component according to the above method, finally determine the second filter identification information of the U component and the second filter identification information of the V component, and thus use the second filter identification information of all color components to determine the final second filter identification information.
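The per-component decision described above can be sketched as follows. This is a minimal illustration, not the standard's normative procedure: the names `decide_component_flag`, `rd_pairs`, and the concrete flag values 1/0 standing in for the third/fourth value are all assumptions for illustration.

```python
# Hedged sketch: set the second filter flag for one color component by
# comparing the rate-distortion cost before filtering (RDCost1) and after
# filtering (RDCost2). THIRD_VALUE / FOURTH_VALUE are illustrative
# stand-ins; the actual coded values are implementation-defined.

THIRD_VALUE = 1   # "filter this component"
FOURTH_VALUE = 0  # "do not filter this component"

def decide_component_flag(rd_cost_reconstructed: float,
                          rd_cost_filtered: float) -> int:
    """Return the filter flag for one color component: if filtering
    lowered the RD cost, it is considered beneficial (third value);
    otherwise the component is left unfiltered (fourth value)."""
    if rd_cost_filtered < rd_cost_reconstructed:
        return THIRD_VALUE
    return FOURTH_VALUE

# Traverse Y, U, V as the text describes to build the [Y U V] flag array
# (the RD costs below are made-up numbers for illustration).
rd_pairs = {"Y": (10.0, 9.2), "U": (5.0, 5.4), "V": (7.0, 6.1)}
flags = [decide_component_flag(r1, r2) for (r1, r2) in rd_pairs.values()]
print(flags)  # [1, 0, 1]: filtering helped Y and V but not U
```

The same loop structure applies regardless of which distortion measure (e.g. PSNR-based performance value plus RD cost) feeds the comparison.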
- determining the second filter identification information of the reconstructed point cloud according to the second filter identification information of the component to be processed may include: when the component to be processed is a color component, determining the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component; and obtaining the second filter identification information according to the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component.
- the second filter identification information here can also be in the form of an array, specifically a 1×3 array composed of the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component.
- the second filter identification information can be represented by [Y U V], where Y represents the second filter identification information of the first color component, U represents the second filter identification information of the second color component, and V represents the third color component The second filter identification information.
- since the second filter identification information includes filter identification information for each color component, the second filter identification information can be used not only to determine whether to filter the reconstructed point cloud, but also to determine which specific color components are filtered. Therefore, in some embodiments, the method may further include: if the value of the second filter identification information of the component to be processed is the third value, determining to perform filtering processing on the to-be-processed component of the attribute information of the reconstructed point cloud; if the value of the second filter identification information of the component to be processed is the fourth value, determining not to perform filtering processing on the to-be-processed component of the attribute information of the reconstructed point cloud.
- the attribute information takes color components as an example, and the components to be processed may include first color components, second color components, and third color components.
- the method may further include:
- if the value of the second filter identification information of the first color component is the third value, it is determined to perform filtering processing on the first color component of the reconstructed point cloud; if the value of the second filter identification information of the first color component is the fourth value, it is determined not to filter the first color component of the reconstructed point cloud; or,
- if the value of the second filter identification information of the second color component is the third value, it is determined to perform filtering processing on the second color component of the reconstructed point cloud; if the value of the second filter identification information of the second color component is the fourth value, it is determined not to filter the second color component of the reconstructed point cloud; or,
- if the value of the second filter identification information of the third color component is the third value, it is determined to perform filtering processing on the third color component of the reconstructed point cloud; if the value of the second filter identification information of the third color component is the fourth value, it is determined not to filter the third color component of the reconstructed point cloud.
- that is, if the second filter identification information corresponding to a certain color component is the third value, it can indicate that the color component is to be filtered; if the second filter identification information corresponding to a certain color component is the fourth value, it can indicate that the color component is not to be filtered.
- the method may also include:
- if any one of the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component is the third value, it is determined that the second filter identification information indicates filtering of the reconstructed point cloud;
- if all of the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component are the fourth value, it is determined that the second filter identification information indicates that the reconstructed point cloud is not to be filtered.
- that is, if the second filter identification information corresponding to these color components is all the fourth value, it can be indicated that none of these color components is filtered, and it can be determined that the second filter identification information indicates that the reconstructed point cloud is not to be filtered; correspondingly, if at least one of the second filter identification information of these color components is the third value, it can be indicated that at least one color component is filtered, and it can be determined that the second filter identification information indicates filtering of the reconstructed point cloud.
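The any/all rule above can be written in a few lines. A minimal sketch, assuming illustrative flag values (`third_value=1`, `fourth_value=0` by default); the function name is an invention for this example:

```python
def cloud_filter_flag(component_flags, third_value=1, fourth_value=0):
    """Combine the per-component second filter flags into the overall flag:
    the reconstructed point cloud is filtered if at least one component's
    flag equals the third value; it is left unfiltered only when all three
    flags equal the fourth value."""
    if any(f == third_value for f in component_flags):
        return third_value
    return fourth_value

print(cloud_filter_flag([0, 0, 0]))  # 0: no component filtered
print(cloud_filter_flag([0, 1, 0]))  # 1: at least one component filtered
```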
- the third value and the fourth value are different, and the third value and the fourth value may be in parameter form or in numerical form.
- the second filter identification information corresponding to these color components may be parameters written in the profile, or may be the value of a flag, which is not specifically limited here.
- the first filter identification information and the second filter identification information have different expression forms.
- the first filter identification information can be [0 1 1], and the second filter identification information can be [0 2 2]; where the array [0 1 1] indicates that the second color component and the third color component in the reconstructed slice need to be filtered, and the array [0 2 2] indicates that the second color component and the third color component in the reconstructed point cloud need to be filtered.
- the second filter identification information can be encoded; if the second filter identification information indicates filtering of the reconstructed point cloud, then the second filter coefficients also need to be encoded, and the resulting encoded bits are written into the code stream. In addition, the second filter coefficients may be selectively written into the code stream.
- for example, if the value of the second filter identification information of the first color component is 0, there is no need to filter the first color component of the reconstructed point cloud; if the value of the second filter identification information of the second color component is 2, the second color component of the reconstructed point cloud needs to be filtered; if the value of the second filter identification information of the third color component is 2, the third color component of the reconstructed point cloud needs to be filtered. In this case, only the second filter coefficient vector corresponding to the second color component and the second filter coefficient vector corresponding to the third color component need to be written into the code stream; there is no need to write the second filter coefficient vector corresponding to the first color component into the code stream.
- the method may also include: if the second filter identification information indicates that the reconstructed point cloud is not to be filtered, then the second filter coefficient is not encoded, and only the second filter identification information is encoded at this time. Encode and write the resulting encoded bits into the code stream.
- that is to say, the encoder can further determine the second filter identification information based on the reconstructed point cloud and the filtered point cloud. If the second filter identification information indicates filtering of the reconstructed point cloud, then the second filter identification information and the second filter coefficients need to be written into the code stream; if the second filter identification information indicates that the reconstructed point cloud is not to be filtered, then there is no need to write the second filter coefficients into the code stream, and only the second filter identification information needs to be written into the code stream for subsequent transmission to the decoder.
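The selective writing of coefficient vectors can be sketched as follows. This is an illustration only: a plain Python list stands in for the real entropy-coded stream, and the flag value 2 meaning "filter" mirrors the [0 2 2] example above.

```python
def write_filter_info(bitstream, flags, coeff_vectors, third_value=2):
    """Append the per-component [Y U V] flags to the stream, then only
    those second filter coefficient vectors whose component flag
    indicates filtering. `bitstream` is a plain list standing in for a
    real entropy-coded stream."""
    bitstream.extend(flags)
    for flag, coeffs in zip(flags, coeff_vectors):
        if flag == third_value:
            bitstream.extend(coeffs)
    return bitstream

# [0 2 2] example from the text: the first color component's coefficient
# vector is skipped, the other two are written (coefficients are made up).
stream = write_filter_info([], [0, 2, 2],
                           [[0.5, 0.5], [0.9, 0.1], [0.8, 0.2]])
print(stream)  # [0, 2, 2, 0.9, 0.1, 0.8, 0.2]
```

When the overall flag indicates no filtering at all, only the flags themselves reach the stream, matching the "do not encode the second filter coefficient" case.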
- the method may also include: determining the predicted value of the attribute information of the points in the initial slice; determining the residual value of the attribute information of the points in the initial slice according to the original value and the predicted value of the attribute information; encoding the residual value of the attribute information of the points in the initial slice, and writing the resulting coded bits into the code stream. That is to say, after the encoder determines the residual value of the attribute information, it also needs to write the residual value of the attribute information into the code stream for subsequent transmission to the decoder.
- the method may also include:
- n initial slices are determined from the multiple initial slices, and the n initial slices are aggregated to obtain initial aggregated slices;
- if the third filter identification information indicates filtering of the reconstructed aggregated slice, the third filter coefficient is encoded;
- if the third filter identification information indicates that the reconstructed aggregated slice is not to be filtered, the third filter coefficient is not encoded; at this time, only the third filter identification information is encoded, and the resulting coded bits are written into the code stream.
- Wiener filtering can still be used to determine the third filter coefficient.
- the specific determination process is similar to the aforementioned determination process of the first filter coefficient and the second filter coefficient, and will not be described in detail here.
- the reconstructed aggregate slice is filtered according to the third filter coefficient.
- specifically, the K3 target points corresponding to the first point in the reconstructed aggregated slice are filtered according to the third filter coefficient to determine the filtered aggregated slice; the first point here represents a point in the reconstructed aggregated slice, and the K3 target points corresponding to the first point include the first point itself and the (K3-1) neighbor points adjacent to it in the reconstructed aggregated slice, where K3 is an integer greater than 1; these (K3-1) neighbor points specifically refer to the (K3-1) points in the reconstructed aggregated slice with the closest geometric distance to the first point.
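The target-point construction (the point itself plus its K-1 geometrically nearest neighbors, combined by the filter coefficients) can be sketched as below. This is an assumption-laden illustration: a brute-force nearest-neighbor search on `(x, y, z, attr)` tuples, with the Wiener coefficients taken as already derived; the function name and data layout are inventions for this example.

```python
import math

def filter_point(points, idx, coeffs, k=16):
    """Filter one attribute channel of point `idx` as a weighted sum over
    its K target points: the point itself plus its (K-1) geometrically
    nearest neighbors. `points` is a list of (x, y, z, attr) tuples and
    `coeffs` holds K filter coefficients (assumed already derived)."""
    px, py, pz, _ = points[idx]
    # Sort all points by Euclidean distance to the first point; the point
    # itself sorts first (distance 0), so order[:k] is exactly the K
    # target points described in the text.
    order = sorted(range(len(points)),
                   key=lambda j: math.dist((px, py, pz), points[j][:3]))
    targets = order[:k]
    return sum(c * points[j][3] for c, j in zip(coeffs, targets))

pts = [(0, 0, 0, 10.0), (1, 0, 0, 12.0), (0, 1, 0, 14.0), (5, 5, 5, 100.0)]
print(filter_point(pts, 0, [0.5, 0.25, 0.25], k=3))  # 11.5
```

A real implementation would use a spatial index (e.g. a k-d tree) rather than sorting every point, but the weighted-sum structure is the same.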
- n (1 ⁇ n ⁇ S, S is the total number of slices) slices can be first constructed into a relatively large point cloud.
- the quality enhancement processing method is then applied to the slice; thus, the number of transmitted filter coefficients can be reduced, and the compression efficiency can be improved; and this method can also be successfully implemented when S is not too large.
- the embodiments of the present application may also perform quality enhancement processing on reconstructed slices first, and then perform an overall quality enhancement processing on the entire reconstructed point cloud after all reconstructed slices are processed to achieve the purpose of secondary enhancement. Therefore, in some embodiments, the method may further include:
- if the fourth filter identification information indicates that the first filtered point cloud is not to be filtered, the fourth filter coefficient is not encoded; at this time, only the fourth filter identification information is encoded, and the resulting encoded bits are written into the code stream.
- Wiener filtering can still be used to determine the fourth filter coefficient.
- the specific determination process is similar to the aforementioned determination process of the first filter coefficient and the second filter coefficient, and will not be described in detail here.
- the first filtered point cloud is filtered according to the fourth filter coefficient.
- specifically, the K4 target points corresponding to the first point in the first filtered point cloud are filtered according to the fourth filter coefficient to determine the second filtered point cloud; the first point here represents a point in the first filtered point cloud, and the K4 target points corresponding to the first point include the first point itself and the (K4-1) neighbor points adjacent to it in the first filtered point cloud, where K4 is an integer greater than 1; these (K4-1) neighbor points specifically refer to the (K4-1) points in the first filtered point cloud with the closest geometric distance to the first point.
- K1, K2, K3 and K4 may be the same or different. For example, K1, K2, K3 and K4 can all be set to 16, but this is not specifically limited in this embodiment.
- the Wiener filter proposed in the embodiment of this application can be used within the prediction loop, that is, as an in-loop filter, in which case its output can serve as a reference for decoding subsequent point clouds; it can also be used outside the prediction loop, that is, as a post filter, in which case its output is not used as a reference for decoding subsequent point clouds. There is no specific limitation on this.
- when the Wiener filter proposed in the embodiment of the present application is an in-loop filter, the parameter information indicating the filtering process, such as the filter identification information, needs to be written into the code stream, and the filter coefficients also need to be written into the code stream.
- when the Wiener filter proposed in the embodiment of this application is a post-processing filter, the filter coefficients corresponding to the filter are located in a separate auxiliary information data unit (for example, supplementary enhancement information, SEI); if the decoder does not obtain the supplementary enhancement information, it will not filter the reconstructed point cloud. In short, the filter coefficients and other information corresponding to the filter are carried in an auxiliary information data unit.
- after determining to perform filtering, the parameter information indicating the filtering process, such as the filter identification information, needs to be written into the code stream, and the filter coefficients also need to be written into the code stream; correspondingly, after determining not to perform filtering, the parameter information indicating filtering, such as the filter identification information, may be omitted from the code stream, and the filter coefficients are likewise not written into the code stream.
- in short, the embodiments of this application propose a technology that adaptively performs Wiener filtering on the YUV components of the attribute information during encoding and decoding to better enhance the quality of the point cloud. Specifically, based on the number of points in the reconstructed point cloud, it can be decided whether to perform Wiener filtering on the entire reconstructed point cloud or to filter the divided slices separately. This not only ensures the point cloud compression efficiency to the greatest extent, but is also more universal; that is, this method has application value for any point cloud.
- the embodiment of the present application proposes an adaptive point cloud color quality improvement method, which is suitable for any point cloud sequence.
- the number of points in the point cloud is used as a priori knowledge, thereby achieving efficiency and generality.
- when applied to the entire point cloud, this method usually has the advantages of slightly better compression efficiency and slightly lower time complexity; the disadvantage is that it occupies a large amount of resources such as memory, and when memory is limited it is difficult for it to function properly on large point clouds. When applied to each slice, the compression efficiency improvement is slightly lower than the former because more information is transmitted, but this approach consumes fewer resources. Therefore, the preset threshold T on the number of points is set in advance; this value should be set according to the specific conditions of the current device, so as to maximize the improvement in compression efficiency. Accordingly, this technical solution is not limited to Wiener filtering operations, and can provide a solution for any method constrained by resources or the like.
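The adaptive choice described above reduces to a single comparison against the device-dependent threshold T. A minimal sketch (the function and return labels are illustrative, and the example thresholds are made-up numbers):

```python
def choose_filter_granularity(num_points, threshold_t):
    """Decide, from the point count, whether Wiener filtering is applied
    to the whole reconstructed point cloud or separately per divided
    slice. `threshold_t` is the device-dependent preset threshold T."""
    if num_points >= threshold_t:
        return "per-slice"   # large cloud: avoid memory overflow
    return "whole-cloud"     # small cloud: better efficiency, less overhead

print(choose_filter_granularity(2_000_000, 1_000_000))  # per-slice
print(choose_filter_granularity(300_000, 1_000_000))    # whole-cloud
```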
- This embodiment provides an encoding method, which is applied to the encoder.
- determine the reconstructed slice of the reconstructed point cloud when the reconstructed point cloud meets the preset conditions, determine the first filter coefficient according to the reconstructed slice and the initial slice corresponding to the reconstructed slice; filter the reconstructed slice according to the first filter coefficient, Determine the filtered slice corresponding to the reconstructed slice; determine the first filter identification information according to the filtered slice; wherein the first filter identification information indicates whether to filter the reconstructed slice; encode the first filter identification information; if the first filter If the identification information indicates that the reconstructed slice is filtered, the first filter coefficient is encoded; the resulting encoded bits are written into the code stream.
- that is to say, the encoding end performs filtering based on the divided reconstructed slices, and only after determining that a reconstructed slice needs to be filtered are the corresponding filter coefficients passed to the decoder. In this way, filtering based on reconstructed slices can not only avoid memory overflow due to limited memory resources when processing large point clouds, making it highly universal; it can also optimize the reconstructed point cloud, thereby improving the quality of the point cloud, saving bit rate, and thus improving encoding and decoding performance.
- the embodiment of the present application also provides a code stream, which may contain parameter information for determining the decoded point cloud; the parameter information includes at least one of the following: the residual value of the attribute information of the points in the initial slice, the first filter identification information, and the first filter coefficient.
- in this way, the decoder obtains the residual value of the attribute information of the points in the initial slice through decoding and then constructs the reconstructed slice; the decoder also obtains the first filter identification information through decoding and can thereby determine whether the reconstructed slice needs to be filtered; if the reconstructed slice needs to be filtered, the first filter coefficient can be obtained directly through decoding, and the reconstructed slice is filtered according to the first filter coefficient.
- furthermore, the parameter information may also include second filter identification information and second filter coefficients; after the decoder constructs the reconstructed point cloud, the second filter identification information can be obtained through decoding to determine whether the reconstructed point cloud needs to be filtered; if the reconstructed point cloud needs to be filtered, the second filter coefficients can also be obtained directly through decoding, and the reconstructed point cloud is filtered according to the second filter coefficients. In this way, this not only optimizes the reconstructed point cloud and improves the quality of the point cloud, but also solves the problem of memory overflow due to limited memory resources, and has universal applicability.
- FIG. 7 shows a schematic flowchart 1 of a decoding method provided by an embodiment of the present application.
- the method may include:
- S701 Analyze the code stream and determine the first filter identification information.
- S703 Filter the reconstructed slice according to the first filter coefficient, and determine the filtered slice corresponding to the reconstructed slice.
- the decoding method described in the embodiment of the present application specifically refers to the point cloud decoding method, which can be applied to a point cloud decoder (in the embodiment of the present application, it may be referred to as a "decoder" for short).
- the decoder may first parse the code stream to determine filter identification information, such as first filter identification information, second filter identification information, etc.
- the first filtering identification information may indicate whether to perform filtering processing on the reconstructed slice
- the second filtering identification information may indicate whether to perform filtering processing on the reconstructed point cloud.
- the first filter identification information may also indicate which color component or color components in the reconstructed slice are to be filtered
- the second filter identification information may also indicate which color component or color components in the reconstructed point cloud to be filtered. Perform filtering.
- in the reconstructed point cloud, for a point in the reconstructed slice, when decoding this point it can be taken as the point to be decoded in the reconstructed slice, and there are multiple decoded points surrounding it.
- for a point in the reconstructed slice, it corresponds to one piece of geometric information and one piece of attribute information; the geometric information represents the spatial position of the point, and the attribute information represents the attribute value (such as a color component value) of the point.
- the attribute information may include color components, specifically color information in any color space.
- the attribute information may be color information in RGB space, color information in YUV space, color information in YCbCr space, etc., which are not limited in the embodiments of this application.
- the color component may include at least one of the following: a first color component, a second color component, and a third color component.
- if the color component conforms to the RGB color space, it can be determined that the first color component, the second color component, and the third color component are the R component, the G component, and the B component; if the color component conforms to the YUV color space, it can be determined that the first color component, the second color component, and the third color component are the Y component, the U component, and the V component; if the color component conforms to the YCbCr color space, it can be determined that the first color component, the second color component, and the third color component are the Y component, the Cb component, and the Cr component.
- it should also be noted that the attribute information of a point can be a color component, or it can be reflectance, refractive index, or another attribute, which is not limited in the embodiments of the present application.
- further, the decoder can determine the residual value of the attribute information of the points in the initial slice by decoding the code stream, in order to construct the reconstructed slice. Therefore, in some embodiments, the method may further include:
- determining the reconstructed slice based on the reconstructed values of the attribute information of the points in the initial slice.
- for a point in the reconstructed slice, the predicted value and the residual value of the attribute information of the point can be determined first, and then the predicted value and the residual value are used to calculate the reconstructed value of the attribute information of the point, so as to construct the reconstructed slice. Specifically, when determining the predicted value of the attribute information of a point, the geometric information and attribute information of multiple target neighbor points of the point can be used, combined with the geometric information of the point, to predict the attribute information of the point and obtain the corresponding predicted value, after which the corresponding reconstructed value can be determined. In addition, after the reconstructed value of the attribute information of the point is determined, the point can be used as a nearest neighbor of subsequent points in the LOD, so that the reconstructed value of its attribute information can be used to continue attribute prediction for subsequent points, and the reconstructed slice can thus be constructed.
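The predicted-plus-residual step can be shown in one line per component. A deliberately simplified sketch: it ignores inverse quantization and clipping that a real decoder applies, and the [Y, U, V] layout and numbers are illustrative.

```python
def reconstruct_attribute(predicted, residual):
    """Reconstructed attribute value of a point = predicted value (derived
    from its target neighbor points) + residual value decoded from the
    code stream, per color component. Inverse quantization and clipping
    are omitted for brevity."""
    return [p + r for p, r in zip(predicted, residual)]

# Per-point [Y, U, V] example with made-up values.
print(reconstruct_attribute([100, 128, 128], [3, -2, 1]))  # [103, 126, 129]
```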
- the method may further include: after determining at least one reconstructed slice, performing aggregation processing on the at least one reconstructed slice to determine the reconstructed point cloud.
- the initial point cloud can be obtained directly through the point cloud reading function of the codec program, and the reconstructed point cloud is obtained after attribute encoding, attribute reconstruction, and geometric compensation.
- the reconstructed point cloud in the embodiment of the present application can be the reconstructed point cloud output after decoding, or can serve as a reference for decoding subsequent point clouds; in addition, the filtering here can be applied within the prediction loop, that is, as an in-loop filter, in which case the result can serve as a reference for decoding subsequent point clouds; it can also be applied outside the prediction loop, that is, as a post filter, in which case the result is not used as a reference for decoding subsequent point clouds; the embodiments of this application do not specifically limit this.
- the method may further include: determining parameter information of the reconstructed point cloud; and determining whether the reconstructed point cloud satisfies a preset condition based on the parameter information of the reconstructed point cloud.
- analyzing the code stream and determining the first filter identification information may include: when the reconstructed point cloud meets the preset conditions, parsing the code stream and determining the first filter identification information.
- determining the parameter information of the reconstructed point cloud may be determining the number of points in the reconstructed point cloud.
- the parameter information of the reconstructed point cloud (such as the number of points in the reconstructed point cloud) can be obtained by parsing the code stream, or can be determined based on the decoded geometric information. There is no limitation here. In this way, embodiments of the present application can determine whether the reconstructed point cloud meets the preset conditions based on the number of points in the reconstructed point cloud, and then determine whether to perform filtering processing on the entire reconstructed point cloud or on divided reconstruction slices.
- the method may also include:
- the number of points in the reconstructed point cloud is less than the preset threshold, it is determined that the reconstructed point cloud does not meet the preset conditions.
- the preset threshold here can be used to measure whether the reconstructed point cloud is a large point cloud, that is, whether the reconstructed point cloud is to be filtered as a whole or filtered according to the divided reconstructed slices. For example, if the number of points in the reconstructed point cloud is greater than or equal to the preset threshold, the reconstructed point cloud contains too many points; in order to avoid memory overflow when filtering it, it can be determined that the reconstructed point cloud meets the preset condition, and filtering is performed according to the divided reconstructed slices. If the number of points in the reconstructed point cloud is less than the preset threshold, the number of points in the reconstructed point cloud will not cause problems during filtering, so it is determined that the reconstructed point cloud does not meet the preset condition and the reconstructed point cloud is filtered as a whole; in this way, the point cloud compression efficiency can be ensured to the greatest extent and the method is more universal.
- the first filter identification information may include filter identification information of the component to be processed of the attribute information.
- parsing the code stream and determining the first filter identification information may include: parsing the code stream and determining the first filter identification information of the component to be processed; wherein, the component to be processed
- the first filtering identification information indicates whether to perform filtering processing on the to-be-processed component of the attribute information of the reconstructed slice.
- further, in some embodiments, parsing the code stream and determining the first filter identification information may also include: if the value of the first filter identification information of the component to be processed is the first value, determining that the to-be-processed component of the attribute information of the reconstructed slice is to be filtered; if the value of the first filter identification information of the component to be processed is the second value, determining that the to-be-processed component of the attribute information of the reconstructed slice is not to be filtered.
- parsing the code stream and determining the first filter identification information may include: parsing the code stream and determining the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component.
- the first filter identification information of the first color component indicates whether to perform filtering processing on the first color component of the attribute information of the reconstructed slice;
- the first filter identification information of the second color component indicates whether to perform filtering processing on the second color component of the attribute information of the reconstructed slice;
- the first filter identification information of the third color component indicates whether to perform filtering processing on the third color component of the attribute information of the reconstructed slice.
- the first filter identification information may be in the form of an array, specifically a 1×3 array, which consists of the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component.
- the first filter identification information can be represented by [y u v], where y represents the first filter identification information of the first color component, u represents the first filter identification information of the second color component, and v represents the third color component The first filter identification information.
- since the first filter identification information includes the first filter identification information corresponding to each color component, the first filter identification information can not only be used to determine whether to perform filtering processing on the reconstructed slice, but can also be used to determine which color component or components are to be filtered.
- if the value of the first filter identification information of the first color component is the first value, it is determined that the first color component of the attribute information of the reconstructed slice is to be filtered; if the value of the first filter identification information of the first color component is the second value, it is determined that the first color component of the attribute information of the reconstructed slice is not to be filtered; or, if the value of the first filter identification information of the second color component is the first value, it is determined that the second color component of the attribute information of the reconstructed slice is to be filtered; if the value of the first filter identification information of the second color component is the second value, it is determined that the second color component of the attribute information of the reconstructed slice is not to be filtered; or, if the value of the first filter identification information of the third color component is the first value, it is determined that the third color component of the attribute information of the reconstructed slice is to be filtered; if the value of the first filter identification information of the third color component is the second value, it is determined that the third color component of the attribute information of the reconstructed slice is not to be filtered. That is to say, in the embodiment of the present application, if the first filter identification information corresponding to a certain color component is the first value, it indicates that the color component is to be filtered; if the first filter identification information corresponding to a certain color component is the second value, it indicates that the color component is not to be filtered.
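The per-component interpretation above can be sketched in Python. This is an illustrative model only: it assumes the [y u v] array representation described in the embodiments, with the first value taken as 1 and the second value as 0 (one of the value choices the text mentions); the function names are hypothetical:

```python
# Hypothetical sketch: interpret the first filter identification information,
# modeled as a 1x3 array [y, u, v] where 1 (the first value) means "filter
# this color component" and 0 (the second value) means "do not filter".

def components_to_filter(flags):
    """Map the [y, u, v] identification array to the list of color
    components that should be filtered in the reconstructed slice."""
    names = ("Y", "U", "V")
    return [name for name, flag in zip(names, flags) if flag == 1]

def slice_needs_filtering(flags):
    """The slice is filtered if at least one component flag is the
    first value; it is skipped only when all flags are the second value."""
    return any(flag == 1 for flag in flags)

print(components_to_filter([0, 1, 1]))   # ['U', 'V']
print(slice_needs_filtering([0, 0, 0]))  # False
```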
- the method may also include:
- if at least one of the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component is the first value, it is determined that the first filter identification information indicates that the reconstructed slice is to be filtered;
- if all of the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component are the second value, it is determined that the first filter identification information indicates that the reconstructed slice is not to be filtered.
- in other words, if the first filter identification information corresponding to these color components are all the second value, it indicates that these color components are not to be filtered, and it can be determined that the first filter identification information indicates that the reconstructed slice is not to be filtered; accordingly, if at least one of the first filter identification information of these color components is the first value, it indicates that at least one color component is to be filtered, and it can be determined that the first filter identification information indicates that the reconstructed slice is to be filtered.
- the first value and the second value are different, and the first value and the second value may be in parameter form or in numerical form.
- the identification information corresponding to these color components may be parameters written in the profile, or may be the value of a flag, which is not specifically limited here.
- the first value can be set to 1, and the second value can be set to 0; or, the first value can also be set to true, and the second value can also be set to false; but there is no specific limitation here.
- after the decoder decodes the code stream and determines the first filter identification information, if the first filter identification information indicates that the reconstructed slice is to be filtered, the decoder can further decode to obtain the first filter coefficient used for the filtering process.
- the filter may be an adaptive filter, for example, it may be a filter based on a neural network, a Wiener filter, etc., which is not specifically limited here.
- taking the Wiener filter as an example, the first filter coefficients described in the embodiments of the present application can be used for Wiener filtering; that is, the first filter coefficients are the coefficients used in the Wiener filtering process.
- the Wiener filter is a linear filter that takes minimization of the mean square error as its optimality criterion: the square of the difference between its output and a given function (often called the expected output) is minimized, and through mathematical operations the problem can eventually be reduced to solving a Toeplitz equation. The Wiener filter is also called the least squares filter or the minimum mean square error filter.
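The least-squares derivation mentioned above can be made concrete with a small sketch. This is a hedged illustration, not the G-PCC implementation: it solves the normal equations (Pᵀ P) h = Pᵀ x for a 2-tap filter by Cramer's rule, whereas a real encoder would use a higher order and a general (e.g. Toeplitz-aware) solver. The function name `wiener_2tap` is hypothetical:

```python
# Hedged sketch: Wiener filter coefficients minimize the mean squared error
# between the filtered output and the expected output, which reduces to
# solving the normal equations (P^T P) h = P^T x. Shown here for a 2-tap
# filter so the 2x2 system can be solved in closed form.

def wiener_2tap(P, x):
    """Solve (P^T P) h = P^T x for a 2-tap filter by Cramer's rule.
    P: list of [p0, p1] observation rows (reconstructed values);
    x: expected outputs (original values)."""
    a00 = sum(r[0] * r[0] for r in P)
    a01 = sum(r[0] * r[1] for r in P)
    a11 = sum(r[1] * r[1] for r in P)
    b0 = sum(r[0] * xi for r, xi in zip(P, x))
    b1 = sum(r[1] * xi for r, xi in zip(P, x))
    det = a00 * a11 - a01 * a01
    return [(b0 * a11 - a01 * b1) / det, (a00 * b1 - a01 * b0) / det]

# Observations generated by the "true" coefficients [0.5, 0.5]:
P = [[2, 0], [0, 2], [2, 2], [4, 2]]
x = [1, 1, 2, 3]
print(wiener_2tap(P, x))  # [0.5, 0.5]
```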
- the first filter coefficient may refer to the first filter coefficient corresponding to the component to be processed.
- parsing the code stream and determining the first filter coefficient may include: if the value of the first filter identification information of the component to be processed is the first value, parsing the code stream to determine the first filter coefficient vector corresponding to the component to be processed.
- the color component here includes a first color component, a second color component, and a third color component; accordingly, the first filter coefficient may include the first filter coefficient vectors corresponding to these color components. Specifically, if the value of the first filter identification information of the first color component is the first value, the code stream is parsed to determine the first filter coefficient vector corresponding to the first color component; or, if the value of the first filter identification information of the second color component is the first value, the code stream is parsed to determine the first filter coefficient vector corresponding to the second color component; or, if the value of the first filter identification information of the third color component is the first value, the code stream is parsed to determine the first filter coefficient vector corresponding to the third color component.
- the method may further include: if the first filter identification information indicates that the reconstructed slice is not to be filtered, the step of parsing the code stream and determining the first filter coefficient is not performed.
- that is to say, when the first filter identification information indicates that the reconstructed slice is to be filtered, the decoder can decode to directly obtain the first filter coefficient. However, after the decoder decodes and determines the first filter identification information, if the first filter identification information indicates that a certain color component of the reconstructed slice is not to be filtered, the decoder does not need to decode to obtain the filter coefficient vector corresponding to that color component.
- filtering the reconstructed slice according to the first filter coefficient and determining the filtered slice corresponding to the reconstructed slice may include:
- the first point represents a point in the reconstructed slice.
- the K 1 target points corresponding to the first point include the first point and the (K 1 -1) neighbor points adjacent to the first point in the reconstructed slice.
- K 1 is an integer greater than 1.
- the (K 1 -1) neighbor points here specifically refer to the (K 1 -1) neighbor points with the closest geometric distance to the first point.
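The target-point gathering described above can be sketched as a brute-force nearest-neighbor search. This is an illustrative assumption of how the KNN step could work (the function name `knn_target_points` is hypothetical; a production decoder would use an accelerated spatial search):

```python
# Hypothetical sketch: gather the K1 target points for a first point as the
# point itself plus its (K1 - 1) geometrically nearest neighbors, found by
# a brute-force KNN search over the reconstructed slice.

def knn_target_points(points, first_idx, k1):
    """Return indices of the k1 target points for points[first_idx]:
    the point itself and its (k1 - 1) nearest neighbors by squared
    Euclidean distance over the geometry coordinates."""
    px, py, pz = points[first_idx]

    def sq_dist(i):
        x, y, z = points[i]
        return (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2

    others = sorted((i for i in range(len(points)) if i != first_idx), key=sq_dist)
    return [first_idx] + others[:k1 - 1]

pts = [(0, 0, 0), (1, 0, 0), (5, 5, 5), (0, 1, 0)]
print(knn_target_points(pts, 0, 3))  # [0, 1, 3]
```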
- determining the K 1 target points corresponding to the first point in the reconstructed slice may include:
- filtering the K 1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient and determining the filtered slice may include:
- the filtered slice is determined according to the filter value of the attribute information of at least one point in the reconstructed slice.
- taking the color component as an example of the attribute information, each color component of the reconstructed slice can be filtered separately; that is, the first filter coefficient vectors corresponding to the Y component, the U component, and the V component can be determined respectively, and these first filter coefficient vectors together constitute the first filter coefficients of the reconstructed slice.
- the first filter coefficient can be determined by a K 1 -order filter; in this way, when the decoder performs filtering, it can perform a KNN search for each point in the reconstructed slice to determine the K 1 target points corresponding to that point, and then use the first filter coefficient to filter the point.
- specifically, when filtering the K 1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient, the method may include: filtering the K 1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient vector of the color component, to obtain the filter value of the color component of the first point in the reconstructed slice. In this way, after determining the filter value of the color component of at least one point in the reconstructed slice, the filtered slice can be determined based on the filter values of the color component of the at least one point in the reconstructed slice.
- in other words, the filter value of the color component of each point in the reconstructed slice can be determined based on the filter coefficient vector and the second attribute parameter corresponding to the color component; the filtered slice is then obtained based on the filter values of the color component of each point in the reconstructed slice.
- specifically, the second attribute parameter corresponding to the attribute information is determined first; the filter value of the attribute information can then be determined based on the first filter coefficient and the second attribute parameter; finally, the filtered slice can be obtained based on the filter value of the attribute information.
- assuming the attribute information is a color component in the YUV space, the second attribute parameter (represented by P(n,k)) can first be determined based on the reconstructed value of the color component (such as the Y component, U component, or V component), combined with the order of the Wiener filter; that is, the second attribute parameter represents the reconstructed values of the color component of the K 1 target points corresponding to each point in the reconstructed slice.
- the filter type can be used to indicate the filter order, and/or filter shape, and/or filter dimension.
- filter shapes include diamonds, rectangles, etc.
- filter dimensions include one-dimensional, two-dimensional or even more dimensions. That is to say, in the embodiment of the present application, different filter types can correspond to Wiener filters of different orders.
- for example, the order can take values such as 12, 32, or 128; different filter types can also correspond to filters of different dimensions, such as one-dimensional filters or two-dimensional filters, which are not specifically limited here.
- in this way, the second attribute parameter of the color component and the first filter coefficient vector corresponding to the color component can be used to determine the filter value of the color component. After traversing all the color components and obtaining the filter value of each color component, the filtered slice can be obtained.
- in a specific example, the reconstructed slice and the first filter coefficient can be input into the Wiener filter at the same time; that is, the inputs of the Wiener filter are the first filter coefficient and the reconstructed slice.
- the filtering process of the reconstructed slice can be completed based on the first filter coefficient, and the corresponding filtered slice can be obtained.
- the first filter coefficient is obtained based on the initial slice and the reconstructed slice. Therefore, applying the first filter coefficient to the reconstructed slice can restore the initial slice to the maximum extent.
- assuming the order of the filter is K 1 and the point cloud sequence is n, the matrix P(n,k) represents the reconstructed values of the K 1 target points corresponding to all points in the reconstructed slice under the same color component (such as the Y component); that is, P(n,k) is the second attribute parameter composed of the reconstructed values of the Y component of the K 1 target points corresponding to all points in the reconstructed slice.
- the first filter coefficient vector H(k) under the Y component is applied to the reconstructed slice, that is, to the second attribute parameter P(n,k), and the filter value R(n) of the attribute information under the Y component can be obtained. Then, the U component and the V component can be traversed according to the above method to determine the filter value under the U component and the filter value under the V component. Finally, the filter values under all color components can be used to determine the filtered slice corresponding to the reconstructed slice.
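The application of H(k) to P(n,k) is a per-point weighted sum, which can be sketched as follows. The helper name `apply_filter` and the example values are illustrative, not taken from the specification:

```python
# Hedged sketch of the filtering step: applying the first filter coefficient
# vector H(k) of one color component (e.g. Y) to the second attribute
# parameter P(n, k) -- the reconstructed values of the K1 target points of
# each point n -- yields the filter values R(n).

def apply_filter(H, P):
    """R(n) = sum_k H[k] * P[n][k] for every point n."""
    return [sum(h * p for h, p in zip(H, row)) for row in P]

# 3-tap example: each row holds the Y reconstructed values of a point's
# 3 target points (the point itself first, then its 2 nearest neighbors).
H = [0.5, 0.25, 0.25]
P = [[100, 104, 96], [50, 54, 46]]
print(apply_filter(H, P))  # [100.0, 50.0]
```

The same routine would be run again with the U-component and V-component coefficient vectors to obtain the remaining filter values.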
- the decoder can use the filtered slice to cover the reconstructed slice.
- the method may further include: after determining at least one filtered slice of the reconstructed point cloud, performing aggregation processing on the at least one filtered slice to determine the filtered point cloud.
- in other words, the decoder performs a Wiener filtering operation on each slice by parsing the code stream, specifically the operations shown in steps S701 to S703; once the operations on all of the multiple slices are completed, multiple filtered slices can be obtained. By aggregating these filtered slices together, the filtered point cloud corresponding to the reconstructed point cloud can be obtained.
- Figure 8 shows a schematic flowchart 2 of a decoding method provided by an embodiment of the present application. As shown in Figure 8, the method may also include:
- S801: Parse the code stream and determine the second filter identification information.
- S802: When the second filter identification information indicates that the reconstructed point cloud is to be filtered, parse the code stream and determine the second filter coefficient.
- S803: Filter the reconstructed point cloud according to the second filter coefficient, and determine the filtered point cloud corresponding to the reconstructed point cloud.
- analyzing the code stream and determining the second filter identification information may include: when the reconstructed point cloud does not meet the preset conditions, parsing the code stream and determining the second filter identification information.
- determining the parameter information of the reconstructed point cloud may be determining the number of points in the reconstructed point cloud.
- the parameter information of the reconstructed point cloud (such as the number of points in the reconstructed point cloud) can be obtained by parsing the code stream, or can be determined based on the decoded geometric information. There is no limitation here. In this way, embodiments of the present application can determine whether the reconstructed point cloud meets the preset conditions based on the number of points in the reconstructed point cloud, and then determine whether to perform filtering processing on the entire reconstructed point cloud or on divided reconstruction slices.
- after parsing the code stream, if the second filter identification information is obtained, it means that the filtering process is performed on the entire reconstructed point cloud.
- the second filter identification information may also include filter identification information of the component to be processed of the attribute information.
- parsing the code stream and determining the second filter identification information may include: parsing the code stream and determining the second filter identification information of the component to be processed; wherein the second filter identification information of the component to be processed indicates whether to perform filtering processing on the component to be processed of the attribute information of the reconstructed point cloud.
- parsing the code stream and determining the second filter identification information may also include: if the value of the second filter identification information of the component to be processed is the third value, determining that the component to be processed of the attribute information of the reconstructed point cloud is to be filtered; if the value of the second filter identification information of the component to be processed is the fourth value, determining that the component to be processed of the attribute information of the reconstructed point cloud is not to be filtered.
- parsing the code stream and determining the second filter identification information may include: parsing the code stream and determining the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component.
- the second filter identification information of the first color component indicates whether to perform filtering processing on the first color component of the attribute information of the reconstructed point cloud;
- the second filter identification information of the second color component indicates whether to perform filtering processing on the second color component of the attribute information of the reconstructed point cloud;
- the second filter identification information of the third color component indicates whether to perform filtering processing on the third color component of the attribute information of the reconstructed point cloud.
- the second filter identification information may be in the form of an array, specifically a 1×3 array, which consists of the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component.
- the second filter identification information can be represented by [Y U V], where Y represents the second filter identification information of the first color component, U represents the second filter identification information of the second color component, and V represents the third color component The second filter identification information.
- since the second filter identification information includes the second filter identification information corresponding to each color component, the second filter identification information can not only be used to determine whether to filter the reconstructed point cloud, but can also be used to determine which color component or components are to be filtered.
- if the value of the second filter identification information of the first color component is the third value, it is determined that the first color component of the attribute information of the reconstructed point cloud is to be filtered; if the value of the second filter identification information of the first color component is the fourth value, it is determined that the first color component of the attribute information of the reconstructed point cloud is not to be filtered; or, if the value of the second filter identification information of the second color component is the third value, it is determined that the second color component of the attribute information of the reconstructed point cloud is to be filtered; if the value of the second filter identification information of the second color component is the fourth value, it is determined that the second color component of the attribute information of the reconstructed point cloud is not to be filtered; or, if the value of the second filter identification information of the third color component is the third value, it is determined that the third color component of the attribute information of the reconstructed point cloud is to be filtered; if the value of the second filter identification information of the third color component is the fourth value, it is determined that the third color component of the attribute information of the reconstructed point cloud is not to be filtered. That is to say, in the embodiment of the present application, if the second filter identification information corresponding to a certain color component is the third value, it indicates that the color component is to be filtered; if the second filter identification information corresponding to a certain color component is the fourth value, it indicates that the color component is not to be filtered.
- the method may also include:
- if at least one of the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component is the third value, it is determined that the second filter identification information indicates that the reconstructed point cloud is to be filtered;
- if all of the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component are the fourth value, it is determined that the second filter identification information indicates that the reconstructed point cloud is not to be filtered.
- in other words, if the second filter identification information corresponding to these color components are all the fourth value, it indicates that these color components are not to be filtered, and it can be determined that the second filter identification information indicates that the reconstructed point cloud is not to be filtered; accordingly, if at least one of the second filter identification information of these color components is the third value, it indicates that at least one color component is to be filtered, and it can be determined that the second filter identification information indicates that the reconstructed point cloud is to be filtered.
- the third value and the fourth value are different, and the third value and the fourth value may be in parameter form or in numerical form.
- the second filter identification information corresponding to these color components may be parameters written in the profile, or may be the value of a flag, which is not specifically limited here.
- it should be noted that the first filter identification information indicates whether to perform filtering processing on the reconstructed slice, while the second filter identification information indicates whether to perform filtering processing on the reconstructed point cloud; the first filter identification information and the second filter identification information are different.
- for example, the first filter identification information can be [0 1 1], and the second filter identification information can be [0 2 2]. If decoding obtains [0 1 1], it means that the second color component and the third color component in the reconstructed slice need to be filtered; if decoding obtains [0 2 2], it means that the second color component and the third color component in the reconstructed point cloud need to be filtered.
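The example above can be sketched as a small decoder-side helper. This models only the specific example values used here (1 for slice-level, 2 for point-cloud-level filtering); the function name `interpret_identification` and the mapping are illustrative assumptions:

```python
# Hypothetical sketch of the example above: the same decoded 1x3 array can
# indicate slice-level filtering (flags using value 1) or point-cloud-level
# filtering (flags using value 2); 0 means the component is not filtered.

def interpret_identification(flags):
    """Return (scope, components) for a decoded [y, u, v] array, where
    scope is 'slice' if any flag is 1 and 'point_cloud' if any flag is 2."""
    names = ("Y", "U", "V")
    if any(f == 1 for f in flags):
        scope = "slice"
        comps = [n for n, f in zip(names, flags) if f == 1]
    elif any(f == 2 for f in flags):
        scope = "point_cloud"
        comps = [n for n, f in zip(names, flags) if f == 2]
    else:
        scope, comps = "none", []
    return scope, comps

print(interpret_identification([0, 1, 1]))  # ('slice', ['U', 'V'])
print(interpret_identification([0, 2, 2]))  # ('point_cloud', ['U', 'V'])
```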
- after the decoder decodes the code stream and determines the second filter identification information, if the second filter identification information indicates that the reconstructed point cloud is to be filtered, the decoder can further decode to obtain the second filter coefficient used for the filtering process.
- the filter may be an adaptive filter, for example, it may be a filter based on a neural network, a Wiener filter, etc., which is not specifically limited here.
- taking the Wiener filter as an example, the second filter coefficients described in the embodiments of the present application can be used for Wiener filtering; that is, the second filter coefficients are the coefficients used in the Wiener filtering process.
- the second filter coefficient may refer to the second filter coefficient corresponding to the component to be processed.
- parsing the code stream and determining the second filter coefficient may include: if the value of the second filter identification information of the component to be processed is the third value, parsing the code stream to determine the second filter coefficient vector corresponding to the component to be processed.
- assuming the component to be processed is a color component, the color component here includes a first color component, a second color component, and a third color component; correspondingly, the second filter coefficient may include the second filter coefficient vectors corresponding to these color components. Specifically, if the value of the second filter identification information of the first color component is the third value, the code stream is parsed to determine the second filter coefficient vector corresponding to the first color component; or, if the value of the second filter identification information of the second color component is the third value, the code stream is parsed to determine the second filter coefficient vector corresponding to the second color component; or, if the value of the second filter identification information of the third color component is the third value, the code stream is parsed to determine the second filter coefficient vector corresponding to the third color component.
- the method may further include: if the second filter identification information indicates that the reconstructed point cloud is not to be filtered, the step of parsing the code stream and determining the second filter coefficient is not performed. That is to say, in this embodiment of the present application, when the second filter identification information indicates that the reconstructed point cloud is to be filtered, the decoder can decode to directly obtain the second filter coefficient. However, after the decoder decodes and determines the second filter identification information, if the second filter identification information indicates that a certain color component of the reconstructed point cloud is not to be filtered, the decoder does not need to decode to obtain the filter coefficient vector corresponding to that color component.
- filtering the reconstructed point cloud according to the second filter coefficient and determining the filtered point cloud corresponding to the reconstructed point cloud may include:
- the first point represents a point in the reconstructed point cloud
- the K 2 target points corresponding to the first point include the first point and the (K 2 -1) neighbors adjacent to the first point in the reconstructed point cloud.
- K 2 is an integer greater than 1.
- the (K 2 -1) neighboring points here specifically refer to the (K 2 -1) neighboring points with the closest geometric distance to the first point.
- determining the K 2 target points corresponding to the first point in the reconstructed point cloud may include:
- specifically, the K nearest neighbor (KNN) search method can be used to search a preset number of candidate points in the reconstructed point cloud, calculate the distance values between the first point and these candidate points, and then select from these candidate points the (K 2 -1) neighbor points with the closest geometric distance to the first point; in this way, the first point itself and the (K 2 -1) neighbor points with the closest geometric distance to the first point together constitute the K 2 target points corresponding to the first point in the reconstructed point cloud.
- filtering the K 2 target points corresponding to the first point in the reconstructed point cloud according to the second filter coefficient and determining the filtered point cloud includes:
- the filtered point cloud is determined based on the filter value of the attribute information of at least one point in the reconstructed point cloud.
- taking the color component as an example of the attribute information, each color component of the reconstructed point cloud can be filtered separately; that is, the second filter coefficient vectors corresponding to the Y component, the U component, and the V component can be determined respectively, and these second filter coefficient vectors together constitute the second filter coefficients of the reconstructed point cloud.
- the second filter coefficient can be determined by a K 2 -order filter; in this way, when the decoder performs filtering, it can perform a KNN search for each point in the reconstructed point cloud to determine the K 2 target points corresponding to that point, and then use the second filter coefficient to filter the point.
- it should also be noted that when using the second filter coefficient to filter the reconstructed point cloud, the second attribute parameter corresponding to the attribute information can first be determined using the order of the Wiener filter and the reconstructed values of the attribute information of the K 2 target points corresponding to each point in the reconstructed point cloud; the filter value of the attribute information can then be determined based on the second filter coefficient and the second attribute parameter; finally, the filtered point cloud can be obtained based on the filter value of the attribute information.
- Taking the attribute information as the color components of the YUV space as an example, the second attribute parameter (represented by P(n,k)) can first be determined based on the reconstructed values of the color components (such as the Y component, U component, and V component), combined with the order of the Wiener filter; that is, the second attribute parameter represents the reconstructed values of the color components of the K 2 target points corresponding to each point in the reconstructed point cloud.
- Here, the filter type can be used to indicate the filter order, and/or the filter shape, and/or the filter dimension. Filter shapes include diamonds, rectangles, and so on; filter dimensions include one dimension, two dimensions, or even more. That is to say, in the embodiment of the present application, different filter types can correspond to Wiener filters of different orders, for example Wiener filters whose order values are 12, 32, or 128; different types can also correspond to filters of different dimensions, such as one-dimensional filters, two-dimensional filters, and so on, which are not specifically limited here.
- In this way, the second attribute parameter of a color component and the second filter coefficient vector corresponding to that color component can be used to determine the filter value of that color component; after traversing all the color components and obtaining the filter value of each color component, the filtered point cloud can be obtained.
- In a specific example, the reconstructed point cloud and the second filter coefficient can be input into the Wiener filter at the same time; that is, the input of the Wiener filter is the second filter coefficient and the reconstructed point cloud. In this way, the filtering of the reconstructed point cloud can be completed based on the second filter coefficient, and the corresponding filtered point cloud can be obtained.
- the second filter coefficient is obtained based on the initial point cloud and the reconstructed point cloud. Therefore, applying the second filter coefficient to the reconstructed point cloud can restore the initial point cloud to the maximum extent.
- Assuming the order of the filter is K 2 and the point index is n, the second attribute parameter is represented by the matrix P(n,k): P(n,k) consists of the reconstructed values of the K 2 target points corresponding to all points in the reconstructed point cloud under the same color component (such as the Y component); that is, for the Y component, P(n,k) is the second attribute parameter composed of the reconstructed values of the Y component. Applying the second filter coefficient vector H(k) under the Y component to the second attribute parameter P(n,k) yields the filter value R(n) of the attribute information under the Y component. The U component and the V component can then be traversed in the same way to determine the filter value under the U component and the filter value under the V component; finally, the filter values under all color components can be used to determine the filtered point cloud corresponding to the reconstructed point cloud.
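- The filtering step above is simply a dot product between each row of P(n,k) and the coefficient vector H(k), i.e. R(n) = Σ_k P(n,k)·H(k); a minimal sketch for one color component, assuming numpy:

```python
import numpy as np

def apply_filter(P, H):
    """P: (N, K2) matrix of reconstructed values of the K2 target points of each point.
    H: (K2,)  second filter coefficient vector for one color component.
    Returns R: (N,) filter values for that component."""
    return P @ H

# sanity check: a filter that keeps only the point itself (first column)
P = np.array([[100.0, 90.0], [50.0, 60.0]])
H = np.array([1.0, 0.0])
assert np.array_equal(apply_filter(P, H), np.array([100.0, 50.0]))
```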
- After the decoder determines the filtered point cloud corresponding to the reconstructed point cloud, it can use the filtered point cloud to overwrite the reconstructed point cloud. Here, the quality of the filtered point cloud obtained after filtering is significantly enhanced; therefore, after obtaining the filtered point cloud, it can be used to overwrite the original reconstructed point cloud, thereby completing the entire encoding, decoding and quality enhancement operation.
- In some embodiments, this method can also include: if the color components of the filtered point cloud do not conform to the RGB color space (for example, they are in the YUV color space, the YCbCr color space, etc.), performing color space conversion on the filtered point cloud so that the color components of the filtered point cloud conform to the RGB color space.
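- The color space conversion step above can be sketched as follows; the coefficients are the common full-range BT.601 YUV-to-RGB values, used here purely as an illustrative assumption, since the actual matrix depends on which conversion was applied earlier in the pipeline:

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV triplet (0-255, U/V centered at 128)
    to an RGB triplet, clamped to [0, 255]."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b)

# a neutral gray stays gray under the conversion
assert yuv_to_rgb(128, 128, 128) == (128, 128, 128)
```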
- the method may also include:
- If the third filtering identification information indicates that the reconstructed aggregate slice is to be filtered, the code stream is parsed to determine the third filter coefficient; where the reconstructed aggregate slice is obtained by aggregating n reconstructed slices, and n is an integer greater than 1;
- Filtering is performed on the reconstructed aggregation slice according to the third filter coefficient, and a filtered aggregation slice corresponding to the reconstructed aggregation slice is determined.
- It should be noted that if the third filtering identification information indicates that the reconstructed aggregate slice is not to be filtered, the step of parsing the code stream to determine the third filter coefficient is not performed; that is, the filtering step for the reconstructed aggregate slice is skipped.
- the reconstructed aggregate slice is filtered according to the third filter coefficient.
- Specifically, the K 3 target points corresponding to the first point in the reconstructed aggregate slice are filtered according to the third filter coefficient to determine the filtered aggregate slice. The first point here represents a point in the reconstructed aggregate slice; the K 3 target points corresponding to the first point include the first point and the (K 3 -1) neighboring points adjacent to the first point in the reconstructed aggregate slice, where K 3 is an integer greater than 1; the (K 3 -1) neighboring points here specifically refer to the (K 3 -1) nearest neighbor points with the closest geometric distance to the first point in the reconstructed aggregate slice.
- In other words, n (1 ≤ n ≤ S, where S is the total number of slices) slices can first be combined into a relatively large point cloud, and the quality enhancement processing method is then applied to it; thus, the number of transmitted filter coefficients can be reduced and the compression efficiency improved, and this method can also be successfully implemented when S is not too large.
- the embodiments of the present application may also perform quality enhancement processing on reconstructed slices first, and then perform an overall quality enhancement processing on the entire reconstructed point cloud after all reconstructed slices are processed to achieve the purpose of secondary enhancement. Therefore, in some embodiments, the method may further include:
- If the fourth filtering identification information indicates that the first filtered point cloud is to be filtered, the code stream is parsed to determine the fourth filter coefficient;
- the first filtered point cloud is filtered according to the fourth filter coefficient, and a second filtered point cloud corresponding to the first filtered point cloud is determined.
- It should be noted that if the fourth filtering identification information indicates that the first filtered point cloud is not to be filtered, the step of parsing the code stream to determine the fourth filter coefficient is not performed; that is, the filtering step for the first filtered point cloud is skipped.
- the first filtered point cloud is filtered according to the fourth filter coefficient.
- Specifically, the K 4 target points corresponding to the first point in the first filtered point cloud are filtered according to the fourth filter coefficient to determine the second filtered point cloud. The first point here represents a point in the first filtered point cloud; the K 4 target points corresponding to the first point include the first point and the (K 4 -1) neighboring points adjacent to the first point in the first filtered point cloud, where K 4 is an integer greater than 1; the (K 4 -1) neighboring points here specifically refer to the (K 4 -1) nearest neighbor points with the closest geometric distance to the first point in the first filtered point cloud.
- K 1 , K 2 , K 3 and K 4 may be the same or different.
- K 1 , K 2 , K 3 , and K 4 can all be set to 16, but this is not specifically limited in this embodiment.
- the decoder can skip the filtering process and apply the reconstructed point cloud obtained by the original program, that is, the reconstructed point cloud will no longer be filtered and updated.
- In short, the embodiment of this application proposes a technique for adaptively performing Wiener filtering on the YUV components of the attribute information during encoding and decoding, so as to better enhance the quality of the point cloud. Specifically, based on the number of points in the reconstructed point cloud, it can be decided whether to perform Wiener filtering on the entire reconstructed point cloud or to filter the divided slices separately. This not only preserves point cloud compression efficiency to the greatest extent, but is also more universal; that is, this method has application value for any point cloud.
- the embodiment of the present application proposes an adaptive point cloud color quality improvement method, which is suitable for any point cloud sequence, especially for sequences with densely distributed points and medium code rate sequences, and has a more prominent optimization effect.
- the number of points in the point cloud is used as prior knowledge to achieve the greatest trade-off between efficiency and generality. Specifically, this is achieved by deciding whether point cloud processing is applied to the entire reconstructed point cloud or to each point cloud slice individually.
- When point cloud processing is applied to the entire point cloud, it usually has the advantages of slightly better compression efficiency and slightly lower time complexity; the disadvantage is that it occupies a large amount of resources such as memory, and when memory is limited it is difficult for it to perform its due function on large point clouds.
- In the embodiment of the present application, the preset threshold T on the number of points is set in advance. The setting of this value should be based on the specific conditions of the current device, so as to maximize the improvement in compression efficiency; therefore, this technical solution is not limited to Wiener filtering operations, and can provide a solution for any method that is constrained by resources.
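- The adaptive decision described above can be sketched as follows (the function name and the example threshold value are illustrative assumptions; as noted, T should be chosen according to the specific conditions of the device):

```python
def choose_filtering_mode(num_points, threshold_t):
    """If the reconstructed point cloud is large, filter each slice separately to
    keep memory usage bounded; otherwise, filter the whole cloud at once."""
    return "per_slice" if num_points >= threshold_t else "whole_cloud"

# illustrative threshold: slices hold roughly 0.8-1 million points
assert choose_filtering_mode(2_000_000, 1_100_000) == "per_slice"
assert choose_filtering_mode(500_000, 1_100_000) == "whole_cloud"
```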
- It should also be noted that the Wiener filter proposed in the embodiment of this application can be used within the prediction loop, that is, as an in-loop filter, in which case the filtered result can be used as a reference for decoding subsequent point clouds; it can also be used outside the prediction loop, that is, as a post-processing filter, in which case the filtered result is not used as a reference for decoding subsequent point clouds. This application does not specifically limit this.
- In one possible implementation, if the Wiener filter proposed in the embodiment of the present application is an in-loop filter, the parameter information indicating the filtering process, such as the filter identification information, needs to be written into the code stream, and the filter coefficients also need to be written into the code stream; accordingly, after determining not to perform filtering, the filter coefficients are not written into the code stream.
- In another possible implementation, if the Wiener filter proposed in the embodiment of this application is a post-processing filter, the filter coefficients corresponding to the filter are located in a separate auxiliary information data unit (for example, supplemental enhancement information, SEI); in this case, if the decoder does not obtain the supplemental enhancement information, it will not filter the reconstructed point cloud. In other words, the filter coefficients and other information corresponding to the filter are carried in an auxiliary information data unit.
- In addition, after determining to perform filtering, the parameter information indicating the filtering process, such as the filter identification information, needs to be written into the code stream, and the filter coefficients also need to be written into the code stream; accordingly, after determining not to perform filtering, the parameter information indicating filtering, such as the filter identification information, may not be written into the code stream, and the filter coefficients are likewise not written into the code stream.
- In addition, one or more parts of the reconstructed point cloud/reconstructed slice can be selected for filtering; that is, the control range of the filtering identification information can be the entire reconstructed point cloud/reconstructed slice, or a certain part of the reconstructed point cloud/reconstructed slice, which is not specifically limited here.
- This embodiment provides a decoding method, which is applied to the decoder.
- The first filter identification information is determined; if the first filter identification information indicates that a reconstructed slice of the reconstructed point cloud is to be filtered, the code stream is parsed to determine the first filter coefficient; and the reconstructed slice is filtered according to the first filter coefficient to determine the filtered slice corresponding to the reconstructed slice.
- In this way, the encoding end performs filtering based on the divided reconstructed slices, and only after determining that a reconstructed slice needs to be filtered are the corresponding filter coefficients passed to the decoder; correspondingly, the decoder can directly decode to obtain the filter coefficients and then use them to filter the reconstructed slices. Filtering based on reconstructed slices not only avoids memory overflow due to limited memory resources when processing large point clouds, making the method highly universal; it also optimizes the reconstructed point cloud, which can improve the quality of the point cloud, save bit rate, and improve encoding and decoding efficiency.
- In the related technologies, there is a point cloud quality enhancement technique that uses Wiener filtering to post-process the attribute information at the decoding end, and it has achieved good results.
- At the encoding end, the initial point cloud and the reconstructed point cloud are used as input; if the order of the Wiener filter is K, then the neighborhood of each point is the K nearest neighbors of that point (taking the point itself into account).
- Then the optimal coefficients of the Wiener filter are calculated for each color channel (Y, U, V) of the attribute information, and the coefficients are used to filter the corresponding color channel of the reconstructed point cloud, finally obtaining the quality-enhanced filtered point cloud.
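- The "optimal coefficients" above are the classical least-squares (Wiener) solution computed per color channel; a minimal sketch, assuming numpy and that the matrix P already holds the reconstructed values of the K neighbors of each point:

```python
import numpy as np

def wiener_coefficients(P, original):
    """Solve for H minimizing ||P @ H - original||^2 (the normal equations).

    P        : (N, K) reconstructed attribute values of the K neighbors of each point
    original : (N,)   original attribute values of the same points
    """
    # lstsq is numerically safer than forming (P^T P)^-1 P^T explicitly
    H, *_ = np.linalg.lstsq(P, original, rcond=None)
    return H

# toy check: if the target is exactly reproducible, filtering reproduces it
P = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 7.0]])
H = wiener_coefficients(P, P[:, 0])
assert np.allclose(P @ H, P[:, 0])
```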
- the filter identification information and filter coefficients are written into the code stream.
- At the decoding end, the filter identification information is first decoded to determine the color channels that need to be filtered; the filter coefficients are then decoded and used to perform filtering, and the values of the reconstructed point cloud are overwritten with the obtained values to obtain a quality-improved filtered point cloud.
- In addition, this method can be applied to all encoding methods except attribute-lossless ones, that is: geometry lossless + attribute lossy, geometry lossy + attribute lossy, and geometry lossless + attribute nearly lossless; it is also suitable for transformation methods such as RAHT, Predicting Transform, and Lifting Transform.
- Furthermore, this method compares the impact of the number of nearest neighbor points, that is, the value of K, on the experimental results, and optimizes accordingly; choosing K around 16 best achieves the trade-off between effect and time complexity.
- However, the point cloud attribute information Wiener filtering quality enhancement technology proposed in the related technologies has the following problem: some point clouds cannot apply this quality enhancement technology.
- Based on this, the embodiment of this application proposes a technique for adaptively Wiener filtering the attribute information during encoding and decoding (such as the YUV components of the color reconstruction values); compared with the related technologies, it has increased universality while achieving similar test results.
- In the related technologies, the point cloud is sliced according to the number of points in the quantized point cloud; that is, a point cloud with a larger number of points is divided into multiple 3D slices (three-dimensional slices), and each slice contains about 800,000 to 1,000,000 points.
- In this way, the G-PCC encoding end performs geometric encoding and attribute encoding on each slice, and finally all the slices are aggregated together to generate the reconstructed point cloud.
- Wiener filtering is performed on the entire reconstructed point cloud, but this method is difficult to apply to some large point clouds.
- The specific process can be: after the initial point cloud is sliced, the initial slices (i.e., initial point cloud slices) can be obtained; each initial slice is then encoded and reconstructed to obtain a reconstructed slice (i.e., a reconstructed point cloud slice); next, the initial slice and the reconstructed slice are separately input to the Wiener filter; after calculation by the Wiener filter, its output is the first filter coefficient, which can be selectively written into the attribute code stream.
- FIG. 10 shows the case where the number of points in the reconstructed point cloud is less than the preset threshold (which can be represented by T).
- The specific process can be as follows: after the initial point cloud is encoded and reconstructed, the reconstructed point cloud can be obtained; then the reconstructed point cloud and the initial point cloud are input to the Wiener filter; after calculation by the Wiener filter, its output is the second filter coefficient, which can be selectively written into the attribute code stream.
- Among them, after geometric quantization of the point cloud at the G-PCC encoding end, the quantized point cloud can be obtained; the quantized point cloud has the same number of points as the final reconstructed point cloud (and, if the geometry is lossless, also the same number of points as the initial point cloud).
- In this way, based on the number of points in the quantized point cloud, the method or mode of Wiener filtering the point cloud can be determined.
- Here, the Wiener filtering method for sliced point clouds is used. Since G-PCC already divides the point cloud into slices before geometric encoding, quality enhancement can be based directly on this slicing. After the color attribute encoding of the point cloud is completed and reconstruction finishes, the color-lossy reconstructed slices and the corresponding initial slices are input to the Wiener filter.
- The function of the encoding-side Wiener filter is mainly to calculate the optimal filter coefficients, while also determining whether the quality of the point cloud (or point cloud slice) after Wiener filtering has improved, and selectively transmitting the optimal filter coefficients.
- Specifically, KNN is used to construct a neighborhood for each point in the reconstructed slice; the size of K is the number of nearest neighbor points searched for each point, which is also the order of the Wiener filter.
- Then the filter coefficients are calculated and filtering is performed separately on the three YUV color components of the point cloud slice (if the point cloud has not been converted to YUV format in G-PCC, the point cloud slice first needs to be converted from RGB to YUV). Then, on each color component, the RD cost (RDCost) of the filtered slice relative to the initial slice and that of the reconstructed slice relative to the initial slice are calculated separately; the specific calculation method is shown in the above formula (14).
- If the rate-distortion cost of the filtered slice decreases on a certain color component, the optimal coefficient calculated for this component will be written into the code stream and passed to the decoding end, so as to perform post-processing on that component of the decoded reconstructed slice. For the entire slice, if the rate-distortion value of one or several components is reduced relative to the reconstructed slice (for example, only the rate-distortion value of the V component is reduced), then only the V component will be filtered at the decoding end.
- At this time, a 1×3 decision array is written into the code stream as the first filter identification information (to indicate whether post-processing, that is, the filtering operation, is needed at the decoding end and which components need it; for example, (0,1,1) describes filtering of the U and V components), and then the first filter coefficient is written into the code stream. If the cost of all components increases, there is no need to write the first filter identification information or the first filter coefficient into the code stream, and the Wiener filtering operation is not performed at the decoding end. Note that information such as the filter coefficients is written into the code stream immediately after the attribute encoding information of the slice.
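- The per-component decision described above can be sketched as follows (the RD-cost inputs and the function name are illustrative assumptions; the real costs come from formula (14)):

```python
def build_decision_array(rd_filtered, rd_reconstructed):
    """Return the 1x3 decision array over (Y, U, V).

    rd_filtered      : RD cost per component after Wiener filtering
    rd_reconstructed : RD cost per component without filtering
    A component is marked 1 only if filtering lowered its RD cost.
    """
    return tuple(1 if f < r else 0
                 for f, r in zip(rd_filtered, rd_reconstructed))

# e.g. filtering helps only the U and V components -> (0, 1, 1)
decision = build_decision_array((10.0, 4.0, 5.0), (9.5, 4.8, 5.6))
assert decision == (0, 1, 1)
```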
- However, this slice-based point cloud quality enhancement method has a larger code stream expenditure due to the larger number of transmitted filter coefficients: each slice may have coefficient information written into the code stream, which may slightly affect the compression efficiency. Therefore, this method is used only when the number of points in the point cloud is greater than or equal to the preset threshold T.
- Otherwise, the Wiener filtering quality enhancement method is performed on the entire reconstructed point cloud. This method is the same as that of the related technologies, and its main idea is similar to that of the method described in (3).
- The differences are: the reconstructed point cloud and the initial point cloud are input to the Wiener filter; when calculating the rate-distortion cost, R_all is the attribute code stream size of the entire point cloud; and the second filter identification information transmitted at the end should differ in form from the first filter identification information used in the sliced processing method (for example, (0,2,2) indicating filtering of the U and V components), so that the decoding end can determine whether to filter the slices or the entire point cloud.
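- Under the illustrative convention above (1-valued entries for per-slice filtering, 2-valued entries for whole-cloud filtering), the decoding end could infer the filtering scope from the array itself; a sketch under that assumption:

```python
def filtering_mode(decision_array):
    """Infer the filtering scope from the decision array, assuming the
    convention that per-slice decisions use 1 and whole-cloud decisions use 2."""
    if 2 in decision_array:
        return "whole_cloud"
    if 1 in decision_array:
        return "per_slice"
    return "none"

assert filtering_mode((0, 1, 1)) == "per_slice"
assert filtering_mode((0, 2, 2)) == "whole_cloud"
```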
- The specific process can be: after the attribute code stream is decoded and reconstructed, the reconstructed slices can be obtained; the code stream then continues to be parsed, and the first filter identification information (such as a decision array) can be obtained; when the first filter identification information indicates that a reconstructed slice is to be filtered, decoding can continue to obtain the first filter coefficient; then the first filter coefficient and the reconstructed slice are input to the Wiener filter, which outputs a quality-enhanced filtered slice; the code stream then continues to be parsed to filter the next reconstructed slice; after all the filtered slices are obtained, these filtered slices can be combined to obtain the filtered point cloud.
- After the attribute code stream is decoded and reconstructed, a reconstructed point cloud is obtained; at this time, by continuing to parse the code stream, the second filter identification information (such as a decision array) can be obtained; when the second filter identification information indicates that the reconstructed point cloud is to be filtered, decoding can continue to obtain the second filter coefficient; then the second filter coefficient and the reconstructed point cloud are input to the Wiener filter, which outputs a quality-enhanced filtered point cloud.
- More specifically, after the decoder decodes the attribute residual of each slice, it continues to read the code stream and decodes the first filter identification information in the form of a 1×3 array. If, after judgment, it is determined that certain color components need to be filtered, this means that the encoding end has transmitted the optimal coefficients; at this time, decoding continues to obtain the corresponding first filter coefficient. Otherwise, decoding does not continue, the Wiener filtering part for this slice is skipped, and reconstruction proceeds according to the original procedure.
- After obtaining the first filter coefficient, the reconstructed slice and the first filter coefficient can be directly input to the Wiener filter at the decoder; through calculation, a quality-enhanced filtered slice can be obtained, which overwrites the reconstructed slice.
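- The decoder-side per-slice flow described above can be sketched as follows (the stream-reading helpers and the filter callback are hypothetical placeholders, not an actual G-PCC API):

```python
def decode_slice_filtering(stream, reconstructed_slice, wiener_filter):
    """Hypothetical per-slice post-processing at the decoder.

    stream.read_decision_array() -> (dY, dU, dV), 1 = filter this component
    stream.read_coefficients(c)  -> first filter coefficients for component c
    wiener_filter(slice, c, h)   -> slice with component c filtered by h
    """
    decision = stream.read_decision_array()
    if not any(decision):
        return reconstructed_slice          # nothing transmitted: skip filtering
    for comp, flagged in zip("YUV", decision):
        if flagged:
            coeffs = stream.read_coefficients(comp)
            reconstructed_slice = wiener_filter(reconstructed_slice, comp, coeffs)
    return reconstructed_slice
```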
- In contrast, the Wiener filtering quality enhancement method for the entire point cloud only needs, after all attribute residuals are decoded, to continue decoding to obtain the second filter identification information and determine whether to continue decoding to obtain the second filter coefficient; the second filter coefficient and the reconstructed point cloud are then input to the Wiener filter to perform the quality enhancement operation, and the filtered point cloud overwrites the reconstructed point cloud.
- Figure 12 is a schematic diagram of the test results of the predicting transformation under the CY test condition provided by the embodiment of the present application;
- Figure 13 is a schematic diagram of the test results of the lifting transformation under the C1 test condition provided by the embodiment of the present application;
- Figure 14 is a schematic diagram of the test results of the lifting transformation under the C2 test condition provided by the embodiment of the present application;
- Figure 15 is a schematic diagram of the test results of the RAHT transformation under the C1 test condition provided by the embodiment of the present application;
- Figure 16 is a schematic diagram of the test results of the RAHT transformation under the C2 test condition provided by the embodiment of the present application.
- Among them, the CY condition is the lossless geometry, lossy attribute coding method; the C1 condition is the lossless geometry, nearly lossless attribute coding method; and the C2 condition is the lossy geometry, lossy attribute coding method.
- End-to-End BD-AttrRate indicates the BD-Rate of the end-to-end attribute value for the attribute code stream.
- Among them, BD-Rate reflects the difference between the PSNR curves under the two conditions (with and without filtering). When BD-Rate decreases, it means that at equal PSNR the code rate decreases and performance improves; otherwise, performance decreases. That is, the more BD-Rate decreases, the better the compression effect.
- Cat1-A average and Cat1-B average respectively represent the average value of the point cloud sequence test results of the two data sets. Finally, Overall average is the average of all sequence test effects.
- In other words, n (1 ≤ n ≤ S, where S is the total number of slices) slices are first combined into a larger point cloud slice, and the quality enhancement method is then applied to that point cloud slice; this can reduce the number of transmitted coefficients and improve compression efficiency, and this method can be successfully implemented when S is not too large.
- Alternatively, the quality enhancement processing of the sliced point cloud can be performed first; after all slices are processed, an overall quality enhancement is performed on the entire reconstructed point cloud to achieve two-level enhancement, with the purpose of further improving quality and compression efficiency.
- In short, with the encoding and decoding methods proposed in the embodiments of this application, the adaptive color Wiener filtering quality enhancement technology for the G-PCC point cloud decoding end proposed in this solution can not only improve the quality of the point cloud, but also improve universality.
- the technical solutions of the embodiments of the present application have application value for all point clouds.
- An adaptive point cloud color quality improvement method is proposed here. Compared with the related technologies, this method mainly uses the number of points in the point cloud as prior knowledge, thereby achieving the greatest trade-off between efficiency and generality; specifically, this is achieved by deciding whether point cloud processing is applied to the entire reconstructed point cloud or to each point cloud slice individually. When applied to the entire point cloud, it usually has the advantages of slightly better compression efficiency and slightly lower time complexity.
- The disadvantage is that it occupies a large amount of resources such as memory; when memory is limited, it is difficult for it to play its due role on large point clouds. When applied to each slice, the improvement in compression efficiency is slightly lower than the former because more information is transmitted, but this approach consumes fewer resources: as long as the current device can run the original program, this technique can be successfully implemented and run. Therefore, when resources are limited, per-slice processing can be a good choice.
- In addition, the preset threshold T is set in advance; the setting of this value should be based on the specific conditions of the current device, so as to maximize the improvement in compression efficiency. Therefore, this solution is not limited to Wiener filtering operations and can provide a solution for any method that is constrained by resources. In this way, it can not only improve the quality of the point cloud, but also has universal applicability, and can also save bit rate, thereby improving encoding and decoding performance.
- FIG. 17 shows a schematic structural diagram of an encoder 300 provided by an embodiment of the present application.
- the encoder 300 may include: a first determination unit 3001, a first filtering unit 3002 and an encoding unit 3003; wherein,
- the first determination unit 3001 is configured to determine the reconstructed slice of the reconstructed point cloud; and when the reconstructed point cloud meets the preset conditions, determine the first filter coefficient according to the reconstructed slice and the initial slice corresponding to the reconstructed slice;
- the first filtering unit 3002 is configured to filter the reconstructed slice according to the first filter coefficient, and determine the filtered slice corresponding to the reconstructed slice;
- the first determining unit 3001 is further configured to determine the first filtering identification information according to the filtered slice; wherein the first filtering identification information indicates whether to perform filtering processing on the reconstructed slice;
- the encoding unit 3003 is configured to encode the first filter identification information; and if the first filter identification information indicates filtering the reconstructed slice, encode the first filter coefficient;
- the encoding unit 3003 is also configured to write the obtained encoded bits into the code stream.
- the first determination unit 3001 is further configured to determine the parameter information of the reconstructed point cloud; and determine whether the reconstructed point cloud satisfies the preset condition based on the parameter information of the reconstructed point cloud.
- the first determination unit 3001 is further configured to determine the number of points in the reconstructed point cloud; and if the number of points in the reconstructed point cloud is greater than or equal to a preset threshold, determine that the reconstructed point cloud meets the preset condition; Alternatively, if the number of points in the reconstructed point cloud is less than the preset threshold, it is determined that the reconstructed point cloud does not meet the preset condition.
- In some embodiments, the first determination unit 3001 is further configured to perform geometric quantization processing on the initial point cloud to obtain a quantized point cloud; and determine the number of points in the reconstructed point cloud based on the number of points in the quantized point cloud.
- In some embodiments, the first determination unit 3001 is further configured to perform slicing processing on the initial point cloud to obtain at least one initial slice; sequentially perform encoding and reconstruction processing on the at least one initial slice to obtain at least one reconstructed slice; and aggregate the at least one reconstructed slice to obtain the reconstructed point cloud.
- the first filtering unit 3002 is further configured to determine K1 target points corresponding to the first point in the reconstructed slice; and perform filtering processing on the K1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient to determine the filtered slice; wherein the first point represents a point in the reconstructed slice, and the K1 target points corresponding to the first point include the first point and its (K1-1) neighboring points in the reconstructed slice, K1 being an integer greater than 1.
- the first filtering unit 3002 is also configured to search a preset number of candidate points in the reconstructed slice using a K-nearest-neighbor search method based on the first point in the reconstructed slice; and calculate the distance values between the first point and the preset number of candidate points respectively, determining the relatively small (K1-1) distance values from the obtained preset number of distance values; and determine the candidate points corresponding to the (K1-1) distance values as the (K1-1) neighboring points, and determine the first point and the (K1-1) neighboring points as the K1 target points corresponding to the first point.
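The neighbor-selection procedure above amounts to a K-nearest-neighbor query; a minimal brute-force sketch (not the patent's search implementation) might look like:

```python
import numpy as np

def target_points(points, i, k):
    """Return indices of the K target points of point i: point i itself
    plus its (k-1) nearest neighbours by Euclidean distance."""
    dists = np.linalg.norm(points - points[i], axis=1)
    order = np.argsort(dists)  # index i comes first (distance 0)
    return order[:k]
```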
- the first filtering unit 3002 is further configured to perform filtering processing on the K1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient, and determine the filtered value of the attribute information of the first point in the reconstructed slice; and after determining the filtered value of the attribute information of at least one point in the reconstructed slice, determine the filtered slice according to the filtered value of the attribute information of the at least one point in the reconstructed slice.
- the first determination unit 3001 is further configured to determine the first attribute parameter based on the original values of the attribute information of the points in the initial slice; and determine the second attribute parameter based on the reconstructed values of the attribute information of the K1 target points corresponding to the points in the reconstructed slice; and determine the first filter coefficient based on the first attribute parameter and the second attribute parameter.
- the first determining unit 3001 is further configured to determine the cross-correlation parameter according to the first attribute parameter and the second attribute parameter; and determine the auto-correlation parameter according to the second attribute parameter; and perform coefficient calculation according to the cross-correlation parameter and the auto-correlation parameter to obtain the first filter coefficient.
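Deriving filter coefficients from a cross-correlation and an auto-correlation parameter is the classic Wiener/least-squares setup; a sketch under that assumption (the patent does not spell out the matrix algebra) is:

```python
import numpy as np

def filter_coefficients(original, neighbors_recon):
    """neighbors_recon: N x K matrix of reconstructed attribute values of
    each point's K target points; original: N original attribute values.
    Auto-correlation R = B^T B, cross-correlation p = B^T y, and the
    coefficients h solve R h = p."""
    B = np.asarray(neighbors_recon, dtype=float)
    y = np.asarray(original, dtype=float)
    R = B.T @ B      # auto-correlation parameter
    p = B.T @ y      # cross-correlation parameter
    return np.linalg.solve(R, p)
```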
- the first determining unit 3001 is further configured to determine the first cost value of the to-be-processed component of the attribute information of the reconstructed slice, and determine the second cost value of the to-be-processed component of the attribute information of the filtered slice; and determine the first filter identification information of the component to be processed according to the first cost value and the second cost value; and obtain the first filter identification information based on the first filter identification information of the component to be processed.
- the first determination unit 3001 is further configured to determine that the value of the first filter identification information of the component to be processed is the first value if the second cost value is less than the first cost value; and determine that the value of the first filter identification information of the component to be processed is the second value if the second cost value is greater than the first cost value.
- the first determination unit 3001 is further configured to use the rate-distortion cost method to calculate the cost value of the to-be-processed component of the attribute information of the reconstructed slice, taking the obtained first rate-distortion value as the first cost value; and use the rate-distortion cost method to calculate the cost value of the to-be-processed component of the attribute information of the filtered slice, taking the obtained second rate-distortion value as the second cost value.
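The per-component decision then reduces to comparing the two rate-distortion costs; assuming the first value is 1 and the second value is 0 (the patent leaves the concrete values open), a sketch is:

```python
def component_filter_flag(cost_reconstructed, cost_filtered):
    """First filter identification information for one component:
    1 (first value) when filtering lowers the cost, else 0 (second value)."""
    return 1 if cost_filtered < cost_reconstructed else 0
```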
- the first determination unit 3001 is further configured to use a rate-distortion cost method to calculate the cost value of the to-be-processed component of the attribute information of the reconstructed slice to obtain the first rate-distortion value, and use a preset performance measurement index to calculate the performance value of the to-be-processed component to obtain the first performance value; and likewise obtain the second rate-distortion value and the second performance value for the filtered slice.
- the first determining unit 3001 is further configured to determine that the value of the first filter identification information of the component to be processed is the first value if the second performance value is greater than the first performance value and the second rate-distortion value is less than the first rate-distortion value; and determine that the value of the first filter identification information of the component to be processed is the second value if the second performance value is less than the first performance value.
- the first determining unit 3001 is further configured to determine that the value of the first filter identification information of the component to be processed is the first value if the second performance value is greater than the first performance value and the second rate-distortion value is less than the first rate-distortion value; and determine that the value of the first filter identification information of the component to be processed is the second value if the second rate-distortion value is greater than the first rate-distortion value.
- the first determining unit 3001 is further configured to determine to perform filtering processing on the to-be-processed component of the attribute information of the reconstructed slice if the value of the first filter identification information of the component to be processed is the first value; and If the value of the first filter identification information of the component to be processed is the second value, it is determined that the component to be processed of the attribute information of the reconstructed slice is not to be filtered.
- the attribute information includes a color component
- the color component includes at least one of the following: a first color component, a second color component, and a third color component; wherein, if the color component conforms to the RGB color space, it is determined that the first color component, the second color component, and the third color component are, in order, the R component, the G component, and the B component; and if the color component conforms to the YUV color space, it is determined that the first color component, the second color component, and the third color component are, in order, the Y component, the U component, and the V component.
- the first determining unit 3001 is further configured to, when the component to be processed is a color component, determine the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component; and obtain the first filter identification information according to the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component.
- the first determining unit 3001 is further configured to: if at least one of the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component is the first value, determine that the first filter identification information indicates filtering of the reconstructed slice; and if the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component are all the second value, determine that the first filter identification information indicates that the reconstructed slice is not to be filtered.
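Combining the three per-component flags into the slice-level identification follows directly; again assuming 1/0 for the first/second values:

```python
def slice_filter_flag(flag_c1, flag_c2, flag_c3):
    """Filter the reconstructed slice if any colour component's flag is
    the first value (1); skip only when all three are the second value (0)."""
    return 1 if any(f == 1 for f in (flag_c1, flag_c2, flag_c3)) else 0
```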
- the first determining unit 3001 is further configured to not encode the first filter coefficient if the first filter identification information indicates that the reconstructed slice is not to be filtered.
- the first determination unit 3001 is also configured to determine the second filter coefficient based on the reconstructed point cloud and the initial point cloud corresponding to the reconstructed point cloud when the reconstructed point cloud does not meet the preset conditions;
- the first filtering unit 3002 is also configured to perform filtering processing on the reconstructed point cloud according to the second filter coefficient, and determine the filtered point cloud corresponding to the reconstructed point cloud;
- the first determination unit 3001 is further configured to determine second filter identification information based on the filtered point cloud; wherein the second filter identification information indicates whether to filter the reconstructed point cloud;
- the encoding unit 3003 is also configured to encode the second filter identification information; and if the second filter identification information indicates filtering of the reconstructed point cloud, encode the second filter coefficient; and write the resulting encoded bits into the code stream.
- the first filtering unit 3002 is further configured to determine K2 target points corresponding to the first point in the reconstructed point cloud; and perform filtering processing on the K2 target points corresponding to the first point in the reconstructed point cloud according to the second filter coefficient to determine the filtered point cloud; wherein the first point represents a point in the reconstructed point cloud, and the K2 target points corresponding to the first point include the first point and its (K2-1) neighboring points in the reconstructed point cloud, K2 being an integer greater than 1.
- the first filtering unit 3002 is also configured to use the K-nearest-neighbor search method to search a preset number of candidate points in the reconstructed point cloud based on the first point in the reconstructed point cloud; and calculate the distance values between the first point and the preset number of candidate points respectively, determining the relatively small (K2-1) distance values from the obtained preset number of distance values; and determine the candidate points corresponding to the (K2-1) distance values as the (K2-1) neighboring points, and determine the first point and the (K2-1) neighboring points as the K2 target points corresponding to the first point.
- the first filtering unit 3002 is also configured to perform filtering processing on the K2 target points corresponding to the first point in the reconstructed point cloud according to the second filter coefficient, and determine the filtered value of the attribute information of the first point in the reconstructed point cloud; and after determining the filtered value of the attribute information of at least one point in the reconstructed point cloud, determine the filtered point cloud according to the filtered value of the attribute information of the at least one point in the reconstructed point cloud.
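Applying the filter is then a weighted sum of the reconstructed values of each point's K target points with the filter coefficients; a minimal sketch:

```python
import numpy as np

def apply_filter(neighbors_recon, coeffs):
    """Filtered attribute value of each point: dot product of the
    reconstructed values of its K target points with the coefficients."""
    return np.asarray(neighbors_recon, dtype=float) @ np.asarray(coeffs, dtype=float)
```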
- the first determination unit 3001 is further configured to determine the third attribute parameter based on the original values of the attribute information of the points in the initial point cloud; and determine the fourth attribute parameter based on the reconstructed values of the attribute information of the K2 target points corresponding to the points in the reconstructed point cloud; and determine the second filter coefficient based on the third attribute parameter and the fourth attribute parameter.
- the first determining unit 3001 is further configured to determine the cross-correlation parameter according to the third attribute parameter and the fourth attribute parameter; and determine the auto-correlation parameter according to the fourth attribute parameter; and perform coefficient calculation according to the cross-correlation parameter and the auto-correlation parameter to obtain the second filter coefficient.
- the first determination unit 3001 is further configured to determine the third cost value of the to-be-processed component of the attribute information of the reconstructed point cloud, and determine the fourth cost value of the to-be-processed component of the attribute information of the filtered point cloud; and determine the second filter identification information of the component to be processed according to the third cost value and the fourth cost value; and obtain the second filter identification information according to the second filter identification information of the component to be processed.
- the first determination unit 3001 is further configured to determine that the value of the second filter identification information of the component to be processed is the third value if the fourth cost value is less than the third cost value; and determine that the value of the second filter identification information of the component to be processed is the fourth value if the fourth cost value is greater than the third cost value.
- the first determining unit 3001 is further configured to determine to perform filtering processing on the to-be-processed component of the attribute information of the reconstructed point cloud if the value of the second filter identification information of the component to be processed is the third value; And if the value of the second filter identification information of the component to be processed is the fourth value, it is determined that the component to be processed of the attribute information of the reconstructed point cloud is not to be filtered.
- the first determining unit 3001 is further configured to, when the component to be processed is a color component, determine the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component; and obtain the second filter identification information based on the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component.
- the first determining unit 3001 is further configured to: if at least one of the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component is the third value, determine that the second filter identification information indicates filtering of the reconstructed point cloud; and if the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component are all the fourth value, determine that the second filter identification information indicates that the reconstructed point cloud is not to be filtered.
- the first determination unit 3001 is further configured to not encode the second filter coefficient if the second filter identification information indicates that the reconstructed point cloud is not to be filtered.
- the first determining unit 3001 is also configured to, if there are multiple initial slices, determine n initial slices from the multiple initial slices and perform aggregation processing on the n initial slices to obtain an initial aggregate slice; and determine the reconstructed aggregate slice corresponding to the initial aggregate slice; wherein the reconstructed aggregate slice is obtained by aggregating the reconstructed slices corresponding to the n initial slices, n being an integer greater than 1; and determine the third filter coefficient according to the initial aggregate slice and the reconstructed aggregate slice;
- the first filtering unit 3002 is also configured to filter the reconstructed aggregate slice according to the third filter coefficient, and determine the filtered aggregate slice corresponding to the reconstructed aggregate slice;
- the first determining unit 3001 is further configured to determine third filtering identification information based on the filtered aggregate slice; wherein the third filtering identification information indicates whether to perform filtering processing on the reconstructed aggregated slice;
- the encoding unit 3003 is also configured to encode the third filter identification information;
- the first determining unit 3001 is further configured to encode the third filter coefficient if the third filter identification information indicates that the reconstructed aggregate slice is filtered;
- the encoding unit 3003 is also configured to write the obtained encoded bits into the code stream.
- the first filtering unit 3002 is further configured to determine at least one filtered slice after performing filtering processing on at least one reconstructed slice corresponding to the initial point cloud respectively;
- the first determination unit 3001 is further configured to perform aggregation processing on the at least one filtered slice and determine the first filtered point cloud;
- the first determination unit 3001 is also configured to determine the fourth filter coefficient based on the initial point cloud and the first filtered point cloud;
- the first filtering unit 3002 is also configured to filter the first filtered point cloud according to the fourth filter coefficient, and determine the second filtered point cloud corresponding to the first filtered point cloud;
- the first determination unit 3001 is further configured to determine fourth filtering identification information based on the second filtered point cloud; wherein the fourth filtering identification information indicates whether to perform filtering processing on the first filtered point cloud;
- the encoding unit 3003 is also configured to encode the fourth filter identification information; and if the fourth filter identification information indicates filtering of the first filtered point cloud, encode the fourth filter coefficient; and write the obtained encoded bits into the code stream.
- the first determining unit 3001 is further configured to determine the predicted values of the attribute information of the points in the initial slice; and determine the residual values of the attribute information of the points in the initial slice according to the original values and the predicted values of the attribute information of the points in the initial slice;
- the encoding unit 3003 is also configured to encode the residual values of the attribute information of the points in the initial slice, and write the resulting encoded bits into the code stream.
- the "unit" may be part of a circuit, part of a processor, part of a program or software, etc., and of course may also be a module, or may be non-modular.
- each component in this embodiment can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
- the above integrated units can be implemented in the form of hardware or software function modules.
- if the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solution of this embodiment, in essence, or in other words the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes a number of instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the method described in this embodiment.
- the aforementioned storage media include: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
- the embodiment of the present application provides a computer storage medium for use in the encoder 300.
- the computer storage medium stores a computer program.
- when the computer program is executed by the first processor, the method described in any one of the foregoing embodiments is implemented.
- the encoder 300 may include: a first communication interface 3101, a first memory 3102, and a first processor 3103; the various components are coupled together through a first bus system 3104. It can be understood that the first bus system 3104 is used to implement connection and communication between these components. In addition to the data bus, the first bus system 3104 also includes a power bus, a control bus, and a status signal bus. However, for the sake of clarity, the various buses are labeled as the first bus system 3104 in FIG. 18. Wherein:
- the first communication interface 3101 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
- the first memory 3102 is used to store a computer program capable of running on the first processor 3103;
- the first processor 3103 is configured to, when running the computer program, perform the steps of the foregoing encoding method, including: if the first filter identification information indicates that the reconstructed slice is to be filtered, encoding the first filter coefficient.
- the first memory 3102 in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories.
- the non-volatile memory can be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory.
- Volatile memory may be Random Access Memory (RAM), which is used as an external cache.
- by way of example but not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM).
- the first memory 3102 of the systems and methods described herein is intended to include, but is not limited to, these and any other suitable types of memory.
- the first processor 3103 may be an integrated circuit chip with signal processing capabilities. During the implementation process, each step of the above method can be completed by instructions in the form of hardware integrated logic circuits or software in the first processor 3103 .
- the above-mentioned first processor 3103 can be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
- a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
- the steps of the method disclosed in conjunction with the embodiments of the present application can be directly implemented by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
- the software module can be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other mature storage media in this field.
- the storage medium is located in the first memory 3102.
- the first processor 3103 reads the information in the first memory 3102 and completes the steps of the above method in combination with its hardware.
- the embodiments described in this application can be implemented using hardware, software, firmware, middleware, microcode, or a combination thereof.
- the processing unit can be implemented in one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units used to perform the functions described in this application, or a combination thereof.
- the technology described in this application can be implemented through modules (such as procedures, functions, etc.) that perform the functions described in this application.
- Software code may be stored in memory and executed by a processor.
- the memory can be implemented in the processor or external to the processor.
- the first processor 3103 is further configured to perform the method described in any one of the preceding embodiments when running the computer program.
- This embodiment provides an encoder.
- with this encoder, when the reconstructed point cloud meets the preset condition, the encoding end performs filtering processing based on the divided reconstructed slices and, after determining that a reconstructed slice needs to be filtered, passes the corresponding filter coefficients to the decoder; in this way, filtering based on reconstructed slices not only avoids memory overflow caused by limited memory resources when processing large point clouds, making it highly universal, but also optimizes the reconstructed point cloud, improves the quality of the point cloud, saves bit rate, and improves encoding and decoding efficiency.
- FIG. 19 shows a schematic structural diagram of a decoder 320 provided by an embodiment of the present application.
- the decoder 320 may include: a decoding unit 3201 and a second filtering unit 3202; wherein,
- the decoding unit 3201 is configured to parse the code stream and determine the first filter identification information; and if the first filter identification information indicates that the reconstructed slice of the reconstructed point cloud is filtered, parse the code stream and determine the first filter coefficient;
- the second filtering unit 3202 is configured to perform filtering processing on the reconstructed slice according to the first filter coefficient, and determine the filtered slice corresponding to the reconstructed slice.
- the decoder 320 may also include a second determination unit 3203 configured to determine the parameter information of the reconstructed point cloud; and determine whether the reconstructed point cloud satisfies the preset condition according to the parameter information of the reconstructed point cloud;
- the decoding unit 3201 is also configured to parse the code stream and determine the first filter identification information when the reconstructed point cloud meets the preset conditions.
- the second determination unit 3203 is further configured to determine the number of points in the reconstructed point cloud; and if the number of points in the reconstructed point cloud is greater than or equal to the preset threshold, determine that the reconstructed point cloud meets the preset condition; Alternatively, if the number of points in the reconstructed point cloud is less than the preset threshold, it is determined that the reconstructed point cloud does not meet the preset condition.
- the decoding unit 3201 is also configured to parse the code stream and determine the residual values of the attribute information of the points in the initial slice;
- the second determination unit 3203 is also configured to, after determining the predicted values of the attribute information of the points in the initial slice, determine the reconstructed values of the attribute information of the points in the initial slice based on the predicted values and the residual values of the attribute information of the points in the initial slice; and determine the reconstructed slice based on the reconstructed values of the attribute information of the points in the initial slice.
- the second determining unit 3203 is further configured to perform aggregation processing on the at least one reconstructed slice to determine the reconstructed point cloud after determining the at least one reconstructed slice.
- the second determination unit 3203 is further configured to determine K1 target points corresponding to the first point in the reconstructed slice; and perform filtering processing on the K1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient to determine the filtered slice; wherein the first point represents a point in the reconstructed slice, and the K1 target points corresponding to the first point include the first point and its (K1-1) neighboring points in the reconstructed slice, K1 being an integer greater than 1.
- the second determination unit 3203 is further configured to search a preset number of candidate points in the reconstructed slice using the K-nearest-neighbor search method based on the first point in the reconstructed slice; and calculate the distance values between the first point and the preset number of candidate points respectively, determining the relatively small (K1-1) distance values from the obtained preset number of distance values; and determine the candidate points corresponding to the (K1-1) distance values as the (K1-1) neighboring points, and determine the first point and the (K1-1) neighboring points as the K1 target points corresponding to the first point.
- the second filtering unit 3202 is further configured to perform filtering processing on the K1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient, and determine the filtered value of the attribute information of the first point in the reconstructed slice; and after determining the filtered value of the attribute information of at least one point in the reconstructed slice, determine the filtered slice according to the filtered value of the attribute information of the at least one point in the reconstructed slice.
- the second filtering unit 3202 is also configured to, if the first filtering identification information indicates that the reconstructed slice is not to be filtered, skip the step of parsing the code stream to determine the first filter coefficient, and directly determine the reconstructed slice as the filtered slice.
- the second determination unit 3203 is further configured to, after determining at least one filtered slice of the reconstructed point cloud, perform aggregation processing on the at least one filtered slice to determine the filtered point cloud.
- the decoding unit 3201 is further configured to parse the code stream to determine the second filter identification information, and, if the second filter identification information indicates that the reconstructed point cloud is to be filtered, to parse the code stream to determine the second filter coefficient;
- the second filtering unit 3202 is also configured to filter the reconstructed point cloud according to the second filter coefficient, and determine the filtered point cloud corresponding to the reconstructed point cloud.
- the decoding unit 3201 is further configured to parse the code stream to determine the second filter identification information when the reconstructed point cloud does not meet the preset condition.
- the second determination unit 3203 is further configured to, if the second filter identification information indicates that the reconstructed point cloud is not to be filtered, skip the step of parsing the code stream to determine the second filter coefficient and directly take the reconstructed point cloud as the filtered point cloud.
- the second determination unit 3203 is further configured to determine the K2 target points corresponding to a first point in the reconstructed point cloud, and to filter the K2 target points corresponding to the first point according to the second filter coefficient to determine the filtered point cloud; where the first point represents a point in the reconstructed point cloud, the K2 target points corresponding to the first point include the first point and (K2-1) neighboring points adjacent to the first point in the reconstructed point cloud, and K2 is an integer greater than 1.
- the second determination unit 3203 is further configured to search for a preset number of candidate points in the reconstructed point cloud by a K-nearest-neighbor search based on the first point in the reconstructed point cloud; to calculate the distance values between the first point and the preset number of candidate points respectively, and determine the relatively small (K2-1) distance values among the preset number of distance values obtained; and to determine (K2-1) neighboring points from the candidate points corresponding to the (K2-1) distance values, and determine the first point and the (K2-1) neighboring points as the K2 target points corresponding to the first point.
- the second filtering unit 3202 is further configured to filter the K2 target points corresponding to the first point in the reconstructed point cloud according to the second filter coefficient to determine the filtered value of the attribute information of the first point in the reconstructed point cloud; and, after the filtered value of the attribute information of at least one point in the reconstructed point cloud has been determined, to determine the filtered point cloud according to the filtered value of the attribute information of the at least one point.
- the attribute information includes a color component.
- the color component includes at least one of the following: a first color component, a second color component, and a third color component; where, if the color component conforms to the RGB color space, the first color component, the second color component, and the third color component are determined to be, in order, the R component, the G component, and the B component; and if the color component conforms to the YUV color space, the first color component, the second color component, and the third color component are determined to be, in order, the Y component, the U component, and the V component.
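The text only fixes the component order per color space; it does not specify a conversion matrix. As background, a common RGB-to-YUV mapping (BT.601, an assumption on my part, shown purely for illustration) looks like this:

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB sample to YUV.

    The application itself only names the component order per color space;
    the BT.601 coefficients below are an assumption, shown as one common
    choice rather than the mapping mandated by the text."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma
    u = 0.492 * (b - y)                    # blue-difference chroma
    v = 0.877 * (r - y)                    # red-difference chroma
    return y, u, v
```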
- the decoding unit 3201 is further configured to parse the code stream to determine the first filter identification information of a to-be-processed component; where the first filter identification information of the to-be-processed component indicates whether the to-be-processed component of the attribute information of the reconstructed slice is to be filtered.
- the second determination unit 3203 is further configured to determine that the to-be-processed component of the attribute information of the reconstructed slice is to be filtered if the value of the first filter identification information of the to-be-processed component is a first value, and to determine that the to-be-processed component of the attribute information of the reconstructed slice is not to be filtered if the value is a second value.
- the decoding unit 3201 is further configured to parse the code stream to determine the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component; where the first filter identification information of the first color component indicates whether the first color component of the attribute information of the reconstructed slice is to be filtered, the first filter identification information of the second color component indicates whether the second color component of the attribute information of the reconstructed slice is to be filtered, and the first filter identification information of the third color component indicates whether the third color component of the attribute information of the reconstructed slice is to be filtered.
- the second determination unit 3203 is further configured to determine that the first filter identification information indicates that the reconstructed slice is to be filtered if at least one of the first filter identification information of the first color component, the first filter identification information of the second color component, and the first filter identification information of the third color component is the first value; and to determine that the first filter identification information indicates that the reconstructed slice is not to be filtered if all of them are the second value.
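The rule above combines the three per-component flags into one slice-level decision: filter if any component flag carries the first value, don't filter if all carry the second value. A minimal sketch, assuming flag values 1 (first value) and 0 (second value), which the text itself does not fix:

```python
def slice_filter_flag(flag_c1, flag_c2, flag_c3, first_value=1, second_value=0):
    """Combine per-component first-filter flags into the slice-level flag.

    The slice is filtered iff at least one component flag equals the first
    value, and not filtered iff all equal the second value. The concrete
    values 1/0 are illustrative defaults, not mandated by the text."""
    flags = (flag_c1, flag_c2, flag_c3)
    if any(f == first_value for f in flags):
        return True
    if all(f == second_value for f in flags):
        return False
    raise ValueError("inconsistent component flag values")
```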
- the decoding unit 3201 is further configured to parse the code stream to determine the second filter identification information of the to-be-processed component; where the second filter identification information of the to-be-processed component indicates whether the to-be-processed component of the attribute information of the reconstructed point cloud is to be filtered.
- the second determination unit 3203 is further configured to determine that the to-be-processed component of the attribute information of the reconstructed point cloud is to be filtered if the value of the second filter identification information of the to-be-processed component is a third value, and to determine that the to-be-processed component of the attribute information of the reconstructed point cloud is not to be filtered if the value is a fourth value.
- the decoding unit 3201 is further configured to parse the code stream to determine the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component; where the second filter identification information of the first color component indicates whether the first color component of the attribute information of the reconstructed point cloud is to be filtered, the second filter identification information of the second color component indicates whether the second color component of the attribute information of the reconstructed point cloud is to be filtered, and the second filter identification information of the third color component indicates whether the third color component of the attribute information of the reconstructed point cloud is to be filtered.
- the second determination unit 3203 is further configured to determine that the second filter identification information indicates that the reconstructed point cloud is to be filtered if at least one of the second filter identification information of the first color component, the second filter identification information of the second color component, and the second filter identification information of the third color component is the third value; and to determine that the second filter identification information indicates that the reconstructed point cloud is not to be filtered if all of them are the fourth value.
- the decoding unit 3201 is further configured to parse the code stream to determine the third filter identification information, and, if the third filter identification information indicates that the reconstructed aggregated slice is to be filtered, to parse the code stream to determine the third filter coefficient; where the reconstructed aggregated slice is obtained by aggregating n reconstructed slices, and n is an integer greater than 1;
- the second filtering unit 3202 is further configured to filter the reconstructed aggregated slice according to the third filter coefficient, and determine a filtered aggregated slice corresponding to the reconstructed aggregated slice.
- the second determination unit 3203 is further configured to determine at least one filtered slice after the at least one reconstructed slice has been filtered respectively, and to aggregate the at least one filtered slice to determine the first filtered point cloud;
- the decoding unit 3201 is further configured to parse the code stream to determine the fourth filter identification information, and, if the fourth filter identification information indicates that the first filtered point cloud is to be filtered, to parse the code stream to determine the fourth filter coefficient;
- the second filtering unit 3202 is also configured to filter the first filtered point cloud according to the fourth filter coefficient, and determine the second filtered point cloud corresponding to the first filtered point cloud.
- the "unit" may be part of a circuit, part of a processor, part of a program or software, etc., and of course may also be a module, or may be non-modular.
- each component in this embodiment can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
- the above integrated units can be implemented in the form of hardware or software function modules.
- if the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium.
- this embodiment provides a computer storage medium for use in the decoder 320.
- the computer storage medium stores a computer program which, when executed by the second processor, implements the method described in any one of the foregoing embodiments.
- the decoder 320 may include: a second communication interface 3301, a second memory 3302, and a second processor 3303; the components are coupled together through a second bus system 3304. It can be understood that the second bus system 3304 is used to implement connection and communication between these components. In addition to the data bus, the second bus system 3304 also includes a power bus, a control bus, and a status signal bus. However, for clarity of explanation, the various buses are all labeled as the second bus system 3304 in FIG. 20. Among these components:
- the second communication interface 3301 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
- the second memory 3302 is used to store computer programs that can run on the second processor 3303;
- the second processor 3303 is configured to, when running the computer program, parse the code stream to determine the first filter coefficient.
- the second processor 3303 is further configured to perform the method described in any one of the preceding embodiments when running the computer program.
- This embodiment provides a decoder.
- in the decoder, when the reconstructed point cloud meets the preset condition, the encoding end performs filtering based on the divided reconstructed slices and, after determining that a reconstructed slice needs to be filtered, transmits the corresponding filter coefficients to the decoder; accordingly, the decoder can directly decode the filter coefficients and then use them to filter the reconstructed slice. In this way, filtering based on reconstructed slices not only avoids memory overflow caused by limited memory resources when processing large-scale point clouds, making the scheme highly adaptable; it can also optimize the reconstructed point cloud, which improves point cloud quality, saves bit rate, and improves encoding and decoding efficiency.
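The decoder-side control flow described above (parse the flag; only when it says "filter" parse the coefficients and apply them, otherwise pass the reconstructed slice through unchanged) can be sketched as follows. The three callables are placeholders for bitstream parsing and the filter itself, not real API names.

```python
def decode_slice_filtering(parse_flag, parse_coeffs, apply_filter, rec_slice):
    """Decoder-side flow for one reconstructed slice.

    `parse_flag`, `parse_coeffs`, and `apply_filter` stand in for parsing
    the first filter identification information, parsing the first filter
    coefficients, and running the filter; they are illustrative stubs."""
    if parse_flag():
        # Flag says filter: coefficients are present in the code stream.
        coeffs = parse_coeffs()
        return apply_filter(rec_slice, coeffs)
    # Flag says don't filter: the reconstructed slice is the filtered slice.
    return rec_slice
```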
- FIG. 21 shows a schematic structural diagram of a coding and decoding system provided by an embodiment of the present application.
- the encoding and decoding system 340 may include an encoder 3401 and a decoder 3402.
- the encoder 3401 may be the encoder described in any of the preceding embodiments
- the decoder 3402 may be the decoder described in any of the preceding embodiments.
- for the encoder 3401, when the reconstructed point cloud meets the preset condition, the encoder 3401 performs filtering based on the divided reconstructed slices and, after determining that a reconstructed slice needs to be filtered, transmits the corresponding filter coefficient to the decoder; accordingly, the decoder 3402 can directly decode the filter coefficient and then use it to filter the reconstructed slice. In this way, filtering based on reconstructed slices not only avoids memory overflow caused by limited memory resources when processing large-scale point clouds, making the scheme highly adaptable; it can also optimize the reconstructed point cloud, which improves point cloud quality, saves bit rate, and improves encoding and decoding performance.
- on the encoder side, the reconstructed slice of the reconstructed point cloud is determined; when the reconstructed point cloud meets the preset condition, the first filter coefficient is determined according to the reconstructed slice and the initial slice corresponding to the reconstructed slice; the reconstructed slice is filtered according to the first filter coefficient to determine the filtered slice corresponding to the reconstructed slice; the first filter identification information is determined based on the filtered slice, where the first filter identification information indicates whether the reconstructed slice is to be filtered; the first filter identification information is encoded; if the first filter identification information indicates that the reconstructed slice is to be filtered, the first filter coefficient is encoded; and the resulting encoded bits are written into the code stream.
- on the decoder side, the code stream is parsed to determine the first filter identification information; if the first filter identification information indicates that the reconstructed slice of the reconstructed point cloud is to be filtered, the code stream is parsed to determine the first filter coefficient; and the reconstructed slice is filtered according to the first filter coefficient to determine the filtered slice corresponding to the reconstructed slice.
- in this way, the encoding end performs filtering based on the divided reconstructed slices, and only after determining that a reconstructed slice needs to be filtered are the corresponding filter coefficients passed to the decoder; accordingly, the decoder can directly decode the filter coefficients and then use them to filter the reconstructed slices. Filtering based on reconstructed slices not only avoids memory overflow caused by limited memory resources when processing large point clouds, making the scheme highly universal; it can also optimize the reconstructed point cloud, which improves point cloud quality, saves bit rate, and improves encoding and decoding efficiency.
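The encoder's decision of whether to signal filtering boils down to comparing a cost before and after filtering (e.g. a rate-distortion cost) and setting the flag accordingly. A minimal sketch, assuming 1/0 as the first/second flag values, which the text leaves abstract:

```python
def decide_filter_flag(cost_before, cost_after):
    """First-filter flag from a cost comparison: signal filtering (first
    value, here 1) when the post-filter cost is lower, otherwise signal no
    filtering (second value, here 0). The 1/0 encoding is illustrative."""
    return 1 if cost_after < cost_before else 0
```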
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (69)
- An encoding method, applied to an encoder, the method comprising: determining a reconstructed slice of a reconstructed point cloud; when the reconstructed point cloud meets a preset condition, determining a first filter coefficient according to the reconstructed slice and an initial slice corresponding to the reconstructed slice; filtering the reconstructed slice according to the first filter coefficient to determine a filtered slice corresponding to the reconstructed slice; determining first filter identification information according to the filtered slice, wherein the first filter identification information indicates whether the reconstructed slice is to be filtered; encoding the first filter identification information; if the first filter identification information indicates that the reconstructed slice is to be filtered, encoding the first filter coefficient; and writing the resulting encoded bits into a code stream.
- The method according to claim 1, wherein the method further comprises: determining parameter information of the reconstructed point cloud; and determining, according to the parameter information of the reconstructed point cloud, whether the reconstructed point cloud meets the preset condition.
- The method according to claim 2, wherein determining the parameter information of the reconstructed point cloud comprises: determining the number of points in the reconstructed point cloud; and correspondingly, the method further comprises: if the number of points in the reconstructed point cloud is greater than or equal to a preset threshold, determining that the reconstructed point cloud meets the preset condition; or, if the number of points in the reconstructed point cloud is less than the preset threshold, determining that the reconstructed point cloud does not meet the preset condition.
- The method according to claim 3, wherein determining the number of points in the reconstructed point cloud comprises: performing geometry quantization on an initial point cloud to obtain a quantized point cloud; and determining the number of points in the reconstructed point cloud according to the number of points in the quantized point cloud.
- The method according to claim 1, wherein the method further comprises: slicing an initial point cloud to obtain at least one initial slice; encoding and reconstructing the at least one initial slice in turn to obtain at least one reconstructed slice; and aggregating the at least one reconstructed slice to obtain the reconstructed point cloud.
- The method according to claim 1, wherein filtering the reconstructed slice according to the first filter coefficient to determine the filtered slice corresponding to the reconstructed slice comprises: determining K1 target points corresponding to a first point in the reconstructed slice; and filtering the K1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient to determine the filtered slice; wherein the first point represents a point in the reconstructed slice, the K1 target points corresponding to the first point include the first point and (K1-1) neighboring points adjacent to the first point in the reconstructed slice, and K1 is an integer greater than 1.
- The method according to claim 6, wherein determining the K1 target points corresponding to the first point in the reconstructed slice comprises: searching for a preset number of candidate points in the reconstructed slice by a K-nearest-neighbor search based on the first point in the reconstructed slice; calculating the distance values between the first point and the preset number of candidate points respectively, and determining the relatively small (K1-1) distance values among the preset number of distance values obtained; and determining (K1-1) neighboring points according to the candidate points corresponding to the (K1-1) distance values, and determining the first point and the (K1-1) neighboring points as the K1 target points corresponding to the first point.
- The method according to claim 6, wherein filtering the K1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient to determine the filtered slice comprises: filtering the K1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient to determine the filtered value of the attribute information of the first point in the reconstructed slice; and after the filtered value of the attribute information of at least one point in the reconstructed slice has been determined, determining the filtered slice according to the filtered value of the attribute information of the at least one point in the reconstructed slice.
- The method according to claim 1, wherein determining the first filter coefficient according to the reconstructed slice and the initial slice corresponding to the reconstructed slice comprises: determining a first attribute parameter according to the original values of the attribute information of the points in the initial slice; determining a second attribute parameter according to the reconstructed values of the attribute information of the K1 target points corresponding to the points in the reconstructed slice; and determining the first filter coefficient based on the first attribute parameter and the second attribute parameter.
- The method according to claim 9, wherein determining the first filter coefficient based on the first attribute parameter and the second attribute parameter comprises: determining a cross-correlation parameter according to the first attribute parameter and the second attribute parameter; determining an autocorrelation parameter according to the second attribute parameter; and performing coefficient calculation according to the cross-correlation parameter and the autocorrelation parameter to obtain the first filter coefficient.
- The method according to claim 1, wherein determining the first filter identification information according to the filtered slice comprises: determining a first cost value of a to-be-processed component of the attribute information of the reconstructed slice, and determining a second cost value of the to-be-processed component of the attribute information of the filtered slice; determining the first filter identification information of the to-be-processed component according to the first cost value and the second cost value; and obtaining the first filter identification information according to the first filter identification information of the to-be-processed component.
- The method according to claim 11, wherein determining the first filter identification information of the to-be-processed component according to the first cost value and the second cost value comprises: if the second cost value is less than the first cost value, determining that the value of the first filter identification information of the to-be-processed component is a first value; and if the second cost value is greater than the first cost value, determining that the value of the first filter identification information of the to-be-processed component is a second value.
- The method according to claim 11, wherein determining the first cost value of the to-be-processed component of the attribute information of the reconstructed slice comprises: performing cost calculation on the to-be-processed component of the attribute information of the reconstructed slice in a rate-distortion cost manner, and taking the obtained first rate-distortion value as the first cost value; and correspondingly, determining the second cost value of the to-be-processed component of the attribute information of the filtered slice comprises: performing cost calculation on the to-be-processed component of the attribute information of the filtered slice in the rate-distortion cost manner, and taking the obtained second rate-distortion value as the second cost value.
- The method according to claim 11, wherein determining the first cost value of the to-be-processed component of the attribute information of the reconstructed slice comprises: performing cost calculation on the to-be-processed component of the attribute information of the reconstructed slice in a rate-distortion cost manner to obtain a first rate-distortion value; performing performance calculation on the to-be-processed component of the attribute information of the reconstructed slice with a preset performance metric to obtain a first performance value; and determining the first cost value according to the first rate-distortion value and the first performance value; and correspondingly, determining the second cost value of the to-be-processed component of the attribute information of the filtered slice comprises: performing cost calculation on the to-be-processed component of the attribute information of the filtered slice in the rate-distortion cost manner to obtain a second rate-distortion value; performing performance calculation on the to-be-processed component of the attribute information of the filtered slice with the preset performance metric to obtain a second performance value; and determining the second cost value according to the second rate-distortion value and the second performance value.
- The method according to claim 14, wherein determining the first filter identification information of the to-be-processed component according to the first cost value and the second cost value comprises: if the second performance value is greater than the first performance value and the second rate-distortion value is less than the first rate-distortion value, determining that the value of the first filter identification information of the to-be-processed component is a first value; and if the second performance value is less than the first performance value, determining that the value of the first filter identification information of the to-be-processed component is a second value.
- The method according to claim 14, wherein determining the first filter identification information of the to-be-processed component according to the first cost value and the second cost value comprises: if the second performance value is greater than the first performance value and the second rate-distortion value is less than the first rate-distortion value, determining that the value of the first filter identification information of the to-be-processed component is a first value; and if the second rate-distortion value is greater than the first rate-distortion value, determining that the value of the first filter identification information of the to-be-processed component is a second value.
- The method according to claim 12, 15 or 16, wherein the method further comprises: if the value of the first filter identification information of the to-be-processed component is the first value, determining that the to-be-processed component of the attribute information of the reconstructed slice is to be filtered; and if the value of the first filter identification information of the to-be-processed component is the second value, determining that the to-be-processed component of the attribute information of the reconstructed slice is not to be filtered.
- The method according to any one of claims 8, 9 and 11, wherein the attribute information includes a color component, and the color component includes at least one of the following: a first color component, a second color component and a third color component; wherein, if the color component conforms to the RGB color space, the first color component, the second color component and the third color component are determined to be, in order, the R component, the G component and the B component; and if the color component conforms to the YUV color space, the first color component, the second color component and the third color component are determined to be, in order, the Y component, the U component and the V component.
- The method according to claim 11, wherein obtaining the first filter identification information according to the first filter identification information of the to-be-processed component comprises: when the to-be-processed component is a color component, determining the first filter identification information of the first color component, the first filter identification information of the second color component and the first filter identification information of the third color component; and obtaining the first filter identification information according to the first filter identification information of the first color component, the first filter identification information of the second color component and the first filter identification information of the third color component.
- The method according to claim 19, wherein the method further comprises: if at least one of the first filter identification information of the first color component, the first filter identification information of the second color component and the first filter identification information of the third color component is the first value, determining that the first filter identification information indicates that the reconstructed slice is to be filtered; and if all of the first filter identification information of the first color component, the first filter identification information of the second color component and the first filter identification information of the third color component are the second value, determining that the first filter identification information indicates that the reconstructed slice is not to be filtered.
- The method according to claim 20, wherein the method further comprises: if the first filter identification information indicates that the reconstructed slice is not to be filtered, not encoding the first filter coefficient.
- The method according to claim 1, wherein the method further comprises: when the reconstructed point cloud does not meet the preset condition, determining a second filter coefficient according to the reconstructed point cloud and an initial point cloud corresponding to the reconstructed point cloud; filtering the reconstructed point cloud according to the second filter coefficient to determine a filtered point cloud corresponding to the reconstructed point cloud; determining second filter identification information according to the filtered point cloud, wherein the second filter identification information indicates whether the reconstructed point cloud is to be filtered; encoding the second filter identification information; if the second filter identification information indicates that the reconstructed point cloud is to be filtered, encoding the second filter coefficient; and writing the resulting encoded bits into the code stream.
- The method according to claim 22, wherein filtering the reconstructed point cloud according to the second filter coefficient to determine the filtered point cloud corresponding to the reconstructed point cloud comprises: determining K2 target points corresponding to a first point in the reconstructed point cloud; and filtering the K2 target points corresponding to the first point in the reconstructed point cloud according to the second filter coefficient to determine the filtered point cloud; wherein the first point represents a point in the reconstructed point cloud, the K2 target points corresponding to the first point include the first point and (K2-1) neighboring points adjacent to the first point in the reconstructed point cloud, and K2 is an integer greater than 1.
- The method according to claim 23, wherein determining the K2 target points corresponding to the first point in the reconstructed point cloud comprises: searching for a preset number of candidate points in the reconstructed point cloud by a K-nearest-neighbor search based on the first point in the reconstructed point cloud; calculating the distance values between the first point and the preset number of candidate points respectively, and determining the relatively small (K2-1) distance values among the preset number of distance values obtained; and determining (K2-1) neighboring points according to the candidate points corresponding to the (K2-1) distance values, and determining the first point and the (K2-1) neighboring points as the K2 target points corresponding to the first point.
- The method according to claim 23, wherein filtering the K2 target points corresponding to the first point in the reconstructed point cloud according to the second filter coefficient to determine the filtered point cloud comprises: filtering the K2 target points corresponding to the first point in the reconstructed point cloud according to the second filter coefficient to determine the filtered value of the attribute information of the first point in the reconstructed point cloud; and after the filtered value of the attribute information of at least one point in the reconstructed point cloud has been determined, determining the filtered point cloud according to the filtered value of the attribute information of the at least one point in the reconstructed point cloud.
- The method according to claim 22, wherein determining the second filter coefficient according to the reconstructed point cloud and the initial point cloud corresponding to the reconstructed point cloud comprises: determining a third attribute parameter according to the original values of the attribute information of the points in the initial point cloud; determining a fourth attribute parameter according to the reconstructed values of the attribute information of the K2 target points corresponding to the points in the reconstructed point cloud; and determining the second filter coefficient based on the third attribute parameter and the fourth attribute parameter.
- The method according to claim 26, wherein determining the second filter coefficient based on the third attribute parameter and the fourth attribute parameter comprises: determining a cross-correlation parameter according to the third attribute parameter and the fourth attribute parameter; determining an autocorrelation parameter according to the fourth attribute parameter; and performing coefficient calculation according to the cross-correlation parameter and the autocorrelation parameter to obtain the second filter coefficient.
- The method according to claim 22, wherein determining the second filter identification information according to the filtered point cloud comprises: determining a third cost value of a to-be-processed component of the attribute information of the reconstructed point cloud, and determining a fourth cost value of the to-be-processed component of the attribute information of the filtered point cloud; determining the second filter identification information of the to-be-processed component according to the third cost value and the fourth cost value; and obtaining the second filter identification information according to the second filter identification information of the to-be-processed component.
- The method according to claim 28, wherein determining the second filter identification information of the to-be-processed component according to the third cost value and the fourth cost value comprises: if the fourth cost value is less than the third cost value, determining that the value of the second filter identification information of the to-be-processed component is a third value; and if the fourth cost value is greater than the third cost value, determining that the value of the second filter identification information of the to-be-processed component is a fourth value.
- The method according to claim 29, wherein the method further comprises: if the value of the second filter identification information of the to-be-processed component is the third value, determining that the to-be-processed component of the attribute information of the reconstructed point cloud is to be filtered; and if the value of the second filter identification information of the to-be-processed component is the fourth value, determining that the to-be-processed component of the attribute information of the reconstructed point cloud is not to be filtered.
- The method according to claim 28, wherein obtaining the second filter identification information according to the second filter identification information of the to-be-processed component comprises: when the to-be-processed component is a color component, determining the second filter identification information of the first color component, the second filter identification information of the second color component and the second filter identification information of the third color component; and obtaining the second filter identification information according to the second filter identification information of the first color component, the second filter identification information of the second color component and the second filter identification information of the third color component.
- The method according to claim 31, wherein the method further comprises: if at least one of the second filter identification information of the first color component, the second filter identification information of the second color component and the second filter identification information of the third color component is the third value, determining that the second filter identification information indicates that the reconstructed point cloud is to be filtered; and if all of the second filter identification information of the first color component, the second filter identification information of the second color component and the second filter identification information of the third color component are the fourth value, determining that the second filter identification information indicates that the reconstructed point cloud is not to be filtered.
- The method according to claim 32, wherein the method further comprises: if the second filter identification information indicates that the reconstructed point cloud is not to be filtered, not encoding the second filter coefficient.
- The method according to claim 5, wherein the method further comprises: if the at least one initial slice is a plurality of initial slices, determining n initial slices from the plurality of initial slices and aggregating the n initial slices to obtain an initial aggregated slice; determining a reconstructed aggregated slice corresponding to the initial aggregated slice, wherein the reconstructed aggregated slice is obtained by aggregating the reconstructed slices corresponding to the n initial slices, and n is an integer greater than 1; determining a third filter coefficient according to the initial aggregated slice and the reconstructed aggregated slice; filtering the reconstructed aggregated slice according to the third filter coefficient to determine a filtered aggregated slice corresponding to the reconstructed aggregated slice; determining third filter identification information according to the filtered aggregated slice, wherein the third filter identification information indicates whether the reconstructed aggregated slice is to be filtered; encoding the third filter identification information; if the third filter identification information indicates that the reconstructed aggregated slice is to be filtered, encoding the third filter coefficient; and writing the resulting encoded bits into the code stream.
- The method according to claim 5, wherein the method further comprises: after the at least one reconstructed slice corresponding to the initial point cloud has been filtered respectively, determining at least one filtered slice; aggregating the at least one filtered slice to determine a first filtered point cloud; determining a fourth filter coefficient according to the initial point cloud and the first filtered point cloud; filtering the first filtered point cloud according to the fourth filter coefficient to determine a second filtered point cloud corresponding to the first filtered point cloud; determining fourth filter identification information according to the second filtered point cloud, wherein the fourth filter identification information indicates whether the first filtered point cloud is to be filtered; encoding the fourth filter identification information; if the fourth filter identification information indicates that the first filtered point cloud is to be filtered, encoding the fourth filter coefficient; and writing the resulting encoded bits into the code stream.
- The method according to claim 1, wherein the method further comprises: determining predicted values of the attribute information of the points in the initial slice; determining residual values of the attribute information of the points in the initial slice according to the original values and the predicted values of the attribute information of the points in the initial slice; and encoding the residual values of the attribute information of the points in the initial slice and writing the resulting encoded bits into the code stream.
- A code stream, the code stream containing parameter information for determining a decoded point cloud, wherein the parameter information includes at least one of the following: residual values of the attribute information of the points in an initial slice, first filter identification information, and a first filter coefficient.
- A decoding method, applied to a decoder, the method comprising: parsing a code stream to determine first filter identification information; if the first filter identification information indicates that a reconstructed slice of a reconstructed point cloud is to be filtered, parsing the code stream to determine a first filter coefficient; and filtering the reconstructed slice according to the first filter coefficient to determine a filtered slice corresponding to the reconstructed slice.
- The method according to claim 38, wherein the method further comprises: determining parameter information of the reconstructed point cloud; and determining, according to the parameter information of the reconstructed point cloud, whether the reconstructed point cloud meets a preset condition; and correspondingly, parsing the code stream to determine the first filter identification information comprises: when the reconstructed point cloud meets the preset condition, parsing the code stream to determine the first filter identification information.
- The method according to claim 39, wherein determining the parameter information of the reconstructed point cloud comprises: determining the number of points in the reconstructed point cloud; and correspondingly, the method further comprises: if the number of points in the reconstructed point cloud is greater than or equal to a preset threshold, determining that the reconstructed point cloud meets the preset condition; or, if the number of points in the reconstructed point cloud is less than the preset threshold, determining that the reconstructed point cloud does not meet the preset condition.
- The method according to claim 38, wherein the method further comprises: parsing the code stream to determine residual values of the attribute information of the points in an initial slice; after predicted values of the attribute information of the points in the initial slice have been determined, determining reconstructed values of the attribute information of the points in the initial slice according to the predicted values and the residual values; and determining the reconstructed slice based on the reconstructed values of the attribute information of the points in the initial slice.
- The method according to claim 41, wherein the method further comprises: after at least one reconstructed slice has been determined, aggregating the at least one reconstructed slice to determine the reconstructed point cloud.
- The method according to claim 38, wherein filtering the reconstructed slice according to the first filter coefficient to determine the filtered slice corresponding to the reconstructed slice comprises: determining K1 target points corresponding to a first point in the reconstructed slice; and filtering the K1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient to determine the filtered slice; wherein the first point represents a point in the reconstructed slice, the K1 target points corresponding to the first point include the first point and (K1-1) neighboring points adjacent to the first point in the reconstructed slice, and K1 is an integer greater than 1.
- The method according to claim 43, wherein determining the K1 target points corresponding to the first point in the reconstructed slice comprises: searching for a preset number of candidate points in the reconstructed slice by a K-nearest-neighbor search based on the first point in the reconstructed slice; calculating the distance values between the first point and the preset number of candidate points respectively, and determining the relatively small (K1-1) distance values among the preset number of distance values obtained; and determining (K1-1) neighboring points according to the candidate points corresponding to the (K1-1) distance values, and determining the first point and the (K1-1) neighboring points as the K1 target points corresponding to the first point.
- The method according to claim 43, wherein filtering the K1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient to determine the filtered slice comprises: filtering the K1 target points corresponding to the first point in the reconstructed slice according to the first filter coefficient to determine the filtered value of the attribute information of the first point in the reconstructed slice; and after the filtered value of the attribute information of at least one point in the reconstructed slice has been determined, determining the filtered slice according to the filtered value of the attribute information of the at least one point in the reconstructed slice.
- The method according to any one of claims 38 to 45, wherein the method further comprises: if the first filter identification information indicates that the reconstructed slice is not to be filtered, skipping the step of parsing the code stream to determine the first filter coefficient and directly taking the reconstructed slice as the filtered slice.
- The method according to claim 46, wherein the method further comprises: after at least one filtered slice of the reconstructed point cloud has been determined, aggregating the at least one filtered slice to determine a filtered point cloud.
- The method according to claim 42, wherein the method further comprises: parsing the code stream to determine second filter identification information; if the second filter identification information indicates that the reconstructed point cloud is to be filtered, parsing the code stream to determine a second filter coefficient; and filtering the reconstructed point cloud according to the second filter coefficient to determine a filtered point cloud corresponding to the reconstructed point cloud.
- The method according to claim 48, wherein parsing the code stream to determine the second filter identification information comprises: when the reconstructed point cloud does not meet the preset condition, parsing the code stream to determine the second filter identification information.
- The method according to claim 48, wherein the method further comprises: if the second filter identification information indicates that the reconstructed point cloud is not to be filtered, skipping the step of parsing the code stream to determine the second filter coefficient and directly taking the reconstructed point cloud as the filtered point cloud.
- The method according to claim 48, wherein filtering the reconstructed point cloud according to the second filter coefficient to determine the filtered point cloud corresponding to the reconstructed point cloud comprises: determining K2 target points corresponding to a first point in the reconstructed point cloud; and filtering the K2 target points corresponding to the first point in the reconstructed point cloud according to the second filter coefficient to determine the filtered point cloud; wherein the first point represents a point in the reconstructed point cloud, the K2 target points corresponding to the first point include the first point and (K2-1) neighboring points adjacent to the first point in the reconstructed point cloud, and K2 is an integer greater than 1.
- The method according to claim 51, wherein determining the K2 target points corresponding to the first point in the reconstructed point cloud comprises: searching for a preset number of candidate points in the reconstructed point cloud by a K-nearest-neighbor search based on the first point in the reconstructed point cloud; calculating the distance values between the first point and the preset number of candidate points respectively, and determining the relatively small (K2-1) distance values among the preset number of distance values obtained; and determining (K2-1) neighboring points according to the candidate points corresponding to the (K2-1) distance values, and determining the first point and the (K2-1) neighboring points as the K2 target points corresponding to the first point.
- The method according to claim 52, wherein filtering the K2 target points corresponding to the first point in the reconstructed point cloud according to the second filter coefficient to determine the filtered point cloud comprises: filtering the K2 target points corresponding to the first point in the reconstructed point cloud according to the second filter coefficient to determine the filtered value of the attribute information of the first point in the reconstructed point cloud; and after the filtered value of the attribute information of at least one point in the reconstructed point cloud has been determined, determining the filtered point cloud according to the filtered value of the attribute information of the at least one point in the reconstructed point cloud.
- The method according to claim 45 or 53, wherein the attribute information includes a color component, and the color component includes at least one of the following: a first color component, a second color component and a third color component; wherein, if the color component conforms to the RGB color space, the first color component, the second color component and the third color component are determined to be, in order, the R component, the G component and the B component; and if the color component conforms to the YUV color space, the first color component, the second color component and the third color component are determined to be, in order, the Y component, the U component and the V component.
- The method according to claim 38, wherein parsing the code stream to determine the first filter identification information comprises: parsing the code stream to determine the first filter identification information of a to-be-processed component; wherein the first filter identification information of the to-be-processed component indicates whether the to-be-processed component of the attribute information of the reconstructed slice is to be filtered.
- The method according to claim 55, wherein the method further comprises: if the value of the first filter identification information of the to-be-processed component is a first value, determining that the to-be-processed component of the attribute information of the reconstructed slice is to be filtered; and if the value of the first filter identification information of the to-be-processed component is a second value, determining that the to-be-processed component of the attribute information of the reconstructed slice is not to be filtered.
- The method according to claim 55, wherein, when the to-be-processed component is a color component, parsing the code stream to determine the first filter identification information comprises: parsing the code stream to determine the first filter identification information of the first color component, the first filter identification information of the second color component and the first filter identification information of the third color component; wherein the first filter identification information of the first color component indicates whether the first color component of the attribute information of the reconstructed slice is to be filtered, the first filter identification information of the second color component indicates whether the second color component of the attribute information of the reconstructed slice is to be filtered, and the first filter identification information of the third color component indicates whether the third color component of the attribute information of the reconstructed slice is to be filtered.
- The method according to claim 57, wherein the method further comprises: if at least one of the first filter identification information of the first color component, the first filter identification information of the second color component and the first filter identification information of the third color component is the first value, determining that the first filter identification information indicates that the reconstructed slice is to be filtered; and if all of the first filter identification information of the first color component, the first filter identification information of the second color component and the first filter identification information of the third color component are the second value, determining that the first filter identification information indicates that the reconstructed slice is not to be filtered.
- The method according to claim 48, wherein parsing the code stream to determine the second filter identification information comprises: parsing the code stream to determine the second filter identification information of a to-be-processed component; wherein the second filter identification information of the to-be-processed component indicates whether the to-be-processed component of the attribute information of the reconstructed point cloud is to be filtered.
- The method according to claim 59, wherein the method further comprises: if the value of the second filter identification information of the to-be-processed component is a third value, determining that the to-be-processed component of the attribute information of the reconstructed point cloud is to be filtered; and if the value of the second filter identification information of the to-be-processed component is a fourth value, determining that the to-be-processed component of the attribute information of the reconstructed point cloud is not to be filtered.
- The method according to claim 59, wherein, when the to-be-processed component is a color component, parsing the code stream to determine the second filter identification information comprises: parsing the code stream to determine the second filter identification information of the first color component, the second filter identification information of the second color component and the second filter identification information of the third color component; wherein the second filter identification information of the first color component indicates whether the first color component of the attribute information of the reconstructed point cloud is to be filtered, the second filter identification information of the second color component indicates whether the second color component of the attribute information of the reconstructed point cloud is to be filtered, and the second filter identification information of the third color component indicates whether the third color component of the attribute information of the reconstructed point cloud is to be filtered.
- The method according to claim 61, wherein the method further comprises: if at least one of the second filter identification information of the first color component, the second filter identification information of the second color component and the second filter identification information of the third color component is the third value, determining that the second filter identification information indicates that the reconstructed point cloud is to be filtered; and if all of the second filter identification information of the first color component, the second filter identification information of the second color component and the second filter identification information of the third color component are the fourth value, determining that the second filter identification information indicates that the reconstructed point cloud is not to be filtered.
- The method according to claim 38, wherein the method further comprises: parsing the code stream to determine third filter identification information; if the third filter identification information indicates that a reconstructed aggregated slice is to be filtered, parsing the code stream to determine a third filter coefficient, wherein the reconstructed aggregated slice is obtained by aggregating n reconstructed slices, and n is an integer greater than 1; and filtering the reconstructed aggregated slice according to the third filter coefficient to determine a filtered aggregated slice corresponding to the reconstructed aggregated slice.
- The method according to claim 38, wherein the method further comprises: after at least one reconstructed slice has been filtered respectively, determining at least one filtered slice; aggregating the at least one filtered slice to determine a first filtered point cloud; parsing the code stream to determine fourth filter identification information; if the fourth filter identification information indicates that the first filtered point cloud is to be filtered, parsing the code stream to determine a fourth filter coefficient; and filtering the first filtered point cloud according to the fourth filter coefficient to determine a second filtered point cloud corresponding to the first filtered point cloud.
- An encoder, the encoder comprising a first determination unit, a first filtering unit and an encoding unit; wherein the first determination unit is configured to determine a reconstructed slice of a reconstructed point cloud, and, when the reconstructed point cloud meets a preset condition, to determine a first filter coefficient according to the reconstructed slice and an initial slice corresponding to the reconstructed slice; the first filtering unit is configured to filter the reconstructed slice according to the first filter coefficient to determine a filtered slice corresponding to the reconstructed slice; the first determination unit is further configured to determine first filter identification information according to the filtered slice, wherein the first filter identification information indicates whether the reconstructed slice is to be filtered; the encoding unit is configured to encode the first filter identification information and, if the first filter identification information indicates that the reconstructed slice is to be filtered, to encode the first filter coefficient; and the encoding unit is further configured to write the resulting encoded bits into a code stream.
- An encoder, the encoder comprising a first memory and a first processor; wherein the first memory is configured to store a computer program capable of running on the first processor; and the first processor is configured to perform the method according to any one of claims 1 to 36 when running the computer program.
- A decoder, the decoder comprising a decoding unit and a second filtering unit; wherein the decoding unit is configured to parse a code stream to determine first filter identification information and, if the first filter identification information indicates that a reconstructed slice of a reconstructed point cloud is to be filtered, to parse the code stream to determine a first filter coefficient; and the second filtering unit is configured to filter the reconstructed slice according to the first filter coefficient to determine a filtered slice corresponding to the reconstructed slice.
- A decoder, the decoder comprising a second memory and a second processor; wherein the second memory is configured to store a computer program capable of running on the second processor; and the second processor is configured to perform the method according to any one of claims 38 to 64 when running the computer program.
- A computer storage medium, wherein the computer storage medium stores a computer program which, when executed by a first processor, implements the method according to any one of claims 1 to 36, or, when executed by a second processor, implements the method according to any one of claims 38 to 64.
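The coefficient derivation in claims 9-10 and 26-27 (a cross-correlation parameter between original and reconstructed attributes, an autocorrelation parameter of the reconstructed attributes, then a coefficient calculation from the two) matches the normal-equation form of a Wiener filter. The sketch below is that least-squares reading, which is an interpretation on my part, not the normative derivation; it also assumes NumPy is available.

```python
import numpy as np

def wiener_coefficients(original, neighbors):
    """Derive K filter taps from attribute statistics.

    original:  shape (N,), original attribute values of the initial slice.
    neighbors: shape (N, K), reconstructed attribute values of each point's
               K target points (the point itself plus K-1 neighbors).
    Solves autocorr @ h = crosscorr, the Wiener/least-squares reading of
    the claimed cross-correlation/autocorrelation coefficient calculation."""
    autocorr = neighbors.T @ neighbors   # (K, K) autocorrelation parameter
    crosscorr = neighbors.T @ original   # (K,)  cross-correlation parameter
    return np.linalg.solve(autocorr, crosscorr)
```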
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2022/087255 WO2023201450A1 (zh) | 2022-04-17 | 2022-04-17 | 编解码方法、码流、编码器、解码器以及存储介质 |
| CN202280094071.5A CN118891875B (zh) | 2022-04-17 | 2022-04-17 | 编解码方法、码流、编码器、解码器以及存储介质 |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2022/087255 WO2023201450A1 (zh) | 2022-04-17 | 2022-04-17 | 编解码方法、码流、编码器、解码器以及存储介质 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023201450A1 true WO2023201450A1 (zh) | 2023-10-26 |
Family
ID=88418722
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2022/087255 Ceased WO2023201450A1 (zh) | 2022-04-17 | 2022-04-17 | 编解码方法、码流、编码器、解码器以及存储介质 |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN118891875B (zh) |
| WO (1) | WO2023201450A1 (zh) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2021180595A1 (en) * | 2020-03-11 | 2021-09-16 | Canon Kabushiki Kaisha | High level syntax for video coding and decoding |
| US20210329298A1 (en) * | 2020-04-08 | 2021-10-21 | Qualcomm Incorporated | Secondary component attribute coding for geometry-based point cloud compression (g-pcc) |
| CN113906757A (zh) * | 2019-03-12 | 2022-01-07 | 华为技术有限公司 | 用于点云数据的点云块数据单元编码和解码 |
| CN114073086A (zh) * | 2019-07-04 | 2022-02-18 | Lg 电子株式会社 | 点云数据处理设备和方法 |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108629835B (zh) * | 2017-03-20 | 2021-10-01 | 哈尔滨工业大学 | 基于高光谱、真彩图与点云互补的室内重建方法及系统 |
| JP7403128B2 (ja) * | 2018-03-28 | 2023-12-22 | パナソニックIpマネジメント株式会社 | 符号化装置、復号装置、符号化方法、および復号方法 |
| US10904579B2 (en) * | 2019-01-09 | 2021-01-26 | Tencent America LLC | Method and apparatus for annealing iterative geometry smoothing |
| CN112581457B (zh) * | 2020-12-23 | 2023-12-12 | 武汉理工大学 | 一种基于三维点云的管道内表面检测方法及装置 |
- 2022
- 2022-04-17 CN CN202280094071.5A patent/CN118891875B/zh active Active
- 2022-04-17 WO PCT/CN2022/087255 patent/WO2023201450A1/zh not_active Ceased
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113906757A (zh) * | 2019-03-12 | 2022-01-07 | 华为技术有限公司 | 用于点云数据的点云块数据单元编码和解码 |
| CN114073086A (zh) * | 2019-07-04 | 2022-02-18 | Lg 电子株式会社 | 点云数据处理设备和方法 |
| WO2021180595A1 (en) * | 2020-03-11 | 2021-09-16 | Canon Kabushiki Kaisha | High level syntax for video coding and decoding |
| US20210329298A1 (en) * | 2020-04-08 | 2021-10-21 | Qualcomm Incorporated | Secondary component attribute coding for geometry-based point cloud compression (g-pcc) |
Also Published As
| Publication number | Publication date |
|---|---|
| CN118891875A (zh) | 2024-11-01 |
| CN118891875B (zh) | 2025-10-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| TWI874661B (zh) | | Point cloud compression method, encoder, decoder, and storage medium |
| WO2023130333A1 (zh) | | Encoding and decoding method, encoder, decoder, and storage medium |
| JP7673198B2 (ja) | | Point cloud encoding method, point cloud decoding method, point cloud encoding and decoding system, point cloud encoder, and point cloud decoder |
| WO2021062772A1 (zh) | | Prediction method, encoder, decoder, and computer storage medium |
| US20240062427A1 | | Point cloud encoding and decoding method and decoder |
| TW202404359A (zh) | | Encoding and decoding method, encoder, decoder, and readable storage medium |
| WO2021062771A1 (zh) | | Color component prediction method, encoder, decoder, and computer storage medium |
| WO2023123467A1 (zh) | | Encoding and decoding method, code stream, encoder, decoder, and storage medium |
| US20240355003A1 | | Encoding and decoding methods, and bitstream |
| WO2023201450A1 (zh) | | Encoding and decoding method, code stream, encoder, decoder, and storage medium |
| WO2024159534A1 (zh) | | Encoding and decoding method, code stream, encoder, decoder, and storage medium |
| WO2022170511A1 (zh) | | Point cloud decoding method, decoder, and computer storage medium |
| US20250363672A1 | | Encoding method, decoding method, code stream, encoder, decoder, and storage medium |
| WO2025217923A1 (zh) | | Encoding and decoding method, code stream, codec, and storage medium |
| WO2024011739A1 (zh) | | Point cloud encoding and decoding method, codec, and computer storage medium |
| WO2024065406A1 (zh) | | Encoding and decoding method, code stream, encoder, decoder, and storage medium |
| WO2024065408A1 (zh) | | Encoding and decoding method, code stream, encoder, decoder, and storage medium |
| WO2025138048A1 (zh) | | Encoding and decoding method, code stream, encoder, decoder, and storage medium |
| WO2025217752A1 (zh) | | Point cloud encoding and decoding method, codec, code stream, and storage medium |
| WO2025010541A1 (zh) | | Encoding and decoding method, code stream, encoder, decoder, and storage medium |
| WO2025007353A9 (zh) | | Encoding and decoding method, code stream, encoder, decoder, and storage medium |
| WO2025152005A1 (zh) | | Encoding and decoding method and device, codec, code stream, equipment, and storage medium |
| WO2024082127A1 (zh) | | Encoding and decoding method, code stream, encoder, decoder, and storage medium |
| WO2024082152A1 (zh) | | Encoding and decoding method and device, codec, code stream, equipment, and storage medium |
| WO2025217849A1 (zh) | | Encoding and decoding method, point cloud encoder, point cloud decoder, and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22937691; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 202280094071.5; Country of ref document: CN |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 22937691; Country of ref document: EP; Kind code of ref document: A1 |
| | WWG | Wipo information: grant in national office | Ref document number: 202280094071.5; Country of ref document: CN |