
WO2025213421A1 - Encoding method, decoding method, bitstream, encoder, decoder, and recording medium - Google Patents

Encoding method, decoding method, bitstream, encoder, decoder, and recording medium

Info

Publication number
WO2025213421A1
WO2025213421A1 · PCT/CN2024/087319
Authority
WO
WIPO (PCT)
Prior art keywords
current block
block
transform
information
current
Prior art date
Legal status
Pending
Application number
PCT/CN2024/087319
Other languages
English (en)
Chinese (zh)
Inventor
马闯
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to PCT/CN2024/087319
Publication of WO2025213421A1
Legal status: Pending


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding

Definitions

  • the embodiments of the present application relate to the field of video coding and decoding technology, and in particular to a coding and decoding method, a bit stream, an encoder, a decoder, and a storage medium.
  • G-PCC Geometry-based Point Cloud Compression
  • PT Predicting Transform
  • LT Lifting Transform
  • RAHT Region Adaptive Hierarchical Transform
  • attribute information is adaptively transformed from the bottom up based on the octree's construction hierarchy. In existing technical solutions, transforming the reference image at both the encoder and the decoder adds time complexity, which increases the complexity of RAHT attribute transform encoding and decoding and reduces attribute coding efficiency.
  • the embodiments of the present application provide a coding and decoding method, a code stream, an encoder, a decoder, and a storage medium, which can reduce time complexity, improve the efficiency of attribute coding and decoding of point clouds, and thereby improve the coding and decoding performance of point clouds.
  • an embodiment of the present application provides a decoding method, applied to a decoder, the method comprising:
  • determining prediction mode identification information of a current block; when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain, determining a transform coefficient prediction value of the current block according to a reference block of the current block; decoding the bitstream to determine a transform coefficient residual value of the current block;
  • the transform coefficient of the current block is determined according to the transform coefficient prediction value of the current block and the transform coefficient residual value of the current block.
  • an embodiment of the present application provides an encoding method, applied to an encoder, the method comprising:
  • determining prediction mode identification information of a current block; when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain, determining a transform coefficient prediction value of the current block according to a reference block of the current block; determining a transform coefficient residual value of the current block according to the transform coefficient prediction value of the current block;
  • the transform coefficient residual value of the current block is coded, and the obtained coded bits are written into the bitstream.
  • an embodiment of the present application provides a code stream, which is generated by bit encoding based on information to be encoded; wherein the information to be encoded includes at least one of the following: a low-frequency coefficient value of a root node and a high-frequency coefficient residual value of a current block.
  • an encoder comprising a first determining unit, a first predicting unit, and an encoding unit, wherein:
  • a first determining unit configured to determine prediction mode identification information of a current block
  • a first prediction unit is configured to determine a transform coefficient prediction value of the current block according to a reference block of the current block when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain;
  • the first determining unit is further configured to determine a transform coefficient residual value of the current block according to the transform coefficient prediction value of the current block;
  • the encoding unit is configured to perform encoding processing on the transform coefficient residual value of the current block and write the obtained encoding bits into the bit stream.
  • an encoder comprising a first memory and a first processor; wherein,
  • a first memory for storing a computer program capable of running on the first processor
  • the first processor is configured to execute the method according to the second aspect when running a computer program.
  • an embodiment of the present application provides a decoder, comprising a second determination unit, a second prediction unit, and a decoding unit, wherein:
  • a second determining unit configured to determine prediction mode identification information of a current block
  • the second prediction unit is configured to, when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain, determine the transform coefficient prediction value of the current block according to the reference block of the current block;
  • a decoding unit configured to decode the bitstream and determine a transform coefficient residual value of a current block
  • the second determining unit is further configured to determine the transform coefficient of the current block according to the transform coefficient prediction value of the current block and the transform coefficient residual value of the current block.
  • an embodiment of the present application provides a decoder, the decoder comprising a second memory and a second processor; wherein,
  • a second memory for storing a computer program capable of running on the second processor
  • the second processor is configured to execute the method according to the first aspect when running a computer program.
  • an embodiment of the present application provides a computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, implements the method as described in the first aspect or the method as described in the second aspect.
  • an embodiment of the present application provides a computer program product, comprising a computer program or instructions, which, when executed by a processor, implements the method described in the first aspect, or implements the method described in the second aspect.
  • the embodiments of the present application provide a coding and decoding method, a code stream, an encoder, a decoder, and a storage medium.
  • the prediction mode identification information of the current block is determined; when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain, the transform coefficient prediction value of the current block is determined based on the reference block of the current block; the transform coefficient residual value of the current block is determined based on the transform coefficient prediction value of the current block; the transform coefficient residual value of the current block is encoded, and the obtained coded bits are written into the code stream.
  • the prediction mode identification information of the current block is determined; when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain, the transform coefficient prediction value of the current block is determined based on the reference block of the current block; the code stream is decoded to determine the transform coefficient residual value of the current block; and the transform coefficient of the current block is determined based on the transform coefficient prediction value of the current block and the transform coefficient residual value of the current block.
  • the transform coefficient prediction value of the current block can be determined based on the reference block of the current block.
  • the reference block can be transformed based on the slices divided by the reference point cloud, rather than based on the entire frame of the reference point cloud, which reduces the time complexity; and the transformation coefficients of the reference block can be stored in the memory module, so that when the transformation coefficients of the reference block are needed, they can be directly obtained from the memory module, further reducing the time complexity, thereby improving the attribute encoding and decoding efficiency of the point cloud, and thus improving the encoding and decoding performance of the point cloud.
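The coefficient reuse described above can be sketched as a simple memoization step. The sketch below is illustrative only: the cache key, the `transform_fn` callback, and all names are assumptions for exposition, not the structure of any actual G-PCC implementation.

```python
# Hypothetical memory module keyed by (slice id, block position).
_coeff_cache = {}

def reference_coefficients(slice_id, block_pos, transform_fn, attributes):
    """Return the reference block's transform coefficients, computing them
    at most once per (slice, block) and reusing the stored result after."""
    key = (slice_id, block_pos)
    if key not in _coeff_cache:
        # First request: transform the slice-local reference block and store.
        _coeff_cache[key] = transform_fn(attributes)
    # Later requests hit the memory module directly, avoiding re-transforming.
    return _coeff_cache[key]
```

The point of the sketch is only that repeated lookups of the same reference block cost one transform, which is where the claimed time-complexity reduction comes from.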
  • FIG1A is a schematic diagram of a three-dimensional point cloud image
  • FIG1B is a partially enlarged view of a three-dimensional point cloud image
  • FIG2A is a schematic diagram of six viewing angles of a point cloud image
  • FIG2B is a schematic diagram of a data storage format corresponding to a point cloud image
  • FIG3 is a schematic diagram of a network architecture for point cloud encoding and decoding
  • FIG4 is a schematic diagram of a composition framework of a G-PCC encoder
  • FIG5 is a schematic diagram of a composition framework of a G-PCC decoder
  • FIG6 is a schematic diagram of a RAHT transformation structure
  • FIG7 is a schematic diagram of a RAHT transformation process along the x, y, and z directions;
  • FIG8A is a schematic diagram of a RAHT forward transformation process
  • FIG8B is a schematic diagram of a RAHT inverse transformation process
  • FIG9 is a schematic diagram of the structure of a RAHT attribute inter-frame prediction coding
  • Figure 10 is a schematic diagram of the hierarchical structure of a point cloud attribute RAHT
  • FIG11 is a schematic diagram of a framework of RAHT inter-frame prediction transformation applied to the encoding end
  • FIG12 is a schematic diagram of a framework of RAHT inter-frame prediction transformation applied to a decoding end
  • FIG13 is a schematic diagram of the logical structure of a RAHT inter-frame prediction transformation
  • FIG14 is a flowchart diagram 1 of a decoding method provided in an embodiment of the present application.
  • FIG15 is a schematic diagram of the logical structure of an inter-frame prediction transformation provided by an embodiment of the present application.
  • FIG16 is a second flow chart of a decoding method provided in an embodiment of the present application.
  • FIG17 is a third flow chart of a decoding method provided in an embodiment of the present application.
  • FIG18 is a fourth flow chart of a decoding method provided in an embodiment of the present application.
  • FIG19 is a schematic diagram of a flow chart of an encoding method provided in an embodiment of the present application.
  • FIG20 is a schematic diagram of a framework of inter-frame prediction transformation applied to a decoding end provided by an embodiment of the present application
  • FIG21 is a schematic diagram of a framework of inter-frame prediction transformation applied to an encoding end according to an embodiment of the present application
  • FIG22 is a schematic diagram of the structure of an encoder provided in an embodiment of the present application.
  • FIG23 is a schematic diagram of a specific hardware structure of an encoder provided in an embodiment of the present application.
  • FIG24 is a schematic diagram of the structure of a decoder provided in an embodiment of the present application.
  • FIG25 is a schematic diagram of a specific hardware structure of a decoder provided in an embodiment of the present application.
  • FIG26 is a schematic diagram of the composition structure of a coding and decoding system provided in an embodiment of the present application.
  • the terms “first”, “second”, and “third” involved in the embodiments of the present application are only used to distinguish similar objects and do not represent a specific ordering of the objects. It can be understood that “first”, “second”, and “third” can be interchanged with a specific order or sequence where permitted, so that the embodiments of the present application described here can be implemented in an order other than that illustrated or described here.
  • Point Cloud is a three-dimensional representation of the surface of an object.
  • Point Cloud (data) on the surface of an object can be collected through acquisition equipment such as photoelectric radar, lidar, laser scanner, and multi-view camera.
  • a point cloud is a set of irregularly distributed discrete points in three-dimensional space that represent the spatial structure and surface properties of a three-dimensional object or scene. These points contain geometric information representing spatial location and attribute information representing the point cloud's appearance and texture.
  • Figure 1A shows a 3D point cloud image
  • Figure 1B shows a zoomed-in view of a 3D point cloud image. It can be seen that the point cloud surface is composed of densely distributed points.
  • Two-dimensional images have information at every pixel, distributed regularly, so there is no need to record location information. In a point cloud, however, points are distributed randomly and irregularly in three-dimensional space, so the spatial location of each point must be recorded to fully represent the point cloud. As with two-dimensional images, each location has corresponding attribute information, typically color and reflectance. Color information reflects the color of an object and is typically represented in RGB; reflectance information reflects the surface texture of an object and is typically represented as a reflectance value. Point cloud data thus typically consists of geometric information (x, y, z) representing the three-dimensional spatial location, together with attribute information such as color (r, g, b) and reflectance.
  • reflectance information can be one-dimensional reflectance (r); color information can be in any color space, or it can be three-dimensional color information, such as RGB.
  • R represents red (red)
  • G represents green (green)
  • B represents blue (blue).
  • the color information may be luminance and chrominance (YCbCr, YUV) information, where Y represents brightness (Luma), Cb (U) represents blue color difference, and Cr (V) represents red color difference.
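As a minimal illustration of the RGB-to-YCbCr conversion just mentioned, the sketch below uses BT.601 full-range coefficients; the actual matrix and value range depend on the codec configuration, so treat the constants as an assumption.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB sample to (Y, Cb, Cr).

    Uses the BT.601 full-range matrix with a 128 chroma offset; other
    standards (e.g. BT.709) use different coefficients.
    """
    y  =  0.299    * r + 0.587    * g + 0.114    * b          # luma
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0  # blue difference
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0  # red difference
    return y, cb, cr
```

A neutral gray input maps to Cb = Cr = 128, which is why the chroma planes of an achromatic point cloud compress so well.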
  • a point cloud obtained using laser measurement principles may include both 3D coordinate information and reflectivity information for each point.
  • a point cloud obtained using photogrammetry principles may include both 3D coordinate information and 3D color information for each point.
  • a point cloud obtained using a combination of laser measurement and photogrammetry principles may include both 3D coordinate information, reflectivity information, and 3D color information for each point.
  • Figures 2A and 2B show a point cloud image and its corresponding data storage format.
  • Figure 2A provides six viewing angles of the point cloud image
  • the data storage format in Figure 2B consists of a file header and data.
  • the header includes the data format, data representation type, the total number of points in the point cloud, and the content represented by the point cloud.
  • the point cloud is in ".ply" format, represented by ASCII code, with a total of 207,242 points.
  • Each point has 3D coordinate information (x, y, z) and 3D color information (r, g, b).
  • Point clouds can be divided into the following categories according to the acquisition method:
  • Static point cloud: the object is stationary, and the device acquiring the point cloud is also stationary;
  • Dynamic point cloud: the object is moving, but the device acquiring the point cloud is stationary;
  • Dynamically acquired point cloud: the device acquiring the point cloud is in motion.
  • point clouds can be divided into two categories according to their usage:
  • Category 1: machine perception point clouds, which can be used in scenarios such as autonomous navigation systems, real-time inspection systems, geographic information systems, visual sorting robots, and disaster relief robots;
  • Category 2: human eye perception point clouds, which can be used in point cloud application scenarios such as digital cultural heritage, free viewpoint broadcasting, 3D immersive communication, and 3D immersive interaction.
  • Point clouds can flexibly and conveniently express the spatial structure and surface properties of three-dimensional objects or scenes. Moreover, since point clouds are obtained by directly sampling real objects, they can provide a strong sense of reality while ensuring accuracy. Therefore, they are widely used, including virtual reality games, computer-aided design, geographic information systems, automatic navigation systems, digital cultural heritage, free viewpoint broadcasting, three-dimensional immersive remote presentation, and three-dimensional reconstruction of biological tissues and organs.
  • the main ways to collect point clouds are: computer generation, 3D laser scanning, 3D photogrammetry, etc.
  • Computers can generate point clouds of virtual three-dimensional objects and scenes. 3D laser scanning can generate point clouds of static, real-world 3D objects or scenes, producing millions of points per second.
  • 3D photogrammetry can generate point clouds of dynamic, real-world 3D objects or scenes, producing tens of millions of points per second.
  • Since a point cloud is a collection of a massive number of points, storing it not only consumes a lot of memory but is also not conducive to transmission. Network-layer bandwidth is insufficient to support direct transmission of uncompressed point clouds, so point clouds must be compressed.
  • point cloud coding frameworks that can compress point clouds can include the Geometry-based Point Cloud Compression (G-PCC) codec framework or the Video-based Point Cloud Compression (V-PCC) codec framework provided by the Moving Picture Experts Group (MPEG), or the AVS-PCC codec framework provided by AVS.
  • G-PCC codec framework can be used to compress the first type of static point clouds and the third type of dynamically acquired point clouds, and can be based on the Point Cloud Compression Test Platform (Test Model Compression 13, TMC13).
  • the V-PCC codec framework can be used to compress the second type of dynamic point clouds, and can be based on the Point Cloud Compression Test Platform (Test Model Compression 2, TMC2). Therefore, the G-PCC codec framework is also called the Point Cloud Codec TMC13, and the V-PCC codec framework is also called the Point Cloud Codec TMC2.
  • FIG3 is a schematic diagram of a network architecture of a point cloud encoding and decoding system provided by an embodiment of the present application.
  • the network architecture includes one or more electronic devices 13 to 1N and a communication network 01, wherein the electronic devices 13 to 1N can perform video interaction through the communication network 01.
  • the electronic device can be various types of devices with point cloud encoding and decoding functions.
  • the electronic device can include a mobile phone, a tablet computer, a personal computer, a personal digital assistant, a navigator, a digital phone, a video phone, a television, a sensor device, a server, etc., which is not limited by the embodiment of the present application.
  • the decoder or encoder in the embodiment of the present application can be the above-mentioned electronic device.
  • the electronic device in the embodiment of the present application has a point cloud encoding and decoding function, generally including a point cloud encoder (ie, encoder) and a point cloud decoder (ie, decoder).
  • the input point cloud is first divided into multiple slices through slice partitioning.
  • the geometric information of the point cloud and the attribute information corresponding to each point are encoded separately.
  • FIG 4 is a schematic diagram of the composition framework of a G-PCC encoder.
  • the G-PCC encoder is applied to a point cloud encoder.
  • the input point cloud is sliced, and then the slices are independently encoded.
  • the geometric information of the point cloud and the attribute information corresponding to the points in the point cloud are encoded separately.
  • the G-PCC encoder first encodes the geometric information.
  • the encoder performs coordinate transformation on the geometric information so that the entire point cloud is contained in a bounding box; quantization is then performed, which mainly serves a scaling purpose. Because quantization rounds coordinates, some points end up with identical geometric information; whether duplicate points are removed is decided based on parameters.
  • This process of quantization and removing duplicate points is also called voxelization.
  • the bounding box is partitioned based on an octree.
  • the encoding of geometric information is divided into two frameworks: octree-based and triangle facet-based.
  • the bounding box is divided into eight equal parts, and the placeholder bits of each sub-cube are recorded (where 1 indicates a non-empty sub-cube and 0 indicates an empty sub-cube).
  • the non-empty sub-cubes are further divided into eight equal parts, usually stopping when the resulting leaf node is a 1 ⁇ 1 ⁇ 1 unit cube.
  • the placeholder bits are intra-predicted using the spatial correlation between the node and its surrounding nodes.
  • CABAC encoding based on the context model is performed to generate a binary geometry bit stream, also known as the geometry code stream.
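The occupancy recording described above can be sketched as packing one bit per sub-cube into a byte. The sub-cube indexing convention below (x, y, z half-space tests packed into three bits) is an assumption chosen for illustration, not the bit order of any particular codec.

```python
def occupancy_byte(points, origin, half):
    """Compute the 8-bit occupancy code of an octree node.

    Bit i is set to 1 when sub-cube i contains at least one point
    (1 = non-empty sub-cube, 0 = empty sub-cube). Sub-cube index i packs
    the x, y, z half-space tests into three bits: i = xyz in binary.
    """
    code = 0
    ox, oy, oz = origin
    for x, y, z in points:
        i = ((x >= ox + half) << 2) | ((y >= oy + half) << 1) | (z >= oz + half)
        code |= 1 << i
    return code
```

Each level of the octree then emits one such byte per non-empty node, and it is these bytes that the context-model CABAC stage entropy-codes.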
  • octree partitioning is also performed first.
  • the difference is that the triangle-facet-based geometric information coding method does not need to divide the point cloud into unit cubes with a side length of 1 × 1 × 1; instead, the partitioning stops when the block side length is W.
  • based on the distribution of the point cloud in each block, the intersections (vertices) between the object surface and the twelve edges of the block are obtained.
  • the vertex coordinates of each block are encoded in sequence to generate a binary geometric bit stream, namely the geometric code stream.
  • After completing the geometric information encoding, the G-PCC encoder reconstructs the geometric information and uses the reconstructed geometric information to encode the attribute information of the point cloud.
  • the attribute encoding of the point cloud mainly encodes the color information of the point in the point cloud.
  • the encoder can perform color space conversion on the color information of the points. For example, when the color information in the input point cloud is represented in the RGB color space, the encoder can convert it from the RGB color space to the YUV color space. The point cloud is then recolored using the reconstructed geometric information, so that the not-yet-encoded attribute information corresponds to the reconstructed geometric information.
  • For color information encoding, there are two main transformation methods.
  • One method is the distance-based lifting transform, which relies on the level of detail (LOD) division; the other is to directly perform the region adaptive hierarchical transform (RAHT). Both methods transform the color information from the spatial domain to the frequency domain to obtain high-frequency and low-frequency coefficients. Finally, the coefficients are quantized, and arithmetic coding is performed on the quantized coefficients to generate a binary attribute bit stream, namely the attribute code stream.
  • Figure 5 is a schematic diagram of the composition framework of a G-PCC decoder.
  • the G-PCC decoder is applied to a point cloud decoder.
  • the geometric bit stream and attribute bit stream in the binary code stream are decoded independently.
  • the geometric information of the point cloud is obtained through arithmetic decoding-synthetic octree-surface fitting-reconstructed geometry-inverse coordinate transformation;
  • the attribute information of the point cloud is obtained through arithmetic decoding-inverse quantization-LOD-based inverse lifting or RAHT-based inverse transformation-inverse color conversion.
  • the original slice can be restored based on the geometric information and attribute information; then, after merging the slices, the three-dimensional image model of the input point cloud can be restored.
  • G-PCC attribute codec includes three attribute encoding methods: Predicting Transform (PT), Lifting Transform (LT), and Region Adaptive Hierarchical Transform (RAHT).
  • the first two predictively encode the point cloud based on the LOD generation order
  • RAHT adaptively transforms attribute information from bottom to top based on the octree construction hierarchy.
  • the Regional Adaptive Hierarchical Transform is a Haar wavelet transform that transforms point cloud attribute information from the spatial domain to the frequency domain, further reducing the correlation between point cloud attributes. Its main concept is to transform the nodes in each layer in the X, Y, and Z dimensions in a bottom-up manner according to the octree structure (as shown in Figure 7), and iterate until the root node of the octree.
  • the basic idea is to perform a wavelet transform based on the hierarchical structure of the octree, associate attribute information with the octree nodes, and recursively transform the attributes of occupied nodes under the same parent node in a bottom-up manner, transforming the nodes in each layer in the X, Y, and Z dimensions until the root node of the octree is reached.
  • the low-pass/low-frequency (DC) coefficients obtained after the transformation of the nodes in the same layer are passed to the nodes in the previous layer for further transformation, while all high-pass/high-frequency (AC) coefficients can be encoded using an arithmetic coder.
  • the DC coefficients (direct current components) of the transformed nodes at the same layer are passed to the previous layer for further transformation, while the AC coefficients (alternating current components) of each layer are quantized and encoded.
  • the main transformation processes are described below.
  • Figure 8A illustrates a RAHT forward transform process
  • Figure 8B illustrates an inverse RAHT transform process.
  • g′_{L,2x,y,z} and g′_{L,2x+1,y,z} are the attribute DC coefficients of two neighboring nodes in layer L.
  • the information in layer L−1 consists of the AC coefficient f′_{L−1,x,y,z} and the DC coefficient g′_{L−1,x,y,z}.
  • f′_{L−1,x,y,z} is no longer transformed and is directly quantized.
  • g′_{L−1,x,y,z} continues to search for neighboring nodes for transformation.
  • if a node has no neighboring node, its coefficient is passed directly to layer L−2. This means that the RAHT transform only operates on nodes that have neighboring nodes; nodes without neighboring nodes are passed directly to the previous layer.
  • the weights (the number of non-empty child nodes in the node) corresponding to g′_{L,2x,y,z} and g′_{L,2x+1,y,z} are w′_{L,2x,y,z} and w′_{L,2x+1,y,z} (abbreviated as w′_0 and w′_1) respectively, and the weight of g′_{L−1,x,y,z} is w′_{L−1,x,y,z}.
  • the general transformation formula is: [g′_{L−1,x,y,z}, f′_{L−1,x,y,z}]^T = T_{w0,w1} · [g′_{L,2x,y,z}, g′_{L,2x+1,y,z}]^T.
  • T_{w0,w1} is the transformation matrix: T_{w0,w1} = (1/√(w0 + w1)) · [√w0, √w1; −√w1, √w0], where w0 and w1 are the weights of the two nodes being transformed.
  • the transformation matrix will be updated as the weights corresponding to each point change adaptively.
  • the above process will be iterated and updated continuously according to the partitioning structure of the octree until the root node of the octree is reached.
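The per-pair transform described above can be sketched as a weighted Haar butterfly. This is a generic orthonormal step consistent with the weights defined earlier, written for exposition rather than taken from any reference implementation.

```python
import math

def raht_forward_step(g0, w0, g1, w1):
    """One RAHT butterfly on two neighboring DC coefficients g0, g1 with
    weights w0, w1 (counts of non-empty child nodes). Returns the parent
    DC coefficient, the AC coefficient, and the merged weight."""
    a = math.sqrt(w0)
    b = math.sqrt(w1)
    n = math.sqrt(w0 + w1)
    dc = (a * g0 + b * g1) / n   # low-frequency coefficient, passed up
    ac = (-b * g0 + a * g1) / n  # high-frequency coefficient, quantized
    return dc, ac, w0 + w1

def raht_inverse_step(dc, ac, w0, w1):
    """Invert the butterfly (the matrix is orthonormal, so the inverse is
    its transpose): recover the two child DC coefficients."""
    a = math.sqrt(w0)
    b = math.sqrt(w1)
    n = math.sqrt(w0 + w1)
    g0 = (a * dc - b * ac) / n
    g1 = (b * dc + a * ac) / n
    return g0, g1
```

Because the matrix adapts to (w0, w1), equal attributes with equal weights produce a zero AC coefficient, which is exactly what makes the transform effective on smooth regions.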
  • the process of G-PCC attribute inter-frame prediction coding is similar to the process of intra-frame prediction coding, which is described as follows:
  • a RAHT attribute transform coding structure is constructed based on geometric information. This involves performing transforms at the voxel level until the root node is reached, completing the hierarchical transform coding of the entire attribute. This approach results in both intra-frame and inter-frame coding structures.
  • the inter-frame coding structure for the RAHT attribute can be seen in Figure 9.
  • the geometric information of the current node to be coded is first used to obtain the co-located predicted node of the node to be coded in the reference image, and then the geometric information and attribute information of the reference node are used to obtain the predicted attribute of the current node to be coded.
  • the attribute prediction value of the current node to be encoded is obtained according to the following two different methods:
  • the inter-frame prediction node of the current node is valid: that is, if the same-position node exists, the attribute of the predicted node is directly used as the attribute prediction value of the current node to be encoded;
  • the inter-frame prediction node of the current node is invalid: that is, the co-located node does not exist, and the attribute prediction value of an adjacent intra-frame node is used as the attribute prediction value of the current node to be encoded.
  • the attribute prediction value is then used to predict the attribute of the current node to be encoded, thereby completing the predictive coding of the entire attribute.
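The two prediction cases above can be sketched as a small selection function. The averaging fallback over intra-frame neighbors, and all names, are illustrative assumptions rather than the exact rule of the codec.

```python
def predict_attribute(colocated, intra_neighbors):
    """Choose the attribute predictor for the current node.

    If the co-located node in the reference image exists, inter prediction
    is valid and its attribute is used directly; otherwise fall back to the
    intra-frame neighbors (averaged here as a simple illustrative choice).
    """
    if colocated is not None:
        return colocated
    if not intra_neighbors:
        return 0.0  # no predictor available; a default is assumed
    return sum(intra_neighbors) / len(intra_neighbors)
```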
  • the hierarchical structure of the RAHT inter-frame transform may include:
  • a point cloud can be a frame (or "an image"), and a frame (or "an image") can be divided into multiple slices.
  • a slice is a unit of RAHT transformation/inverse transformation, and a slice performs one RAHT transformation/inverse transformation.
  • a RAHT transform/inverse transform can include many transform layers.
  • Each transform layer may include multiple RAHT transform blocks. Both intra-frame prediction and inter-frame prediction are for transform blocks.
  • Figure 10 illustrates a hierarchical structure of a point cloud attribute RAHT.
  • a point cloud can be divided into multiple slices, such as slice 1 and slice 2.
  • slice 1 can undergo a RAHT transform/inverse transform, for example, to obtain RAHT transform layer 1.
  • RAHT transform layer 1 can include multiple RAHT transform blocks, such as RAHT transform block 1 and RAHT transform block 2.
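  • the frame → slice → transform layer → transform block hierarchy of Figure 10 can be modeled with a small Python sketch (the class and field names are illustrative, not defined by the codec):

```python
from dataclasses import dataclass, field

@dataclass
class TransformBlock:
    block_id: int
    coefficients: list          # RAHT transform coefficients of this block

@dataclass
class TransformLayer:
    blocks: list                # TransformBlock instances in this layer

@dataclass
class Slice:
    slice_id: int
    layers: list = field(default_factory=list)  # one RAHT transform per slice

# Mirroring Figure 10: the frame is split into slice 1 and slice 2;
# slice 1 yields transform layer 1 containing transform blocks 1 and 2.
frame = [
    Slice(1, [TransformLayer([TransformBlock(1, []), TransformBlock(2, [])])]),
    Slice(2, []),
]
```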
  • Figure 11 is a schematic diagram of a framework for RAHT inter-frame prediction transform applied to the encoding end.
  • the geometric information of the point cloud to be encoded is subjected to a point cloud geometric octree decomposition, and the attribute information of the point cloud to be encoded is subjected to a RAHT transform to determine the transform coefficients of the point cloud to be encoded.
  • the transform coefficient prediction value of the point cloud to be encoded is determined based on RAHT inter-frame prediction or RAHT intra-frame prediction.
  • the transform coefficient residual value is obtained by subtracting the transform coefficient prediction value from the transform coefficient of the point cloud to be encoded.
  • the transform coefficient residual value is then quantized to obtain the quantized transform coefficient residual value.
  • the quantized transform coefficient residual value can be encoded and written into the attribute code stream.
  • the quantized transform coefficient residual value can be dequantized to obtain the reconstructed transform coefficient residual value.
  • the reconstructed transform coefficient residual value is then added to the transform coefficient prediction value to obtain the reconstructed transform coefficient.
  • the reconstructed transform coefficient is then subjected to a RAHT inverse transform to obtain the reconstructed point cloud attributes.
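  • the encoder-side residual path described above can be sketched per coefficient in Python (a non-normative sketch assuming a uniform scalar quantizer with step size qstep; the codec's normative quantizer and rounding rule may differ):

```python
def encode_coefficient(coeff, pred, qstep):
    """Encoder residual path: predict, subtract, quantize, then locally
    reconstruct so the encoder stays in sync with the decoder."""
    residual = coeff - pred                  # transform coefficient residual
    q_residual = round(residual / qstep)     # quantized residual, written to stream
    rec_residual = q_residual * qstep        # dequantization (done at encoder too)
    rec_coeff = pred + rec_residual          # reconstructed transform coefficient
    return q_residual, rec_coeff
```

The locally reconstructed coefficient, not the original one, feeds the RAHT inverse transform that produces the reconstructed attributes used for later prediction.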
  • the encoding end also involves octree decomposition of the geometric information of the reference point cloud and RAHT transformation of the attribute information of the reference point cloud.
  • Figure 12 is a schematic diagram of a framework for RAHT inter-frame prediction transformation applied to the decoding end.
  • the geometric information of the point cloud to be decoded is subjected to a point cloud geometric octree decomposition, and the attribute code stream is parsed to obtain the quantized transform coefficient residual value.
  • the quantized transform coefficient residual value is then dequantized to obtain the transform coefficient residual value of the point cloud to be decoded.
  • the transform coefficient prediction value of the point cloud to be decoded is determined based on RAHT inter-frame prediction or RAHT intra-frame prediction.
  • the transform coefficient residual value of the point cloud to be decoded is then added to the transform coefficient prediction value to obtain the transform coefficient of the point cloud to be decoded.
  • the transform coefficient is subjected to a RAHT inverse transform to reconstruct the point cloud attributes.
  • the decoding end also involves the octree decomposition of the geometric information of the reference point cloud and the RAHT transform of the attribute information of the reference point cloud.
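  • the decoder-side mirror of the residual path is a one-liner per coefficient (same non-normative assumptions as at the encoder: a uniform scalar quantizer with step size qstep):

```python
def decode_coefficient(q_residual, pred, qstep):
    """Decoder residual path: dequantize the parsed residual and add
    the transform coefficient prediction value."""
    rec_residual = q_residual * qstep
    return pred + rec_residual
```

Because both ends reconstruct from the same quantized residual and the same prediction, the encoder and decoder arrive at identical coefficients before the RAHT inverse transform.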
  • the related technology needs to re-perform a RAHT transform operation on the reference point cloud at the encoding and decoding ends, which greatly increases the time complexity and brings high algorithmic complexity.
  • in the related technology, when the reference point cloud is used as the reference frame for decoding the current point cloud, the reference frame is regarded as a single slice and only a single RAHT transform is performed (i.e., there is only one RAHT transform layer) as a reference. The information of the different RAHT transform layers of the reference image is not retained, which increases the complexity of RAHT attribute transform encoding and decoding and reduces the attribute coding efficiency.
  • an embodiment of the present application provides an encoding method, which determines the prediction mode identification information of the current block; when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain, the transform coefficient prediction value of the current block is determined based on the reference block of the current block; based on the transform coefficient prediction value of the current block, the transform coefficient residual value of the current block is determined; the transform coefficient residual value of the current block is encoded, and the obtained encoded bits are written into the code stream.
  • An embodiment of the present application also provides a decoding method, which determines the prediction mode identification information of the current block; when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain, the transform coefficient prediction value of the current block is determined based on the reference block of the current block; the code stream is decoded to determine the transform coefficient residual value of the current block; and the transform coefficient of the current block is determined based on the transform coefficient prediction value of the current block and the transform coefficient residual value of the current block.
  • the transform coefficient prediction value of the current block can be determined based on the reference block of the current block.
  • the reference block can be transformed based on the slices divided by the reference point cloud, rather than the entire frame of the reference point cloud, which reduces the time complexity; and the transform coefficients of the reference block can be stored in the memory module, so that when the transform coefficients of the reference block are needed, they can be directly obtained from the memory module, further reducing the time complexity, thereby improving the attribute encoding and decoding efficiency of the point cloud, and thus improving the encoding and decoding performance of the point cloud.
  • FIG14 is a flowchart diagram of a decoding method provided by the embodiment of the present application. As shown in FIG14 , the method may include:
  • the decoding method of the embodiment of the present application is applied to the decoder, and the decoding method specifically refers to a transform domain prediction method of point cloud attributes.
  • the transform domain prediction of the current block may include inter-frame prediction in the transform domain and intra-frame prediction in the transform domain. When the current block is subjected to inter-frame prediction in the transform domain, the time complexity can be reduced.
  • the current block may be a transformed block to be decoded in the current point cloud.
  • mode indication information in the form of some syntax elements can be written into the bitstream.
  • the prediction mode identification information of the current block can be determined.
  • the first syntax element can be used to indicate the prediction mode identification information of the current block.
  • the value of the first syntax element is determined; based on the value of the first syntax element, the prediction mode identification information of the current block can be determined.
  • if the value of the first syntax element is a first value, it is determined that the current block performs inter-frame prediction in the transform domain; if the value of the first syntax element is a second value, it is determined that the current block performs intra-frame prediction in the transform domain.
  • the first value and the second value are different.
  • the first value can be 1 and the second value can be 0; or the first value can be true and the second value can be false.
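  • taking the first value as 1 and the second value as 0, parsing the first syntax element can be sketched as follows (the constant and function names are illustrative, not syntax defined by the codec):

```python
INTER_PRED = 1   # first value: inter-frame prediction in the transform domain
INTRA_PRED = 0   # second value: intra-frame prediction in the transform domain

def parse_prediction_mode(first_syntax_element):
    """Map the parsed first syntax element to the prediction mode."""
    return "inter" if first_syntax_element == INTER_PRED else "intra"
```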
  • the method may further include: if there is a reference block in the reference point cloud that satisfies a preset inter-frame condition with the current block, determining that the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain; if there is no reference block in the reference point cloud that satisfies the preset inter-frame condition with the current block, determining that the prediction mode identification information indicates that the current block performs intra-frame prediction in the transform domain.
  • the prediction mode identification information of the current block can be determined by the first syntax element in the code stream, or it can be determined based on whether the current block finds an inter-frame reference block in the reference point cloud as a judgment condition for the prediction mode identification information, or it can be other judgment conditions, which are not specifically limited here.
  • the prediction mode identification information is determined based on whether the current block finds an inter-frame reference block in the reference point cloud, if the current block can find a reference block that meets the preset inter-frame condition in the reference point cloud, then it can be determined that the current block performs inter-frame prediction in the transform domain; if the current block cannot find a reference block that meets the preset inter-frame condition in the reference point cloud, then it can be determined that the current block performs intra-frame prediction in the transform domain.
  • the reference point cloud is the reference frame of the current point cloud where the current block is located.
  • a reference block of the current block is first determined, and then a transform coefficient prediction value of the current block is determined based on the reference block of the current block.
  • the method may include: determining a reference block of the current block based on the slice identification information in the reference point cloud.
  • the reference point cloud is divided into slices to obtain at least one slice of the reference point cloud, and the at least one slice includes the reference slice where the reference block is located.
  • the slice ID (sliceID) here can be used to indicate a reference slice.
  • the reference block of the current block can be determined from the reference slice indicated by the slice ID information.
  • the transformation operation is performed based on the slice divided by the reference point cloud, rather than the transformation operation based on the entire frame where the reference point cloud is located.
  • the reference point cloud can be divided into at least two slices, such as slice1-ref1 and slice2-ref2.
  • for slice1-ref1, a RAHT transformation layer ref-1 can be obtained, and the RAHT transformation layer ref-1 can include RAHT transformation block ref1-1, RAHT transformation block ref1-2, etc.
  • for slice2-ref2, a RAHT transformation layer ref-2 can be obtained, and the RAHT transformation layer ref-2 can include RAHT transformation block ref2-1, RAHT transformation block ref2-2, etc.
  • in this way, the time complexity can be reduced, and the slice-level transformation coefficients can be retained. Since the RAHT transform of the current point cloud performs inter-frame prediction at the slice level, using the slice-level transformation coefficients as a reference can improve the accuracy of the inter-frame prediction; at the same time, certain slices of the reference point cloud can be adaptively selected for inter-frame prediction while other slices are not used for inter-frame prediction, thereby improving the flexibility of the prediction and saving memory.
  • the method may further include: if the candidate reference block existing in the reference point cloud and the current block meet a preset inter-frame condition, determining the candidate reference block as the reference block of the current block.
  • if an inter-frame reference block can be found in the reference point cloud for the current block, that is, a candidate reference block that satisfies a preset inter-frame condition with the current block, then the current block is subjected to transform-domain inter-frame prediction, and the found candidate reference block is used as the reference block of the current block. Whether a found candidate reference block satisfies the preset inter-frame condition with the current block can be determined through the following possible implementations.
  • the absolute coordinate information of the candidate reference block may be determined based on the coordinate information of the candidate reference block and the slice identification information (sliceID) where the candidate reference block is located.
  • the preset inter-frame conditions can be the several possible implementation methods mentioned above, or other conditions that can be used to find inter-frame reference blocks. No limitation is made here.
  • the reference block found in the reference point cloud may specifically include: the coordinate information of the reference block is the same as the coordinate information of the current block, and the slice identification information of the reference block is the same as the slice identification information of the current block, and the transformation layer information of the reference block is the same as the current layer information of the current block; or, the coordinate information of the parent transformation block of the reference block is the same as the coordinate information of the parent transformation block of the current block, and the slice identification information of the reference block is the same as the slice identification information of the current block, and the transformation layer information of the reference block is the same as the current layer information of the current block; or, the absolute coordinate information of the reference block is the same as the absolute coordinate information of the current block, and the transformation layer information of the reference block is the same as the current layer information of the current block; or, the absolute coordinate information of the parent transformation block of the reference block is the same as the absolute coordinate information of the parent transformation block of the current block, and the transformation layer information of the reference block is the same as the current layer information of the current block; or, the
  • for the absolute coordinate information of the reference block, the coordinate information of the reference block can be determined first, and the absolute coordinate information is then jointly determined based on the coordinate information of the reference block and the coordinate offset information, relative to the origin, of the slice indicated by the slice identification information where the reference block is located; for the absolute coordinate information of the parent transformation block of the reference block, the coordinate information of the parent transformation block of the reference block can be determined first, and the absolute coordinate information is then jointly determined based on the coordinate information of the parent transformation block of the reference block and the coordinate offset information, relative to the origin, of the slice where the reference block is located.
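  • one of the matching rules listed above — same absolute coordinates and same transform layer — can be sketched in Python as follows (a non-normative sketch; the dictionary keys are illustrative field names, and the other listed rules would compare different fields in the same manner):

```python
def find_reference_block(current, candidates):
    """Return the candidate whose absolute coordinates and transform-layer
    index both match those of the current block, or None if no candidate
    satisfies the preset inter-frame condition."""
    for cand in candidates:
        if (cand["abs_coord"] == current["abs_coord"]
                and cand["layer"] == current["layer"]):
            return cand
    return None  # no valid inter reference: fall back to intra prediction
```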
  • determining the transformation coefficient prediction value of the current block according to the reference block of the current block may specifically include: determining the transformation coefficient prediction value of the current block according to the transformation coefficient of the reference block.
  • the method may include:
  • S1601 Determine attribute information of a reference slice based on the reference slice indicated by the slice identification information.
  • S1602 Perform region-adaptive hierarchical transformation on attribute information of a reference slice to determine transformation coefficients of at least one transformation layer.
  • S1603 Determine a transform coefficient of a reference block from transform coefficients of at least one transform layer.
  • S1604 Determine a predicted value of the transform coefficient of the current block according to the transform coefficient of the reference block.
  • when determining the transform coefficients of the reference block, the RAHT transform can be performed on the slices (i.e., reference slices) divided from the reference point cloud, rather than on the entire frame of the reference point cloud. In this way, the transform coefficients at the slice level can be retained. Since the RAHT transform of the current point cloud performs inter-frame prediction at the slice level, the transform coefficient prediction value of the current block can be obtained more accurately by using the slice-level transform coefficients as a reference, thereby improving the accuracy of the inter-frame prediction. At the same time, certain slices of the reference point cloud can be adaptively selected for inter-frame prediction, while certain slices are not used for inter-frame prediction, thereby improving the flexibility of the prediction and saving memory.
  • the method may include: decoding the code stream to determine the transform coefficient residual value of the reference block; and determining the transform coefficient of the reference block based on the transform coefficient prediction value of the reference block and the transform coefficient residual value of the reference block.
  • the code stream can be decoded to determine the quantization coefficient residual values of the reference block; the quantization coefficient residual values of the reference block are inversely quantized to determine the transform coefficient residual values of the reference block; and then the transform coefficients of the reference block are determined based on the transform coefficient prediction values of the reference block and the transform coefficient residual values of the reference block.
  • the method may include: decoding a code stream to determine the transform coefficients of the reference block.
  • when the transform coefficients of the reference block do not need to be predicted, they can be determined directly by decoding the code stream.
  • specifically, the code stream is decoded to determine the quantization coefficients of the reference block, and the quantization coefficients of the reference block are inversely quantized to determine the transformation coefficients of the reference block.
  • the method may further include: determining storage information of the reference block; and storing the storage information of the reference block in the memory module.
  • the storage information of the reference block may include at least one of the following: the transform coefficients of the reference block, the coordinate information of the reference block, the slice identification information where the reference block is located, and the transform layer information where the reference block is located.
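  • the memory module can be sketched as a lookup keyed by the storage information above (a non-normative sketch; a plain dictionary keyed by slice ID, transform layer, and block coordinates stands in for whatever indexing the implementation actually uses):

```python
memory = {}

def store_block(slice_id, layer, coord, coefficients):
    """Cache a reference block's transform coefficients so later inter
    prediction can fetch them without re-running the RAHT transform."""
    memory[(slice_id, layer, coord)] = coefficients

def fetch_block(slice_id, layer, coord):
    """Return cached coefficients, or None if the block was not stored."""
    return memory.get((slice_id, layer, coord))
```

Retaining entries per slice and per transform layer is what lets the decoder reuse the reference frame's multi-layer coefficients instead of recomputing them, which is the source of the time-complexity saving described above.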
  • the method may include:
  • S1702 Determine a predicted value of the transform coefficient of the current block according to the transform coefficient of the reference block.
  • the storage information corresponding to the reference block can be pre-stored in the memory module.
  • the memory module can retain information on multiple RAHT transformation layers of the reference point cloud. In this way, after determining the reference block of the current block, the transformation coefficients of the reference block can be quickly obtained from the memory module according to the sliceID where the reference block is located, and then the transformation coefficients of the reference block are used as the transformation coefficient prediction values of the current block, thereby further reducing the time complexity and making the inter-frame prediction more accurate.
  • the storage information corresponding to the reference block in the memory module may include at least one of the following: the transformation coefficient of the reference block, the coordinate information of the reference block, the slice identification information where the reference block is located, the transformation layer information where the reference block is located, and the absolute coordinate information of the reference block.
  • the absolute coordinate information of the reference block can be determined based on the coordinate information of the reference block and the slice ID information where the reference block is located.
  • the method may include: determining the coordinate information of the reference block; and determining the absolute coordinate information of the reference block based on the coordinate information of the reference block and the coordinate offset information of the slice ID where the reference block is located relative to the origin.
  • the absolute coordinate information of the reference block can be obtained by adding the coordinate information of the reference block to the coordinate offset information of the slice ID where the reference block is located relative to the origin.
  • the coordinate information may include geometric position (x, y, z) information and Morton code information.
  • the coordinate information of the reference block refers to the position information of the reference block when the RAHT transform is performed.
  • the absolute coordinate information of the reference block refers to the absolute position information of the reference block in the reference frame.
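  • the absolute-coordinate derivation and the Morton code mentioned above can be sketched as follows (a non-normative sketch; the bit width and the x/y/z interleaving order shown are one common convention and are assumptions, not necessarily the codec's normative ordering):

```python
def absolute_coord(local, slice_offset):
    """Absolute coordinates = block coordinates within the slice plus the
    slice's coordinate offset relative to the frame origin."""
    return tuple(c + o for c, o in zip(local, slice_offset))

def morton_code(x, y, z, bits=10):
    """Interleave the bits of (x, y, z) into a single Morton code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)        # x bit
        code |= ((y >> i) & 1) << (3 * i + 1)    # y bit
        code |= ((z >> i) & 1) << (3 * i + 2)    # z bit
    return code
```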
  • the transform coefficient prediction value of the current block can be determined based on the transform coefficient of the reference block.
  • the method may further include:
  • when the prediction mode identification information indicates that the current block performs intra-frame prediction in the transform domain, the resulting intra-prediction value can be used as the transform coefficient prediction value of the current block.
  • the intra-prediction value can be obtained by weighted calculation of the transform coefficient prediction values of the neighboring blocks of the current block, the parent transform block of the current block, and the neighboring blocks of the parent transform block.
  • transform coefficients may include low-frequency coefficients and high-frequency coefficients.
  • the low-frequency coefficients may also be referred to as direct current coefficients or DC coefficients
  • the high-frequency coefficients may also be referred to as alternating current coefficients or AC coefficients.
  • the transform coefficient prediction values herein primarily refer to the high-frequency coefficient prediction values of the current block.
  • S1403 Decode the code stream to determine the transform coefficient residual value of the current block.
  • decoding the code stream and determining the transform coefficient residual value of the current block may include: decoding the code stream and determining the quantization coefficient residual value of the current block; and dequantizing the quantization coefficient residual value to determine the transform coefficient residual value of the current block.
  • the transform coefficient of the current block is obtained by performing an addition operation on the transform coefficient prediction value of the current block and the transform coefficient residual value of the current block.
  • the method further includes: determining the storage information of the current block; and saving the storage information of the current block into the memory module.
  • the storage information of the current block may include at least one of the following: the transformation coefficient of the current block, the coordinate information of the current block, the slice identification information where the current block is located, the current layer information where the current block is located, and the absolute coordinate information of the current block.
  • decoding a bitstream and determining a transform coefficient residual value of a current block may include: decoding the bitstream and determining a high-frequency coefficient residual value of the current block. Accordingly, determining the transform coefficient of the current block based on a transform coefficient prediction value of the current block and a transform coefficient residual value of the current block may include: determining the high-frequency coefficient value of the current block based on the high-frequency coefficient prediction value of the current block and the high-frequency coefficient residual value of the current block. Specifically, the high-frequency coefficient value of the current block may be determined by adding the high-frequency coefficient prediction value of the current block and the high-frequency coefficient residual value of the current block.
  • the method may further include: determining geometric information of the current slice; performing octree decomposition on the geometric information of the current slice to determine at least one transformation layer; wherein the at least one transformation layer includes the current layer where the current block is located.
  • the slice identification information (sliceID) of the current block can be used to indicate the current slice.
  • the current slice is determined by the slice division of the current point cloud.
  • the geometric information of the current slice is decomposed into an octree and transformed based on the hierarchical structure of the octree. Specifically, the RAHT inverse transformation is performed from top to bottom on the root node of the octree, and the RAHT inverse transformation is performed on the nodes in each layer from the three dimensions of x, y, and z until the inverse transformation reaches the leaf node of the octree.
  • the method further includes: when the current layer is the (L-1)-th layer in at least one transformation layer, performing a region-adaptive hierarchical inverse transformation on the transformation coefficients of the current block in the (L-1)-th layer to determine the reconstruction attribute information of the nodes in the L-th layer; wherein L is a positive integer.
  • the method further includes decoding the bitstream to determine the low-frequency coefficient value of the root node.
  • the low-frequency coefficient value of the root node and the residual value of the high-frequency coefficient of each layer are written into the bitstream so that the decoding end can perform region-adaptive layered inverse transform.
  • a region-adaptive hierarchical inverse transform is performed on the transform coefficients of the current block in the (L-1)-th layer to determine the reconstruction attribute information of the nodes in the L-th layer.
  • it may include: when the current layer is the first transform layer, performing a region-adaptive hierarchical inverse transform based on the low-frequency coefficient value of the root node and the high-frequency coefficient value of the current block in the first layer to determine the reconstruction attribute information of the nodes in the second transform layer; when the current layer is not the first transform layer, determining the DC coefficient value of the current block in the current layer, and performing a region-adaptive hierarchical inverse transform based on the DC coefficient value of the current block in the current layer and the AC coefficient value of the current block in the current layer to determine the reconstruction attribute information of the nodes in the next layer.
  • the attribute information of the current slice can be subjected to RAHT inverse transformation in a top-down manner along the octree decomposition, and the nodes in each layer are subjected to RAHT inverse transformation from the three dimensions of x, y, and z respectively, until the leaf nodes of the octree are transformed.
  • the DC coefficient and AC coefficient of the node in the same layer (L-1 layer) are subjected to RAHT inverse transformation to obtain the DC coefficient of the next layer (L layer), that is, the attribute information of each layer is obtained.
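  • one such inverse step — recovering the two child DC coefficients of the L-th layer from a DC/AC pair in the (L-1)-th layer — can be sketched in Python (a non-normative sketch with floating-point scalars; it is the transpose of the weight-dependent forward rotation, so child weights w1 and w2 must match those used at the encoder):

```python
import math

def raht_inverse(dc, ac, w1, w2):
    """One RAHT inverse butterfly: map the (DC, AC) pair of layer L-1 back
    to the two child DC coefficients of layer L. The rotation depends on
    the child weights, so the matrix is updated per node."""
    s = math.sqrt(w1 + w2)
    a, b = math.sqrt(w1) / s, math.sqrt(w2) / s
    c1 = a * dc - b * ac
    c2 = b * dc + a * ac
    return c1, c2
```

Applying this butterfly top-down along the octree, in the x, y, and z dimensions in turn, reproduces the per-layer attribute information until the leaf nodes are reached.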
  • the transformation matrix is updated adaptively as the weights corresponding to each point change.
  • the above process is continuously iterated and updated according to the division structure of the octree until the leaf nodes of the octree are reached.
  • the reconstruction attribute information of the current slice can be determined.
  • the embodiment of the present application provides a decoding method, specifically a decoding scheme for fast inter-frame attribute transformation of point clouds.
  • the reference block can be transformed based on the slices divided by the reference point cloud, rather than based on the entire frame of the reference point cloud, which reduces the time complexity; and the transform coefficient of the reference block can be stored in the memory module, so that when the transform coefficient of the reference block is needed, it can be directly obtained from the memory module, further reducing the time complexity, thereby improving the attribute encoding and decoding efficiency of the point cloud, and thus improving the encoding and decoding performance of the point cloud.
  • FIG19 is a flow chart of an encoding method provided by the embodiment of the present application. As shown in FIG19 , the method may include:
  • the encoding method in the embodiments of the present application is applied to an encoder and specifically refers to a transform domain prediction method for point cloud attributes.
  • the transform domain prediction of the current block may include inter-frame prediction in the transform domain and intra-frame prediction in the transform domain. When the current block is subjected to inter-frame prediction in the transform domain, the time complexity can be reduced.
  • the current block may be a transformed block to be encoded in the current point cloud.
  • mode indication information in the form of some syntax elements can be written into the bitstream.
  • the prediction mode identification information for the current block can be determined.
  • the first syntax element can be used to indicate the prediction mode identification information for the current block.
  • the value of the first syntax element is determined; the value of the first syntax element is encoded, and the resulting encoded bits are written into the bitstream.
  • if the current block performs inter-frame prediction in the transform domain, the value of the first syntax element is determined to be a first value; if the current block performs intra-frame prediction in the transform domain, the value of the first syntax element is determined to be a second value.
  • the first value and the second value are different.
  • the first value can be 1 and the second value can be 0; or the first value can be true and the second value can be false.
  • the method may further include: if there is a reference block in the reference point cloud that satisfies a preset inter-frame condition with the current block, determining that the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain; if there is no reference block in the reference point cloud that satisfies the preset inter-frame condition with the current block, determining that the prediction mode identification information indicates that the current block performs intra-frame prediction in the transform domain.
  • the prediction mode identification information of the current block can be determined by the first syntax element in the code stream, or it can be determined based on whether the current block finds an inter-frame reference block in the reference point cloud as a judgment condition for the prediction mode identification information, or it can be other judgment conditions, which are not specifically limited here.
  • when the prediction mode identification information is determined based on whether the current block finds an inter-frame reference block in the reference point cloud: if the current block can find a reference block that meets the preset inter-frame condition in the reference point cloud, it can be determined that the current block performs inter-frame prediction in the transform domain; if the current block cannot find such a reference block, it can be determined that the current block performs intra-frame prediction in the transform domain.
  • the reference point cloud is the reference frame of the current point cloud where the current block is located.
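The decision rule described above can be sketched in a few lines (the buffer structure and the key layout are hypothetical assumptions for illustration, not part of the described method):

```python
def choose_prediction_mode(block_key, reference_buffer):
    """Return the first-syntax-element value for the current block:
    1 (first value)  -> inter-frame prediction in the transform domain,
    0 (second value) -> intra-frame prediction in the transform domain.
    The block performs inter prediction only if a reference block
    satisfying the preset inter-frame condition exists."""
    return 1 if block_key in reference_buffer else 0
```

Here `reference_buffer` stands in for the set of candidate reference blocks found in the reference point cloud; any membership test that encodes the preset inter-frame condition would serve.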
  • a reference block of the current block is first determined, and then a transform coefficient prediction value of the current block is determined based on the reference block of the current block.
  • the method may include: determining a reference block of the current block based on the slice identification information in the reference point cloud.
  • the reference point cloud is divided into slices to obtain at least one slice of the reference point cloud, and the at least one slice includes the reference slice where the reference block is located.
  • the slice ID (sliceID) here can be used to indicate a reference slice.
  • the reference block of the current block can be determined from the reference slice indicated by the slice ID information.
  • the transformation operation is performed based on the slice divided by the reference point cloud, rather than the transformation operation based on the entire frame where the reference point cloud is located.
  • the reference point cloud can be divided into at least two slices, such as slice1-ref1 and slice2-ref2.
  • for slice1-ref1, a RAHT transformation layer ref-1 can be obtained, and the RAHT transformation layer ref-1 can include RAHT transformation block ref1-1, RAHT transformation block ref1-2, etc.
  • for slice2-ref2, a RAHT transformation layer ref-2 can be obtained, and the RAHT transformation layer ref-2 can include RAHT transformation block ref2-1, RAHT transformation block ref2-2, etc.
  • the time complexity can be reduced; and the slice-level transformation coefficients can be retained. Since the RAHT transform of the current point cloud is based on the inter-frame prediction at the slice level, the accuracy of the inter-frame prediction can be improved by using the slice-level transformation coefficients as a reference; at the same time, certain slices of the reference point cloud can be adaptively selected for inter-frame prediction, and certain slices are not used for inter-frame prediction, thereby improving the flexibility of the prediction and saving memory.
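One possible in-memory layout for the slice-level transformation results described above can be sketched as follows (all class and field names are hypothetical illustrations):

```python
from dataclasses import dataclass, field

@dataclass
class RAHTBlock:
    coords: tuple        # relative geometric position of the transform block
    coefficients: list   # transform coefficients retained for inter prediction

@dataclass
class RAHTLayer:
    layer_index: int
    blocks: list = field(default_factory=list)

@dataclass
class ReferenceSlice:
    slice_id: int
    layers: list = field(default_factory=list)

# slice1-ref1 yields RAHT transformation layer ref-1 containing
# RAHT transformation blocks ref1-1, ref1-2, ...
ref1 = ReferenceSlice(slice_id=1, layers=[
    RAHTLayer(layer_index=0,
              blocks=[RAHTBlock((0, 0, 0), [5.0]),
                      RAHTBlock((1, 0, 0), [3.0])])])
```

Retaining one such structure per slice, rather than a single whole-frame result, is what allows individual slices to be selected or skipped for inter-frame prediction.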
  • the method may further include: if the candidate reference block existing in the reference point cloud and the current block meet a preset inter-frame condition, determining the candidate reference block as the reference block of the current block.
  • if an inter-frame reference block can be found in the reference point cloud for the current block, that is, a candidate reference block that satisfies a preset inter-frame condition with the current block, then the current block is subjected to transform-domain inter-frame prediction, and the found candidate reference block is used as the reference block of the current block. Whether a found candidate reference block satisfies the preset inter-frame condition with the current block can be determined through the following possible implementations.
  • the candidate reference block in the reference point cloud and the current block meet the preset inter-frame condition, which may include: the coordinate information of the candidate reference block is the same as the coordinate information of the current block, the slice identification information of the candidate reference block is the same as the slice identification information of the current block, and the transform layer information of the candidate reference block is the same as the current layer information of the current block.
  • the candidate reference block existing in the reference point cloud and the current block meet the preset inter-frame conditions, which may include: the coordinate information of the parent transform block of the candidate reference block is the same as the coordinate information of the parent transform block of the current block, and the slice identification information where the candidate reference block is located is the same as the slice identification information where the current block is located, and the transform layer information where the candidate reference block is located is the same as the current layer information where the current block is located.
  • the coordinate information may include geometric position (x, y, z) information and Morton code information, etc.
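For reference, a Morton code interleaves the bits of the (x, y, z) position into a single index; one common convention (the bit order used here is an assumption, since several orderings exist) is:

```python
def morton_code(x, y, z, bits=10):
    """Interleave the low 'bits' bits of x, y and z; in this convention x
    occupies the least-significant position of each 3-bit group."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code
```

This yields the Z-order index along which octree nodes are commonly enumerated.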
  • the absolute coordinate information of the candidate reference block can be determined based on the coordinate information of the candidate reference block and the slice identification information where the candidate reference block is located, and the absolute coordinate information of the parent transform block of the candidate reference block can be determined based on the coordinate information of the parent transform block of the candidate reference block and the slice identification information where the candidate reference block is located.
  • the candidate reference block in the reference point cloud and the current block meet a preset inter-frame condition, which may include: the absolute coordinate information of the candidate reference block is the same as the absolute coordinate information of the current block and the transformation layer information of the candidate reference block is the same as the current layer information of the current block.
  • the candidate reference block in the reference point cloud and the current block meet a preset inter-frame condition, which may include: the absolute coordinate information of the parent transform block of the candidate reference block is the same as the absolute coordinate information of the parent transform block of the current block and the transform layer information where the candidate reference block is located is the same as the current layer information where the current block is located.
  • the absolute coordinate information of the candidate reference block may be determined based on the coordinate information of the candidate reference block and the slice identification information (sliceID) where the candidate reference block is located.
  • the method may include: determining the coordinate information of the candidate reference block; and determining the absolute coordinate information of the candidate reference block based on the coordinate information of the candidate reference block and coordinate offset information of the slice identification information where the candidate reference block is located relative to the origin.
  • the absolute coordinate information of the candidate reference block may be obtained by adding the coordinate information of the candidate reference block to the coordinate offset information of the slice identification information where the candidate reference block is located relative to the origin.
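The addition described above amounts to an element-wise sum, as in this trivial sketch (tuple coordinates assumed):

```python
def absolute_coords(relative_coords, slice_offset):
    """Absolute coordinate information = coordinate information within the
    slice plus the coordinate offset of that slice relative to the origin."""
    return tuple(c + o for c, o in zip(relative_coords, slice_offset))
```

The same operation applies unchanged to the parent transform block's coordinates.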
  • the preset inter-frame conditions can be the several possible implementation methods mentioned above, or other conditions that can be used to find inter-frame reference blocks. No limitation is made here.
  • the reference block found in the reference point cloud may specifically include: the coordinate information of the reference block is the same as the coordinate information of the current block, the slice identification information of the reference block is the same as the slice identification information of the current block, and the transformation layer information of the reference block is the same as the current layer information of the current block; or, the coordinate information of the parent transformation block of the reference block is the same as the coordinate information of the parent transformation block of the current block, the slice identification information of the reference block is the same as the slice identification information of the current block, and the transformation layer information of the reference block is the same as the current layer information of the current block; or, the absolute coordinate information of the reference block is the same as the absolute coordinate information of the current block, and the transformation layer information of the reference block is the same as the current layer information of the current block; or, the absolute coordinate information of the parent transformation block of the reference block is the same as the absolute coordinate information of the parent transformation block of the current block, and the transformation layer information of the reference block is the same as the current layer information of the current block.
  • for the absolute coordinate information of the reference block, the coordinate information of the reference block can be determined first, and then jointly determined with the coordinate offset information, relative to the origin, of the slice indicated by the slice identification information where the reference block is located; for the absolute coordinate information of the parent transformation block of the reference block, the coordinate information of the parent transformation block of the reference block can be determined first, and then jointly determined with the same coordinate offset information.
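Two of the condition variants above can be sketched as simple predicates (the dictionary keys used here are hypothetical names for the fields the text describes):

```python
def matches_relative(candidate, current):
    """Same coordinate information, same slice identification information,
    and same transform layer information."""
    return (candidate["coords"] == current["coords"]
            and candidate["slice_id"] == current["slice_id"]
            and candidate["layer"] == current["layer"])

def matches_absolute(candidate, current):
    """Same absolute coordinate information and same transform layer; the
    slice identification is already folded into the absolute coordinates."""
    return (candidate["abs_coords"] == current["abs_coords"]
            and candidate["layer"] == current["layer"])
```

The parent-block variants are identical in form, applied to the parent transform block's coordinates instead.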
  • determining the transformation coefficient prediction value of the current block according to the reference block of the current block may specifically include: determining the transformation coefficient prediction value of the current block according to the transformation coefficient of the reference block.
  • the method may include: determining attribute information of the reference slice based on the reference slice indicated by the slice identification information; performing a region-adaptive hierarchical transform on the attribute information of the reference slice to determine the transform coefficients of at least one transform layer; determining the transform coefficients of the reference block from the transform coefficients of at least one transform layer; and determining a predicted value of the transform coefficient of the current block based on the transform coefficients of the reference block.
  • the RAHT transform when determining the transform coefficients of the reference block, can be performed on the slices (i.e., reference slices) divided based on the reference point cloud, rather than on the entire frame of the reference point cloud. In this way, the transform coefficients at the slice level can be retained. Since the RAHT transform of the current point cloud is based on the inter-frame prediction performed at the slice level, the transform coefficient prediction value of the current block can be obtained more accurately by using the transform coefficients at the slice level as a reference, thereby improving the accuracy of the inter-frame prediction. At the same time, certain slices of the reference point cloud can be adaptively selected for inter-frame prediction, while certain slices are not used for inter-frame prediction, thereby improving the flexibility of the prediction and saving memory.
  • the transform coefficients of the reference block can be determined by performing a RAHT transform on the slices (i.e., reference slices) divided by the reference point cloud. Specifically, by performing a RAHT transform on the attribute information of the reference slice, the transform coefficients of at least one transform layer can be determined. Each transform layer can include at least one transform block, from which the transform coefficients of the reference block are obtained; and then the transform coefficients of the reference block are used as the transform coefficient prediction values of the current block.
  • the method may include: determining the transform coefficient residual value of the reference block; quantizing the transform coefficient residual value of the reference block to determine the quantized coefficient residual value of the reference block; inverse quantizing the quantized coefficient residual value of the reference block to determine the reconstructed transform coefficient residual value of the reference block; and determining the transform coefficient of the reference block based on the transform coefficient prediction value of the reference block and the reconstructed transform coefficient residual value of the reference block.
  • the transform coefficient residual values of the reference block can be determined, and then the transform coefficient residual values of the reference block are quantized and dequantized to determine the reconstructed transform coefficient residual values of the reference block; based on the transform coefficient prediction values of the reference block and the reconstructed transform coefficient residual values of the reference block, the transform coefficients of the reference block can be determined.
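The quantize, dequantize, and add-prediction chain above can be sketched as follows (uniform scalar quantization is assumed purely for illustration; the actual quantizer is not specified in the text):

```python
def quantize(residual, step):
    """Uniform scalar quantization of a transform coefficient residual."""
    return round(residual / step)

def dequantize(quantized, step):
    """Inverse quantization, yielding the reconstructed residual."""
    return quantized * step

def reconstruct_coefficient(prediction, residual, step):
    """Transform coefficient = prediction value + reconstructed residual."""
    return prediction + dequantize(quantize(residual, step), step)
```

Because the encoder reconstructs from the quantized residual rather than the original one, its reference coefficients match what the decoder will compute.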
  • the method may further include: encoding the quantized coefficient residual value of the reference block and writing the obtained coded bits into the bitstream.
  • when the transform coefficients of the reference block do not need to be predicted, they can be written directly into the bitstream. Specifically, the transform coefficients of the reference block can be quantized to determine the quantized coefficients of the reference block; then, the quantized coefficients of the reference block can be encoded, and the resulting coded bits written into the bitstream.
  • the method may further include: determining storage information of the reference block; and storing the storage information of the reference block in the memory module.
  • the storage information of the reference block may include at least one of the following: the transform coefficients of the reference block, the coordinate information of the reference block, the slice identification information where the reference block is located, and the transform layer information where the reference block is located.
  • the method may include: determining the transform coefficient of the reference block based on the corresponding storage information of the reference block in the memory module; and determining the predicted value of the transform coefficient of the current block based on the transform coefficient of the reference block.
  • the storage information corresponding to the reference block can be pre-stored in the memory module.
  • the memory module can retain information on multiple RAHT transformation layers of the reference point cloud. In this way, after determining the reference block of the current block, the transformation coefficients of the reference block can be quickly obtained from the memory module according to the sliceID where the reference block is located, and then the transformation coefficients of the reference block are used as the transformation coefficient prediction values of the current block, thereby further reducing the time complexity and making the inter-frame prediction more accurate.
  • the storage information corresponding to the reference block in the memory module may include at least one of the following: the transformation coefficient of the reference block, the coordinate information of the reference block, the slice identification information where the reference block is located, the transformation layer information where the reference block is located, and the absolute coordinate information of the reference block.
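A minimal sketch of such a memory module follows (the class name, key layout, and stored fields are assumptions for illustration):

```python
class ReferenceMemory:
    """Caches per-block reference information so the reference point cloud
    need not be octree-decomposed and RAHT-transformed again."""

    def __init__(self):
        self._store = {}

    def put(self, slice_id, layer, coords, coefficients):
        """Store a block's transform coefficients under its sliceID,
        transform layer, and coordinate information."""
        self._store[(slice_id, layer, coords)] = coefficients

    def get(self, slice_id, layer, coords):
        """Return the cached transform coefficients, or None if the block
        has no stored reference information."""
        return self._store.get((slice_id, layer, coords))
```

Keying on the sliceID lets the encoder fetch the reference block's coefficients directly once the reference slice is identified.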
  • the absolute coordinate information of the reference block can be determined based on the coordinate information of the reference block and the slice ID information where the reference block is located.
  • the method may include: determining the coordinate information of the reference block; and determining the absolute coordinate information of the reference block based on the coordinate information of the reference block and the coordinate offset information of the slice ID where the reference block is located relative to the origin.
  • the absolute coordinate information of the reference block can be obtained by adding the coordinate information of the reference block to the coordinate offset information of the slice ID where the reference block is located relative to the origin.
  • the coordinate information may include geometric position (x, y, z) information and Morton code information.
  • the coordinate information of the reference block refers to the position information of the reference block when the RAHT transform is performed.
  • the absolute coordinate information of the reference block refers to the absolute position information of the reference block in the reference frame.
  • the transform coefficient prediction value of the current block can be determined based on the transform coefficient of the reference block.
  • the method may further include: determining the transform coefficient prediction values of the neighboring blocks of the current block, the transform coefficient prediction values of the parent transform block of the current block, and the transform coefficient prediction values of the neighboring blocks of the parent transform block; performing a weighted operation on the transform coefficient prediction values of the neighboring blocks of the current block, the transform coefficient prediction values of the parent transform block of the current block, and the transform coefficient prediction values of the neighboring blocks of the parent transform block to determine the transform coefficient prediction value of the current block.
  • the current block is determined to be intra-predicted in the transform domain, and the resulting intra-prediction value can be used as the transform coefficient prediction value of the current block.
  • the intra-prediction value can be obtained by weighted calculation of the transform coefficient prediction values of the neighboring blocks of the current block, the parent transform block of the current block, and the neighboring blocks of the parent transform block.
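A hedged sketch of this weighted combination is given below; the weight values are illustrative placeholders, since the text does not specify them:

```python
def intra_predict(neighbors, parent, parent_neighbors,
                  w_neighbor=2.0, w_parent=1.0, w_parent_neighbor=1.0):
    """Weighted average of the available predictor coefficients: the
    current block's neighbors, its parent transform block, and the
    parent's neighbors. Missing predictors simply contribute nothing."""
    total, weight = 0.0, 0.0
    for c in neighbors:
        total += w_neighbor * c
        weight += w_neighbor
    if parent is not None:
        total += w_parent * parent
        weight += w_parent
    for c in parent_neighbors:
        total += w_parent_neighbor * c
        weight += w_parent_neighbor
    return total / weight if weight else 0.0
```

With two neighbors (3, 5), a parent value 4, and one parent neighbor 2, this gives (2·3 + 2·5 + 1·4 + 1·2) / 6 = 22/6.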
  • transform coefficients may include low-frequency coefficients and high-frequency coefficients.
  • the low-frequency coefficients may also be referred to as direct current coefficients or DC coefficients
  • the high-frequency coefficients may also be referred to as alternating current coefficients or AC coefficients.
  • the transform coefficient prediction values herein primarily refer to the high-frequency coefficient prediction values of the current block.
  • S1904: Encode the transform coefficient residual value of the current block, and write the obtained coded bits into the bitstream.
  • encoding the transform coefficient residual values of the current block and writing the obtained coded bits into the bitstream may include: quantizing the transform coefficient residual values to determine the quantized coefficient residual values of the current block; encoding the quantized coefficient residual values of the current block and writing the obtained coded bits into the bitstream.
  • the method may further include: performing inverse quantization on the quantized coefficient residual values of the current block to determine the reconstructed transform coefficient residual values of the current block; and determining the transform coefficients of the current block based on the transform coefficient prediction values of the current block and the reconstructed transform coefficient residual values of the current block.
  • the transform coefficients of the current block may be obtained by adding the transform coefficient prediction values of the current block and the reconstructed transform coefficient residual values of the current block.
  • the method further includes: determining the storage information of the current block; and saving the storage information of the current block into the memory module.
  • the method further includes: when the current point cloud containing the current block meets the conditions for being used as a reference point cloud, saving the storage information of the current block to a memory module.
  • the storage information of the current block can be saved to the memory module; thereby, when performing subsequent transform domain inter-frame prediction, the required reference block related information can be obtained from the memory module.
  • the storage information of the current block may include at least one of the following: the transformation coefficient of the current block, the coordinate information of the current block, the slice identification information where the current block is located, the current layer information where the current block is located, and the absolute coordinate information of the current block.
  • the method may include: determining the coordinate information of the current block; determining the absolute coordinate information of the current block based on the coordinate information of the current block and the coordinate offset information of the slice identification information where the current block is located relative to the origin.
  • the absolute coordinate information of the current block can be obtained by performing an addition operation based on the coordinate information of the current block and the coordinate offset information of the slice identification information where the current block is located relative to the origin.
  • the reference block of the current block can be determined based on the fact that the absolute coordinate information of the candidate reference block is the same as the absolute coordinate information of the current block and the transformation layer information where the candidate reference block is located is the same as the current layer information where the current block is located, or the reference block of the current block can be determined based on the fact that the absolute coordinate information of the parent transformation block of the candidate reference block is the same as the absolute coordinate information of the parent transformation block of the current block and the transformation layer information where the candidate reference block is located is the same as the current layer information where the current block is located, and so on. No limitation is made here.
  • the method may further include: determining geometric information of the current slice; performing octree decomposition on the geometric information of the current slice to determine at least one transformation layer; wherein the at least one transformation layer includes the current layer where the current block is located.
  • the slice identification information (sliceID) of the current block can be used to indicate the current slice.
  • the current slice is determined by the slice division of the current point cloud.
  • the geometric information of the current slice is decomposed into an octree, and the transformation is performed based on the hierarchical structure of the octree. Specifically, the RAHT transformation is performed from bottom to top, starting from the leaf nodes of the octree: the nodes in each layer are transformed along the three dimensions x, y, and z until the root node of the octree is reached.
  • the method may further include: when the current layer is the (L-1)-th layer in the at least one transformation layer, determining the low-frequency coefficient values of the nodes in the L-th layer, and performing the region-adaptive hierarchical transform based on the low-frequency coefficient values of the nodes in the L-th layer to determine the low-frequency coefficient values and the original high-frequency coefficient values of the nodes in the (L-1)-th layer, until the low-frequency coefficient value and the original high-frequency coefficient values of the root node of the current point cloud are determined; wherein L is a positive integer.
  • determining the transform coefficient residual value of the current block based on the transform coefficient prediction value of the current block may include: determining the original high-frequency coefficient values of the current block in the current layer; and determining the high-frequency coefficient residual value of the current block based on the original high-frequency coefficient values of the current block and the high-frequency coefficient prediction value of the current block.
  • the method further includes: encoding the high-frequency coefficient residual value of the current block and writing the resulting coded bits into the bitstream.
  • the method may further include: encoding the low-frequency coefficient value of the root node and writing the obtained encoding bits into the bitstream.
  • the low-frequency coefficient value of the root node and the high-frequency coefficient residual values of each layer can be written into the bitstream, so that the RAHT inverse transform can be implemented at the decoding end to restore the reconstructed attribute information of the current point cloud.
  • using the attribute information of the point cloud, the DC coefficients obtained after transforming the nodes in one layer (layer L) are passed to the nodes in the next layer (layer L-1) for further transformation.
  • the transformation matrix will be updated as the weights corresponding to each point change adaptively.
  • the above process will be continuously iterated and updated according to the division structure of the octree until the root node of the octree.
  • the DC coefficient of the root node and all AC coefficients are collectively referred to as transformation coefficients.
  • g′ L,2x,y,z and g′ L,2x+1,y,z are the attribute DC coefficients of two neighboring nodes in layer L.
  • the information in layer L-1 is the AC coefficient f′ L-1,x,y,z and the DC coefficient g′ L-1,x,y,z .
  • f′ L-1,x,y,z is directly quantized and encoded without further transformation.
  • g′ L-1,x,y,z will continue to search for a neighboring node for transformation. If no neighboring node is found, it is passed directly to layer L-2.
  • the RAHT transform is only effective for nodes with neighboring nodes; nodes without neighboring nodes are directly passed to the previous layer.
  • the weights (the number of non-empty child nodes in the node) corresponding to g′ L,2x,y,z and g′ L,2x+1,y,z are w′ L,2x,y,z and w′ L,2x+1,y,z (abbreviated as w′ 0 and w′ 1 ) respectively, and the weight of g′ L-1,x,y,z is w′ L-1,x,y,z .
  • the RAHT transformation formula is:

    g′ L-1,x,y,z = ( √w′ 0 · g′ L,2x,y,z + √w′ 1 · g′ L,2x+1,y,z ) / √(w′ 0 + w′ 1 )
    f′ L-1,x,y,z = ( −√w′ 1 · g′ L,2x,y,z + √w′ 0 · g′ L,2x+1,y,z ) / √(w′ 0 + w′ 1 )

    where the weight of the parent node is w′ L-1,x,y,z = w′ 0 + w′ 1 .
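As a sketch, the standard RAHT butterfly for one pair of neighboring nodes, together with its inverse, can be written as:

```python
import math

def raht_pair(g0, g1, w0, w1):
    """Combine the DC coefficients g0, g1 of two neighboring nodes, with
    weights w0, w1, into the parent DC coefficient g and the AC
    coefficient f; the parent's weight is w0 + w1."""
    s = math.sqrt(w0 + w1)
    g = (math.sqrt(w0) * g0 + math.sqrt(w1) * g1) / s
    f = (-math.sqrt(w1) * g0 + math.sqrt(w0) * g1) / s
    return g, f, w0 + w1

def raht_pair_inverse(g, f, w0, w1):
    """Recover the two child DC coefficients from the parent's g and f."""
    s = math.sqrt(w0 + w1)
    g0 = (math.sqrt(w0) * g - math.sqrt(w1) * f) / s
    g1 = (math.sqrt(w1) * g + math.sqrt(w0) * f) / s
    return g0, g1
```

Because the transform matrix is orthonormal for any weight pair, the inverse recovers the children exactly; this is the per-pair step that the octree-driven iteration applies layer by layer.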
  • the transform coefficients of the current block can be determined. Then, based on the original values of the transform coefficients of the current block and the predicted values of the transform coefficients of the current block, the transform coefficient residual values of the current block can be calculated.
  • the embodiment of the present application provides a coding method, specifically a coding scheme for fast inter-frame attribute transformation of point clouds.
  • when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain, the transform coefficient prediction value of the current block is determined based on the reference block of the current block; the transform coefficient residual value of the current block is determined based on the transform coefficient prediction value of the current block; and the transform coefficient residual value of the current block is encoded, with the obtained coded bits written into the bitstream.
  • the transform coefficient prediction value of the current block can be determined based on the reference block of the current block.
  • the reference block can be transformed based on slices divided by the reference point cloud, rather than based on the entire frame of the reference point cloud, thereby reducing time complexity; and the transform coefficients of the reference block can be stored in a memory module, so that when the transform coefficients of the reference block are needed, they can be directly obtained from the memory module, further reducing time complexity, thereby improving the attribute encoding efficiency of the point cloud, and thus improving the encoding performance of the point cloud.
  • this technical solution proposes caching the decoded geometric information and transform coefficients of possible reference point clouds in a memory module (buffer) for later use as reference frames. This reduces time complexity by avoiding the need to repeatedly perform octree decomposition and RAHT transformation on the reference frame, while also retaining information from multiple RAHT transform layers of the reference frame for inter-frame prediction.
  • Figure 15 is a schematic diagram of the logical structure of an inter-frame prediction transformation provided by an embodiment of the present application.
  • the reference point cloud can be divided into at least two slices, so that during RAHT transformation, the transformation can be performed based on the divided slices rather than the entire frame, reducing time complexity and retaining information from multiple RAHT transformation layers of the reference frame for inter-frame prediction.
  • the process at the decoding end is as follows:
  • the geometric information of the decoded point cloud is decomposed into an octree and transformed based on the hierarchical structure of the octree.
  • the nodes in each layer are inversely transformed from the x, y, and z dimensions until they are inversely transformed to the leaf nodes of the octree.
  • the decoded quantized transform coefficient residual value is inversely quantized to obtain a transform coefficient residual value.
  • the transform coefficient can be obtained by adding the transform coefficient residual value to the transform coefficient prediction value.
  • the position of the current block during transformation is a relative geometric position (i.e., the aforementioned "coordinate information").
  • the absolute geometric position of the current block in the current frame can be calculated. Specifically, the absolute geometric position of the current block is equal to the relative geometric position of the current transformed block plus the geometric position offset of the slice to which the current block belongs relative to the origin;
  • Find inter-frame reference blocks: for each transform block in the RAHT transform of the current frame, find the inter-frame prediction transform block of the current layer in the memory module. Taking the current block as an example, find as the reference block a transform block in the reference frame whose absolute geometric position is the same as the absolute geometric position of the current block and whose layer number is the same as that of the current block; or a transform block in the reference frame whose absolute geometric position is the same as the absolute geometric position of the parent transform block of the current block and whose layer number is the same as that of the current block; or a transform block in the reference frame whose geometric position, layer number, and sliceID are the same as those of the current block; or a transform block in the reference frame whose geometric position is the same as the geometric position of the parent transform block of the current block and whose layer number and sliceID are the same as those of the current block.
  • the transform coefficient corresponding to the inter-frame reference block in the memory module can be used as the transform coefficient prediction value of the current block.
  • the intra-frame prediction value is used as the transformation coefficient prediction value of the current block, wherein the intra-frame prediction value is obtained by weighted prediction of the neighboring blocks of the current block, the parent transformation block and the neighboring blocks of the parent transformation block.
  • the currently decoded frame may be used as a reference frame for subsequent frames
  • at least one of the transform coefficient, absolute geometric position, geometric position, sliceID, and number of RAHT transform layers of the current block may be stored in the memory module as inter-frame prediction reference information for subsequent frames; otherwise, the RAHT inverse transform may be performed directly.
  • the attributes of the point cloud are inversely transformed along the octree decomposition from top to bottom, and the nodes in each layer are inversely transformed from the three dimensions of x, y, and z until they are transformed to the leaf nodes of the octree.
  • the DC coefficient and AC coefficient of the nodes in the same layer are inversely transformed to obtain the DC coefficient of the next layer (Lth layer).
  • the transformation matrix will be updated as the weights corresponding to each point change adaptively.
  • the above process will be iteratively updated according to the division structure of the octree until the leaf nodes of the octree.
  • the quantized transform coefficient residual value can be obtained, and then the quantized transform coefficient residual value is dequantized to obtain the transform coefficient residual value of the point cloud to be decoded;
  • the transform coefficient prediction value of the point cloud to be decoded can be determined according to RAHT inter-frame prediction or RAHT intra-frame prediction; then, the transform coefficient of the point cloud to be decoded is obtained by performing an addition operation on the transform coefficient residual value and the transform coefficient prediction value; the transform coefficient is saved in the memory module, so that when performing inter-frame prediction later, the transform coefficient of the reference block can be directly obtained from the memory module; finally, the transform coefficient is subjected to RAHT inverse transformation to reconstruct the point cloud attributes.
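Taken together, the decoder-side coefficient reconstruction described above amounts to: dequantize the decoded residual, add the prediction, and cache the result for later inter-frame reference. A minimal sketch, assuming uniform scalar dequantization and a dict-based memory module:

```python
# Sketch of the decoder-side reconstruction described above. Uniform
# scalar dequantization (multiply by the quantization step) and the
# dict memory module are illustrative assumptions.

def reconstruct_coefficient(quantized_residual, prediction, qstep, memory, key):
    residual = quantized_residual * qstep   # inverse quantization
    coefficient = prediction + residual     # add transform coefficient prediction
    memory[key] = coefficient               # cache for inter-frame prediction
    return coefficient
```

The cached coefficient is what a later frame's reference-block search would retrieve, so it must be the reconstructed value, not the original.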
  • the process of the encoding end is as follows:
  • the geometric information of the point cloud to be encoded is input and the octree decomposition is performed to obtain the RAHT transformation layer.
  • the nodes are transformed in the x, y, and z dimensions until they reach the root node of the octree.
  • the DC coefficient obtained after the transformation of the nodes in the same layer (layer L) is passed to the nodes in the next layer (layer L-1) to continue the transformation.
  • the transformation matrix will be updated as the weights corresponding to each point change adaptively.
  • the above process will be continuously iterated and updated according to the partitioning structure of the octree until the root node of the octree.
  • the DC coefficient of the root node and all AC coefficients are collectively referred to as the transformation coefficient;
  • the absolute geometric position of the current block relative to the origin can be calculated. This is equal to the relative geometric position of the current block plus the geometric position offset of the slice to which the current block belongs relative to the origin.
  • g′ L,2x,y,z and g′ L,2x+1,y,z are the attribute DC coefficients of two neighboring nodes in layer L.
  • the information in layer L-1 is the AC coefficient f′ L-1,x,y,z and the DC coefficient g′ L-1,x,y,z .
  • f′ L-1,x,y,z is no longer transformed and is directly quantized and encoded.
  • g′ L-1,x,y,z will continue to search for neighboring nodes for transformation.
  • the RAHT transform is only effective for nodes with neighboring nodes; nodes without neighboring nodes are directly passed to the previous layer.
  • the weights (the number of non-empty child nodes within the node) corresponding to g′ L,2x,y,z and g′ L,2x+1,y,z are w′ L,2x,y,z and w′ L,2x+1,y,z (abbreviated as w′ 0 and w′ 1 ), respectively, and the weight of g′ L-1,x,y,z is w′ L-1,x,y,z . Therefore, the calculation formulas for the inverse RAHT transform are shown in Equations (5) and (6). Finally, the transform coefficients can be obtained.
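Equations (5) and (6) are not reproduced in this text. The sketch below uses the standard orthonormal weight-adaptive two-point butterfly commonly given for RAHT, which is assumed (not confirmed by the text) to match those equations:

```python
import math

# Sketch of the weight-adaptive RAHT two-point butterfly. The
# orthonormal form below is the one commonly used in G-PCC
# descriptions; it is an assumption standing in for Equations (5)/(6).

def raht_forward(g0, g1, w0, w1):
    """Merge two neighbor DC coefficients into one DC and one AC coefficient."""
    a, b = math.sqrt(w0), math.sqrt(w1)
    n = math.sqrt(w0 + w1)
    dc = (a * g0 + b * g1) / n    # passed up to the next layer
    ac = (-b * g0 + a * g1) / n   # quantized and encoded directly
    return dc, ac

def raht_inverse(dc, ac, w0, w1):
    """Invert the butterfly; the matrix is orthonormal, so use its transpose."""
    a, b = math.sqrt(w0), math.sqrt(w1)
    n = math.sqrt(w0 + w1)
    g0 = (a * dc - b * ac) / n
    g1 = (b * dc + a * ac) / n
    return g0, g1
```

Because the 2x2 matrix is a rotation scaled by the merged weight, the inverse is simply its transpose, and the merged node's weight is w′ 0 + w′ 1 .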
  • the transform coefficient prediction value is subtracted from the transform coefficient to obtain the transform coefficient residual value.
  • the position of the current block during transformation is a relative geometric position (i.e., the aforementioned "coordinate information").
  • the absolute geometric position of the current block in the current frame can be calculated. Specifically, the absolute geometric position of the current block is equal to the relative geometric position of the current transformed block plus the geometric position offset of the slice to which the current block belongs relative to the origin;
  • Find inter-frame reference blocks: for each transform block in the RAHT transform of the current frame, find the inter-frame prediction transform block of the current layer in the memory module. Taking the current block as an example, a transform block in the reference frame is selected as the reference block if it satisfies one of the following: (a) its absolute geometric position and layer number are the same as those of the current block; (b) its absolute geometric position is the same as that of the parent transform block of the current block, and its layer number is the same as that of the current block; (c) its geometric position, layer number, and sliceID are the same as those of the current block; or (d) its geometric position is the same as that of the parent transform block of the current block, and its layer number and sliceID are the same as those of the current block.
  • the transform coefficient corresponding to the inter-frame reference block in the memory module can be used as the transform coefficient prediction value of the current block.
  • the intra-frame prediction value is used as the transformation coefficient prediction value of the current block, wherein the intra-frame prediction value is obtained by weighted prediction of the neighboring blocks of the current block, the parent transformation block and the neighboring blocks of the parent transformation block.
  • the transform coefficient residual value is quantized to obtain a quantized transform coefficient residual value.
  • the quantized transform coefficient residual value is inversely quantized to obtain a reconstructed transform coefficient residual value.
  • the reconstructed transform coefficient residual value is added to the transform coefficient prediction value to obtain the reconstructed transform coefficient. If the current frame may be used as a reference frame for subsequent frames after encoding, at least one of the reconstructed transform coefficient, absolute geometric position, geometric position, sliceID, and the number of layers of the RAHT transform layer where the current block is located can be stored in the memory module as the inter-frame prediction reference information for subsequent frames; otherwise, the inverse RAHT transform is performed directly.
  • the reconstructed point cloud attributes can be obtained.
  • the geometric information of the point cloud to be encoded is decomposed into a point cloud geometry octree, and the attribute information of the point cloud to be encoded is subjected to a RAHT transform to determine the transform coefficients of the point cloud to be encoded.
  • the transform coefficient prediction value of the point cloud to be encoded can be determined based on RAHT inter-frame prediction or RAHT intra-frame prediction.
  • the transform coefficient residual value is obtained by subtracting the transform coefficient of the point cloud to be encoded from the transform coefficient prediction value.
  • the transform coefficient residual value is then quantized to obtain the quantized transform coefficient residual value.
  • the quantized transform coefficient residual value can be encoded and written into the attribute code stream.
  • the quantized transform coefficient residual value can be dequantized to obtain the reconstructed transform coefficient residual value.
  • the reconstructed transform coefficient residual value is then added to the transform coefficient prediction value to obtain the reconstructed transform coefficient.
  • the transform coefficient is saved in a memory module, so that when performing inter-frame prediction later, the transform coefficient of the reference block can be directly obtained from the memory module.
  • the reconstructed transform coefficient is subjected to a RAHT inverse transform to obtain the reconstructed point cloud attributes.
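The encoder-side flow above can be sketched as follows. Uniform scalar quantization with rounding and the dict-based memory module are illustrative assumptions; the essential point is that the encoder reconstructs the coefficient exactly as the decoder will, so both sides cache identical reference values:

```python
# Sketch of the encoder-side transform-domain flow described above:
# residual = coefficient - prediction, quantize, then locally
# reconstruct the coefficient the same way the decoder does, and cache
# it for inter-frame prediction of later frames. Uniform scalar
# quantization is an assumption made for illustration.

def encode_coefficient(coefficient, prediction, qstep, memory, key):
    residual = coefficient - prediction
    quantized = round(residual / qstep)      # value written to the bitstream
    recon_residual = quantized * qstep       # inverse quantization
    recon_coefficient = prediction + recon_residual
    memory[key] = recon_coefficient          # reference for subsequent frames
    return quantized, recon_coefficient
```

Storing the reconstructed (rather than original) coefficient avoids drift between encoder and decoder predictions.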
  • Table 1 shows the performance results of the embodiment of the present application, based on the latest tmc13v24, under the lossy geometry, lossy attribute test conditions.
  • FIG22 is a schematic diagram of the composition structure of an encoder provided in an embodiment of the present application.
  • the encoder 220 may include a first determination unit 2201, a first prediction unit 2202, and an encoding unit 2203, wherein:
  • a first determining unit 2201 is configured to determine prediction mode identification information of a current block
  • the first prediction unit 2202 is configured to determine a transform coefficient prediction value of the current block according to a reference block of the current block when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain;
  • the first determining unit 2201 is further configured to determine a transform coefficient residual value of the current block according to the transform coefficient prediction value of the current block;
  • the encoding unit 2203 is configured to perform encoding processing on the transform coefficient residual value of the current block and write the obtained encoding bits into the bitstream.
  • the first determining unit 2201 is further configured to determine a reference block of the current block according to the slice identification information in the reference point cloud.
  • the first determining unit 2201 is further configured to determine a predicted value of a transform coefficient of the current block according to the transform coefficient of the reference block.
  • the first determination unit 2201 is further configured to determine the attribute information of the reference slice based on the reference slice indicated by the slice identification information; perform region-adaptive hierarchical transformation on the attribute information of the reference slice to determine the transformation coefficients of at least one transformation layer; and determine the transformation coefficients of the reference block from the transformation coefficients of at least one transformation layer.
  • the first determining unit 2201 is further configured to determine a transform coefficient residual value of a reference block; quantize the transform coefficient residual value of the reference block to determine a quantized coefficient residual value of the reference block; dequantize the quantized coefficient residual value of the reference block to determine a reconstructed transform coefficient residual value of the reference block; and determine the transform coefficients of the reference block based on the transform coefficient prediction value of the reference block and the reconstructed transform coefficient residual value of the reference block.
  • the encoder 220 further includes a first saving unit 2204; the first determining unit 2201 is further configured to determine storage information of the reference block, wherein the storage information of the reference block includes at least one of the following: transformation coefficients of the reference block, coordinate information of the reference block, slice identification information of the reference block, and transformation layer information of the reference block; the first saving unit 2204 is configured to save the storage information of the reference block into the memory module.
  • the first determining unit 2201 is further configured to determine the transformation coefficient of the reference block according to the storage information corresponding to the reference block in the memory module.
  • the first determination unit 2201 is further configured such that the coordinate information of the reference block is the same as the coordinate information of the current block, the slice identification information of the reference block is the same as the slice identification information of the current block, and the transformation layer information of the reference block is the same as the current layer information of the current block.
  • the first determination unit 2201 is further configured such that the coordinate information of the parent transform block of the reference block is the same as the coordinate information of the parent transform block of the current block, the slice identification information of the reference block is the same as the slice identification information of the current block, and the transform layer information of the reference block is the same as the current layer information of the current block.
  • the first determination unit 2201 is further configured to determine the coordinate information of the reference block; and determine the absolute coordinate information of the reference block based on the coordinate information of the reference block and the coordinate offset information of the slice identification information where the reference block is located relative to the origin.
  • the first determining unit 2201 is further configured to determine that the absolute coordinate information of the reference block is the same as the absolute coordinate information of the current block and the transform layer information of the reference block is the same as the current layer information of the current block.
  • the first determination unit 2201 is further configured such that the absolute coordinate information of the parent transform block of the reference block is the same as the absolute coordinate information of the parent transform block of the current block and the transform layer information where the reference block is located is the same as the current layer information where the current block is located.
  • the first determination unit 2201 is further configured to determine that the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain if there is a reference block in the reference point cloud that satisfies the preset inter-frame condition with the current block; if there is no reference block in the reference point cloud that satisfies the preset inter-frame condition with the current block, determine that the prediction mode identification information indicates that the current block performs intra-frame prediction in the transform domain.
  • the first prediction unit 2202 is further configured to determine the transform coefficient prediction values of the neighboring blocks of the current block, the transform coefficient prediction values of the parent transform block of the current block, and the transform coefficient prediction values of the neighboring blocks of the parent transform block when the prediction mode identification information indicates that the current block performs intra-frame prediction in the transform domain; and perform weighted operations on the transform coefficient prediction values of the neighboring blocks of the current block, the transform coefficient prediction values of the parent transform block of the current block, and the transform coefficient prediction values of the neighboring blocks of the parent transform block to determine the transform coefficient prediction value of the current block.
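The weighted intra-frame prediction described above can be sketched as a normalized weighted sum over the current block's neighbors, its parent transform block, and the parent's neighbors. The weight values `w_nbr`, `w_parent`, and `w_pnbr` below are illustrative assumptions, not constants from this application:

```python
# Sketch of the intra-frame transform-coefficient prediction described
# above: a normalized weighted sum over three groups of predictors.
# The weight values are hypothetical placeholders.

def intra_predict(neighbors, parent, parent_neighbors,
                  w_nbr=2.0, w_parent=1.0, w_pnbr=1.0):
    terms = [(w_nbr, v) for v in neighbors]       # neighbors of current block
    terms.append((w_parent, parent))              # parent transform block
    terms += [(w_pnbr, v) for v in parent_neighbors]  # parent's neighbors
    total_w = sum(w for w, _ in terms)
    return sum(w * v for w, v in terms) / total_w
```

When the current block has no available neighbors, the prediction degenerates to the parent transform block's value.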
  • the first determination unit 2201 is further configured to perform inverse quantization on the quantization coefficient residual value of the current block to determine the reconstructed transform coefficient residual value of the current block; and determine the transform coefficient of the current block based on the transform coefficient prediction value of the current block and the reconstructed transform coefficient residual value of the current block.
  • the first determination unit 2201 is further configured to determine the storage information of the current block, wherein the storage information of the current block includes at least one of the following: the transformation coefficient of the current block, the coordinate information of the current block, the slice identification information where the current block is located, and the current layer information where the current block is located; the first saving unit 2204 is further configured to save the storage information of the current block into the memory module.
  • the first saving unit 2204 is further configured to save the storage information of the current block into the memory module when the current point cloud where the current block is located meets the conditions for being used as a reference point cloud.
  • the first determination unit 2201 is further configured to determine the geometric information of the current slice; wherein the current slice is determined by slice division of the current point cloud; and perform octree decomposition on the geometric information of the current slice to determine at least one transformation layer; wherein the at least one transformation layer includes the current layer where the current block is located.
  • the first determination unit 2201 is further configured to determine the low-frequency coefficient values of the nodes in the Lth layer when the current layer is the L-1th layer in at least one transformation layer, and perform regional adaptive hierarchical transformation based on the low-frequency coefficient values of the nodes in the Lth layer to determine the low-frequency coefficient values and the original values of the high-frequency coefficients of the nodes in the L-1th layer until the low-frequency coefficient values and the original values of the high-frequency coefficients of the root node of the current point cloud are determined; wherein L is a positive integer.
  • the encoding unit 2203 is further configured to encode the low-frequency coefficient value of the root node of the current point cloud and write the obtained encoding bits into the bitstream.
  • the first determination unit 2201 is further configured to determine the original value of the high-frequency coefficient of the current block; and determine the residual value of the high-frequency coefficient of the current block based on the original value of the high-frequency coefficient of the current block and the predicted value of the high-frequency coefficient of the current block; the encoding unit 2203 is further configured to encode the residual value of the high-frequency coefficient of the current block and write the obtained encoding bits into the bitstream.
  • a "unit" can be a portion of a circuit, a portion of a processor, a portion of a program or software, etc., and of course it can also be a module, or it can be non-modular.
  • the various components in this embodiment can be integrated into a processing unit, or each unit can exist physically separately, or two or more units can be integrated into a single unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional modules.
  • FIG23 is a schematic diagram of a specific hardware structure of an encoder provided in an embodiment of the present application.
  • the encoder 220 may include: a first communication interface 2301, a first memory 2302, and a first processor 2303; the components are coupled together via a first bus system 2304. It is understood that the first bus system 2304 is used to realize the connection and communication between these components.
  • the first bus system 2304 also includes a power bus, a control bus, and a status signal bus. However, for the sake of clarity, in FIG23, various buses are labeled as the first bus system 2304.
  • the first communication interface 2301 is used to receive and send signals when sending and receiving information with other external network elements;
  • a first memory 2302 is used to store computer programs that can be run on the first processor 2303;
  • the first processor 2303 is configured to, when running the computer program, execute:
  • determining prediction mode identification information of the current block; when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain, determining a transform coefficient prediction value of the current block based on a reference block of the current block; and determining a transform coefficient residual value of the current block based on the transform coefficient prediction value of the current block;
  • the transform coefficient residual value of the current block is coded, and the obtained coded bits are written into the bitstream.
  • the first memory 2302 in the embodiment of the present application can be a volatile memory or a non-volatile memory, or can include both volatile and non-volatile memories.
  • the non-volatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory can be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate synchronous DRAM (DDR SDRAM), enhanced synchronous DRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
  • the first processor 2303 may be an integrated circuit chip with signal processing capabilities. During implementation, each step of the above method can be completed by the hardware integrated logic circuit in the first processor 2303 or by instructions in the form of software.
  • the above-mentioned first processor 2303 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components.
  • the various methods, steps, and logic block diagrams disclosed in the embodiments of this application can be implemented or executed.
  • the general-purpose processor can be a microprocessor or any conventional processor.
  • the steps of the method disclosed in conjunction with the embodiments of this application can be directly embodied as being executed by a hardware decoding processor, or can be executed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, registers, etc.
  • the storage medium is located in the first memory 2302 , and the first processor 2303 reads the information in the first memory 2302 and completes the steps of the above method in combination with its hardware.
  • the first processor 2303 is further configured to execute the method described in any one of the aforementioned embodiments when running the computer program.
  • the second determining unit 2401 is configured to determine prediction mode identification information of the current block
  • the second prediction unit 2402 is configured to determine a transform coefficient prediction value of the current block according to a reference block of the current block when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain;
  • the decoding unit 2403 is configured to decode the code stream and determine the transform coefficient residual value of the current block
  • the second determining unit 2401 is further configured to determine the transform coefficient of the current block according to the transform coefficient prediction value of the current block and the transform coefficient residual value of the current block.
  • the second determining unit 2401 is further configured to determine a reference block of the current block according to the slice identification information in the reference point cloud.
  • the second determining unit 2401 is further configured to determine a predicted value of a transform coefficient of the current block according to the transform coefficient of the reference block.
  • the second determination unit 2401 is further configured to determine the attribute information of the reference slice based on the reference slice indicated by the slice identification information; perform region-adaptive hierarchical transformation on the attribute information of the reference slice to determine the transformation coefficients of at least one transformation layer; and determine the transformation coefficients of the reference block from the transformation coefficients of at least one transformation layer.
  • the decoding unit 2403 is further configured to decode the code stream and determine the transform coefficient residual value of the reference block; the second determination unit 2401 is further configured to determine the transform coefficient of the reference block based on the transform coefficient prediction value of the reference block and the transform coefficient residual value of the reference block.
  • the decoding unit 2403 is further configured to decode the code stream to determine the transform coefficients of the reference block.
  • the decoder 240 further includes a second saving unit 2404; the second determination unit 2401 is further configured to determine storage information of the reference block, wherein the storage information of the reference block includes at least one of the following: transformation coefficients of the reference block, coordinate information of the reference block, slice identification information of the reference block, and transformation layer information of the reference block; the second saving unit 2404 is configured to save the storage information of the reference block into the memory module.
  • the second determining unit 2401 is further configured to determine the transformation coefficient of the reference block according to the storage information corresponding to the reference block in the memory module.
  • the second determination unit 2401 is further configured such that the coordinate information of the reference block is the same as the coordinate information of the current block, the slice identification information where the reference block is located is the same as the slice identification information where the current block is located, and the transformation layer information where the reference block is located is the same as the current layer information where the current block is located.
  • the second determination unit 2401 is further configured such that the coordinate information of the parent transform block of the reference block is the same as the coordinate information of the parent transform block of the current block, the slice identification information of the reference block is the same as the slice identification information of the current block, and the transform layer information of the reference block is the same as the current layer information of the current block.
  • the second determination unit 2401 is further configured to determine the coordinate information of the reference block; and determine the absolute coordinate information of the reference block based on the coordinate information of the reference block and the coordinate offset information of the slice identification information where the reference block is located relative to the origin.
  • the second determining unit 2401 is further configured to determine that the absolute coordinate information of the reference block is the same as the absolute coordinate information of the current block and the transform layer information of the reference block is the same as the current layer information of the current block.
  • the second determination unit 2401 is further configured such that the absolute coordinate information of the parent transform block of the reference block is the same as the absolute coordinate information of the parent transform block of the current block and the transform layer information of the reference block is the same as the current layer information of the current block.
  • the second determination unit 2401 is further configured to determine that the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain if there is a reference block in the reference point cloud that satisfies the preset inter-frame condition with the current block; if there is no reference block in the reference point cloud that satisfies the preset inter-frame condition with the current block, determine that the prediction mode identification information indicates that the current block performs intra-frame prediction in the transform domain.
  • the second prediction unit 2402 is further configured to determine the transform coefficient prediction values of the neighboring blocks of the current block, the transform coefficient prediction values of the parent transform block of the current block, and the transform coefficient prediction values of the neighboring blocks of the parent transform block when the prediction mode identification information indicates that the current block performs intra-frame prediction in the transform domain; and perform weighted operations on the transform coefficient prediction values of the neighboring blocks of the current block, the transform coefficient prediction values of the parent transform block of the current block, and the transform coefficient prediction values of the neighboring blocks of the parent transform block to determine the transform coefficient prediction value of the current block.
  • the second determination unit 2401 is further configured to determine the storage information of the current block, wherein the storage information of the current block includes at least one of the following: the transformation coefficient of the current block, the coordinate information of the current block, the slice identification information where the current block is located, and the current layer information where the current block is located; the second saving unit 2404 is further configured to save the storage information of the current block into the memory module.
  • the second saving unit 2404 is configured to save the storage information of the current block into the memory module when the current point cloud where the current block is located meets the conditions for being used as a reference point cloud.
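A minimal sketch of such a memory module follows, assuming a lookup key built from the slice identification, layer, and coordinate information listed above; the key layout and the class name `CoeffMemory` are hypothetical.

```python
class CoeffMemory:
    """Toy memory module: stores a block's transform coefficients keyed by
    (slice id, layer, block coordinates).  Direct retrieval avoids
    re-transforming the reference block when it is later referenced."""

    def __init__(self):
        self._store = {}

    def save(self, slice_id, layer, coords, coeffs):
        # Persist the storage information of a block from a reference point cloud.
        self._store[(slice_id, layer, tuple(coords))] = list(coeffs)

    def lookup(self, slice_id, layer, coords):
        # Returns None when the block was never saved (e.g. its point cloud
        # did not qualify as a reference point cloud).
        return self._store.get((slice_id, layer, tuple(coords)))
```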
  • the decoding unit 2403 is further configured to decode the code stream to determine the quantization coefficient residual value of the current block; the second determination unit 2401 is further configured to dequantize the quantization coefficient residual value to determine the transform coefficient residual value of the current block.
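The dequantization step above can be sketched as a uniform inverse scaling; a single flat quantization step is an illustrative simplification of whatever quantization scheme an implementation actually uses.

```python
def dequantize(q_residual, q_step):
    """Uniform inverse quantization: scale each decoded quantization level
    back to a transform-coefficient residual value."""
    return [level * q_step for level in q_residual]
```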
  • the second determination unit 2401 is further configured to determine the geometric information of the current slice; wherein the current slice is determined by slice division of the current point cloud; and perform octree decomposition on the geometric information of the current slice to determine at least one transformation layer; wherein the at least one transformation layer includes the current layer where the current block is located.
  • the second determination unit 2401 is further configured to perform a region-adaptive hierarchical inverse transform on the reconstructed transform coefficients of the current block in the (L-1)-th layer when the current layer is the (L-1)-th layer of the at least one transform layer, to determine the reconstructed attribute information of the nodes in the L-th layer; where L is a positive integer.
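One inverse butterfly of a region-adaptive hierarchical transform can be sketched as follows, using the common orthonormal RAHT formulation with node weights w1 and w2. This formulation is an assumption for illustration, not necessarily the exact transform defined by this application.

```python
import math

def raht_inverse_pair(dc, ac, w1, w2):
    """Invert one RAHT butterfly: recover the two child attribute values
    from a low-pass (dc) and high-pass (ac) coefficient, given the
    occupancy weights w1 and w2 of the two child nodes."""
    a = math.sqrt(w1 / (w1 + w2))
    b = math.sqrt(w2 / (w1 + w2))
    # Inverse (transpose) of the orthonormal 2x2 rotation [[a, b], [-b, a]].
    c1 = a * dc - b * ac
    c2 = b * dc + a * ac
    return c1, c2
```

Applying this to the forward-transformed pair of two equally weighted children recovers the original attribute values exactly.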
  • a "unit” may be a part of a circuit, a part of a processor, a part of a program or software, etc. It can be modular or non-modular. Moreover, the various components in this embodiment can be integrated into a single processing unit, or each unit can exist physically separately, or two or more units can be integrated into a single unit. The aforementioned integrated units can be implemented in the form of hardware or software functional modules.
  • FIG25 is a schematic diagram of the specific hardware structure of a decoder provided in an embodiment of the present application.
  • the decoder 240 may include: a second communication interface 2501, a second memory 2502, and a second processor 2503; each component is coupled together through a second bus system 2504.
  • the second bus system 2504 is used to achieve connection and communication between these components.
  • the second bus system 2504 also includes a power bus, a control bus, and a status signal bus.
  • various buses are labeled as the second bus system 2504 in FIG25.
  • the second communication interface 2501 is used to receive and send signals during the process of sending and receiving information between other external network elements;
  • the second memory 2502 is used to store computer programs that can be run on the second processor 2503;
  • the second processor 2503 is configured to, when running the computer program, execute:
  • Determine prediction mode identification information of the current block; when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain, determine a transform coefficient prediction value of the current block based on a reference block of the current block; decode the code stream to determine a transform coefficient residual value of the current block; and determine a transform coefficient of the current block based on the transform coefficient prediction value of the current block and the transform coefficient residual value of the current block.
  • the second processor 2503 is further configured to execute any one of the methods described in the foregoing embodiments when running the computer program.
  • the hardware functions of the second memory 2502 are similar to those of the first memory 2302, and the hardware functions of the second processor 2503 are similar to those of the first processor 2303; they will not be described in detail here.
  • This embodiment provides a decoder that, when performing inter-frame prediction in the transform domain on a current block, can determine a predicted value of the transform coefficients of the current block based on a reference block of the current block.
  • the reference block can be transformed based on slices of a reference point cloud, rather than on the entire frame of the reference point cloud, thereby reducing time complexity.
  • the transform coefficients of the reference block can be stored in a memory module so that when the transform coefficients of the reference block are needed, they can be directly retrieved from the memory module, further reducing time complexity. This improves the efficiency of encoding and decoding point cloud attributes, and thus enhances point cloud encoding and decoding performance.
  • FIG26 is a schematic diagram of the structure of a coding and decoding system provided by an embodiment of the present application.
  • the coding and decoding system 260 may include an encoder 2601 and a decoder 2602 .
  • the encoder 2601 may be the encoder described in any one of the aforementioned embodiments.
  • the decoder 2602 may be the decoder described in any one of the aforementioned embodiments.
  • the present application further provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor (eg, the first processor or the second processor), implements the method as described in any of the aforementioned embodiments.
  • embodiments of the present application further provide a computer program product, including a computer program or instructions, which, when executed by a processor (e.g., the first processor or the second processor), implements the method described in any one of the aforementioned embodiments.
  • the embodiments of the present application further provide a computer program, which, when executed by a processor (eg, a first processor or a second processor), implements the method as described in any one of the aforementioned embodiments.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are merely schematic.
  • the division of the units is merely a logical function division.
  • In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of these units may be selected to achieve the purpose of this embodiment according to actual needs.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of this application, or the part thereof that contributes to the existing technology, can be embodied in the form of a software product, which is stored in a storage medium and includes a number of instructions for causing a computer device (which can be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the method described in each embodiment of this application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • the prediction mode identification information of the current block is determined; when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain, the transform coefficient prediction value of the current block is determined based on the reference block of the current block; based on the transform coefficient prediction value of the current block, the transform coefficient residual value of the current block is determined; the transform coefficient residual value of the current block is encoded, and the obtained coded bits are written into the bitstream.
  • the prediction mode identification information of the current block is determined; when the prediction mode identification information indicates that the current block performs inter-frame prediction in the transform domain, the transform coefficient prediction value of the current block is determined based on the reference block of the current block; the bitstream is decoded to determine the transform coefficient residual value of the current block; and the transform coefficient of the current block is determined based on the transform coefficient prediction value of the current block and the transform coefficient residual value of the current block.
  • the transform coefficient prediction value of the current block can be determined based on the reference block of the current block.
  • the reference block can be transformed based on the slices into which the reference point cloud is divided, rather than on the entire frame of the reference point cloud, which reduces time complexity. In addition, the transform coefficients of the reference block can be stored in the memory module, so that when they are needed they can be obtained directly from the memory module, further reducing time complexity. This improves the attribute encoding and decoding efficiency of the point cloud, and thus the overall encoding and decoding performance of the point cloud.
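The encoder/decoder symmetry summarized above (encoder: residual = coefficient - prediction; decoder: coefficient = prediction + residual) can be sketched as a round trip. The function names are illustrative, and entropy coding of the residual is omitted.

```python
def encode_residual(coeffs, pred):
    """Encoder side: transform coefficient residual value =
    actual transform coefficient - transform coefficient prediction value."""
    return [c - p for c, p in zip(coeffs, pred)]

def decode_coeffs(pred, residual):
    """Decoder side: transform coefficient =
    transform coefficient prediction value + decoded residual value."""
    return [p + r for p, r in zip(pred, residual)]
```

Because the decoder mirrors the encoder exactly, feeding the encoder's residual back through `decode_coeffs` with the same prediction reproduces the original coefficients.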

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present application discloses an encoding method, a decoding method, a bitstream, an encoder, a decoder, and a storage medium. The decoding method comprises: determining prediction mode identification information of a current block; when the prediction mode identification information indicates that the current block performs inter-frame prediction in a transform domain, determining a transform coefficient prediction value of the current block on the basis of a reference block of the current block; decoding a bitstream to determine a transform coefficient residual value of the current block; and determining a transform coefficient of the current block on the basis of the transform coefficient prediction value and the transform coefficient residual value of the current block. In this way, time complexity is reduced, and the attribute encoding and decoding efficiency of a point cloud is improved.
PCT/CN2024/087319 2024-04-11 2024-04-11 Procédé de codage, procédé de décodage, flux binaire, codeur, décodeur, et support d'enregistrement Pending WO2025213421A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2024/087319 WO2025213421A1 (fr) 2024-04-11 2024-04-11 Procédé de codage, procédé de décodage, flux binaire, codeur, décodeur, et support d'enregistrement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2024/087319 WO2025213421A1 (fr) 2024-04-11 2024-04-11 Procédé de codage, procédé de décodage, flux binaire, codeur, décodeur, et support d'enregistrement

Publications (1)

Publication Number Publication Date
WO2025213421A1 true WO2025213421A1 (fr) 2025-10-16

Family

ID=97349038

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/087319 Pending WO2025213421A1 (fr) 2024-04-11 2024-04-11 Procédé de codage, procédé de décodage, flux binaire, codeur, décodeur, et support d'enregistrement

Country Status (1)

Country Link
WO (1) WO2025213421A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102792688A (zh) * 2010-02-19 2012-11-21 斯凯普公司 用于视频的数据压缩
CN106998470A (zh) * 2016-01-25 2017-08-01 华为技术有限公司 解码方法、编码方法、解码设备和编码设备
US20180310000A1 (en) * 2015-09-23 2018-10-25 Lg Electronics Inc. Method and apparatus for intra prediction in video coding system
CN114651443A (zh) * 2020-05-29 2022-06-21 Oppo广东移动通信有限公司 帧间预测方法、编码器、解码器以及计算机存储介质
CN115633179A (zh) * 2022-10-12 2023-01-20 山东大学 一种用于实时体积视频流传输的压缩方法


Similar Documents

Publication Publication Date Title
CN118075464A (zh) 点云属性的预测方法、装置及编解码器
WO2024174086A1 (fr) Procédé de décodage, procédé de codage, décodeurs et codeurs
WO2024197680A1 (fr) Procédé et appareil de codage de nuage de points, procédé et appareil de décodage de nuage de points, dispositif et support de stockage
WO2025213421A1 (fr) Procédé de codage, procédé de décodage, flux binaire, codeur, décodeur, et support d'enregistrement
WO2024216476A1 (fr) Procédé de codage/décodage, codeur, décodeur, flux de code, et support de stockage
WO2025138048A1 (fr) Procédé de codage, procédé de décodage, flux de codes, codeur, décodeur et support de stockage
WO2025010601A1 (fr) Procédé de codage, procédé de décodage, codeurs, décodeurs, flux de code et support de stockage
WO2024207456A1 (fr) Procédé de codage et de décodage, codeur, décodeur, flux de code et support de stockage
WO2024234132A9 (fr) Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support d'enregistrement
WO2025213480A1 (fr) Procédé et appareil de codage, procédé et appareil de décodage, codeur de nuage de points, décodeur de nuage de points, flux binaire, dispositif et support de stockage
WO2024148598A1 (fr) Procédé de codage, procédé de décodage, codeur, décodeur et support de stockage
WO2024174092A1 (fr) Procédé de codage/décodage, flux de code, codeur, décodeur et support d'enregistrement
WO2025076659A1 (fr) Procédé de codage de nuage de points, procédé de décodage de nuage de points, flux de code, codeur, décodeur et support de stockage
WO2025007360A1 (fr) Procédé de codage, procédé de décodage, flux binaire, codeur, décodeur et support d'enregistrement
WO2025007355A9 (fr) Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support de stockage
WO2024065406A1 (fr) Procédés de codage et de décodage, train de bits, codeur, décodeur et support de stockage
WO2024216477A1 (fr) Procédés de codage/décodage, codeur, décodeur, flux de code et support de stockage
WO2025010600A9 (fr) Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support de stockage
WO2024216479A9 (fr) Procédé de codage et de décodage, flux de code, codeur, décodeur et support de stockage
WO2025076663A1 (fr) Procédé de codage, procédé de décodage, codeur, décodeur, et support de stockage
WO2025076672A1 (fr) Procédé de codage, procédé de décodage, codeur, décodeur, flux de code, et support de stockage
WO2024212113A1 (fr) Procédé et appareil de codage et de décodage de nuage de points, dispositif et support de stockage
WO2025076668A1 (fr) Procédé de codage, procédé de décodage, codeur, décodeur et support de stockage
WO2025145433A1 (fr) Procédé de codage de nuage de points, procédé de décodage de nuage de points, codec, flux de code et support de stockage
WO2024187380A1 (fr) Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support de stockage

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24934503

Country of ref document: EP

Kind code of ref document: A1