WO2024212043A1 - Encoding method, decoding method, code stream, encoder, decoder and storage medium - Google Patents
- Publication number
- WO2024212043A1 (PCT/CN2023/087303)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- node
- decoded
- geometric
- encoded
- prediction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
Definitions
- the embodiments of the present application relate to the field of point cloud encoding and decoding technology, and in particular, to an encoding and decoding method, a bit stream, an encoder, a decoder, and a storage medium.
- G-PCC refers to geometry-based point cloud compression.
- the geometry coding of G-PCC can be divided into octree-based geometry coding and prediction tree-based geometry coding.
- For the prediction tree-based geometry coding it is necessary to first establish a prediction tree; then traverse each node in the prediction tree, and after determining the prediction mode of each node, predict the geometric position information of the node according to the prediction mode to obtain the prediction residual, and finally encode the parameters such as the prediction mode and prediction residual of each node to generate a binary code stream.
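The prediction-tree coding steps above can be sketched in Python. This is a hedged illustration only: the mode set, the chain-shaped tree, and the mode-selection criterion are simplified stand-ins, not the G-PCC reference implementation.

```python
def predict(parent, mode):
    # Mode 0: no prediction; mode 1: delta from the parent node.
    # (Real G-PCC also has modes using two or three ancestors, omitted here.)
    return (0, 0, 0) if mode == 0 else parent

def encode_prediction_tree(points):
    """points: list of (x, y, z) integer positions, ordered as a chain-shaped tree."""
    stream = []  # (mode, residual) pairs; a real codec entropy-codes these
    parent = (0, 0, 0)
    for p in points:
        # Toy criterion: pick the mode giving the smaller residual magnitude.
        best = min((0, 1),
                   key=lambda m: sum(abs(a - b) for a, b in zip(p, predict(parent, m))))
        residual = tuple(a - b for a, b in zip(p, predict(parent, best)))
        stream.append((best, residual))
        parent = p
    return stream

def decode_prediction_tree(stream):
    """Inverse of encode_prediction_tree: rebuild positions from (mode, residual)."""
    parent, out = (0, 0, 0), []
    for mode, residual in stream:
        p = tuple(a + b for a, b in zip(predict(parent, mode), residual))
        out.append(p)
        parent = p
    return out
```

Running the encoder and then the decoder on any integer point list reconstructs the input exactly, mirroring the lossless geometry path described above.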
- the current node can also use the inter-frame prediction mode to predict the geometric position information of the node.
- in the inter-frame prediction mode, only the prior information of global motion is considered, which leads to a poor prediction effect, reduces the accuracy of inter-frame prediction, and thus reduces the coding and decoding efficiency of geometric information.
- the embodiments of the present application provide a coding and decoding method, a bit stream, an encoder, a decoder and a storage medium, which can improve the accuracy of inter-frame prediction, thereby improving the coding and decoding efficiency of geometric information and improving the coding and decoding performance of point clouds.
- an embodiment of the present application provides a decoding method, which is applied to a decoder, and the method includes:
- an embodiment of the present application provides an encoding method, which is applied to an encoder, and the method includes:
- a geometric prediction value of the node to be encoded is determined.
- an embodiment of the present application provides a code stream, wherein the code stream is generated by bit encoding according to information to be encoded; wherein the information to be encoded includes at least one of the following:
- the geometric prediction residual value, quantization parameter, prediction node index value, first identification information and second identification information of the node to be encoded
- the first identification information is used to indicate whether the node to be encoded uses an inter-frame prediction mode
- the second identification information is used to indicate whether the node to be encoded enables a local motion processing mode
- an embodiment of the present application provides a decoder, the decoder comprising a decoding unit, a first determining unit and a first local motion processing unit; wherein,
- the decoding unit is configured to parse the bitstream, determine the predicted node index value corresponding to the node to be decoded; determine the first decoded node before the node to be decoded in the current frame;
- the first determining unit is configured to determine a prediction node according to the prediction node index value and the first decoded node;
- the first local motion processing unit is configured to perform local motion processing on the first geometric parameter of the prediction node based on the first decoded node to determine the second geometric parameter of the prediction node.
- the first determining unit is further configured to determine the geometric prediction value of the node to be decoded according to the first geometric parameter or the second geometric parameter.
- an embodiment of the present application provides a decoder, the decoder comprising a first memory and a first processor; wherein:
- the first memory is configured to store a computer program that can be executed on the first processor
- the first processor is configured to execute a decoding method on the decoder side when running the computer program.
- an embodiment of the present application provides an encoder, the encoder comprising a second determination unit, a second local motion processing unit and a prediction unit; wherein,
- the second determination unit is configured to determine a first coded node preceding the node to be coded in the current frame; determine a first candidate node having at least one geometric parameter identical to the first coded node in the reference frame, and determine at least one second candidate node in the reference frame according to the first candidate node;
- the second local motion processing unit is configured to perform local motion processing on the geometric parameters of at least one candidate node among the at least one second candidate node to obtain the updated at least one second candidate node;
- the prediction unit is configured to determine a geometric prediction value of the node to be encoded based on the first candidate node and at least one updated second candidate node.
- an embodiment of the present application provides an encoder, the encoder comprising a second memory and a second processor; wherein:
- the second memory is configured to store a computer program that can be executed on the second processor
- the second processor is configured to execute the encoding method on the encoder side when running the computer program.
- an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when the computer program is executed by a first processor, the computer program implements a decoding method on the decoder side, or when the computer program is executed by a second processor, the computer program implements an encoding method on the encoder side.
- An embodiment of the present application provides a coding and decoding method.
- the decoder parses the bit stream to determine the predicted node index value corresponding to the node to be decoded; then, the decoder determines the previous first decoded node of the node to be decoded in the current frame; subsequently, the decoder determines the predicted node based on the predicted node index value and the first decoded node; finally, the decoder performs local motion processing on the first geometric parameter of the predicted node based on the first decoded node to determine the second geometric parameter of the predicted node; and determines the geometric prediction value of the node to be decoded based on the first geometric parameter or the second geometric parameter.
- the encoder determines the first encoded node that is the previous node of the node to be encoded in the current frame; then, the encoder determines a first candidate node that has at least one geometric parameter identical to the first encoded node in the reference frame, and determines at least one second candidate node in the reference frame based on the first candidate node; subsequently, the encoder performs local motion processing on the geometric parameters of at least one candidate node among the at least one second candidate node to obtain at least one updated second candidate node; finally, the encoder determines the geometric prediction value of the node to be encoded based on the first candidate node and the at least one updated second candidate node.
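As a hedged sketch of the encoder-side flow just described: the candidate-selection rules and the local motion processing below are illustrative assumptions (the patent does not fix them at this point in the text), and `local_motion` is a pluggable stand-in function.

```python
def inter_predict(node_to_encode, first_encoded_node, reference_frame, local_motion):
    """Toy version of the four encoder steps described above.
    All nodes are (x, y, z) tuples; reference_frame is a list of such tuples."""
    # 1. First candidate: the reference-frame node sharing the most geometric
    #    parameters with the previously encoded node (assumed selection rule).
    first_candidate = min(
        reference_frame,
        key=lambda n: sum(a != b for a, b in zip(n, first_encoded_node)),
    )
    # 2. Second candidates derived from the first candidate; here, its
    #    reference-frame neighbours within +/-1 on each axis (a stand-in rule).
    second_candidates = [
        n for n in reference_frame
        if n != first_candidate
        and all(abs(a - b) <= 1 for a, b in zip(n, first_candidate))
    ]
    # 3. Local motion processing applied to the second candidates' geometry.
    updated = [local_motion(n) for n in second_candidates]
    # 4. Geometric prediction value: the candidate closest to the node being
    #    encoded (again a toy criterion).
    candidates = [first_candidate] + updated
    return min(candidates,
               key=lambda n: sum(abs(a - b) for a, b in zip(n, node_to_encode)))
```

The decoder-side flow in the previous paragraph is the mirror image: it recovers the prediction node from the parsed index value instead of searching for it.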
- the technical solution of the present application is mainly to optimize the geometric parameters of the prediction nodes used for inter-frame prediction.
- since the second geometric parameters are obtained after the first geometric parameters are processed by local motion, the second geometric parameters carry the prior information of local motion compared with the first geometric parameters, thereby achieving a more refined prediction of the geometric information of the prediction nodes and improving the accuracy of inter-frame prediction.
- the accuracy of the geometric reconstruction of the node to be decoded can be improved, thereby improving the accuracy of inter-frame prediction, and can improve the coding efficiency of the geometric information, thereby improving the encoding and decoding performance of the point cloud.
- FIG1A is a schematic diagram of a three-dimensional point cloud image
- FIG1B is a partial enlarged view of a three-dimensional point cloud image
- FIG2A is a schematic diagram of six viewing angles of a point cloud image
- FIG2B is a schematic diagram of a data storage format corresponding to a point cloud image
- FIG3 is a schematic diagram of a network architecture for point cloud encoding and decoding
- FIG4A is a schematic diagram of a composition framework of a G-PCC encoder
- FIG4B is a schematic diagram of a composition framework of a G-PCC decoder
- FIG5A is a schematic diagram of a low plane position in the Z-axis direction
- FIG5B is a schematic diagram of a high plane position in the Z-axis direction
- FIG6 is a schematic diagram of a node encoding sequence
- FIG7A is a schematic diagram of one type of plane identification information
- FIG7B is a schematic diagram of another type of planar identification information
- FIG8 is a schematic diagram of sibling nodes of a current node
- FIG9 is a schematic diagram of the intersection of a laser radar and a node
- FIG10 is a schematic diagram of neighborhood nodes at the same partition depth and the same coordinates
- FIG11A is a schematic diagram showing a current node being located at a low plane position of a parent node
- FIG11B is another schematic diagram of the current node being located at the low plane position of the parent node
- FIG11C is a schematic diagram of another low plane position of the current node located at the parent node
- FIG12A is a schematic diagram showing a high plane position of a current node located at a parent node
- FIG12B is another schematic diagram of the current node being located at the high plane position of the parent node
- FIG12C is a schematic diagram of another high plane position of the current node located at the parent node
- FIG13 is a schematic diagram of predictive coding of planar position information of a laser radar point cloud
- FIG14 is a schematic diagram of IDCM encoding
- FIG15 is a schematic diagram of coordinate transformation of a rotating laser radar to obtain a point cloud
- FIG16 is a schematic diagram of predictive coding in the X-axis or Y-axis direction
- FIG17A is a schematic diagram showing an angle of predicting an X-plane by using a horizontal azimuth angle
- FIG17B is a schematic diagram showing an angle of the Y plane predicted by the horizontal azimuth angle
- FIG18 is another schematic diagram of predictive coding in the X-axis or Y-axis direction
- FIG19A is a schematic diagram of three intersection points included in a sub-block
- FIG19B is a schematic diagram of a triangular facet set fitted using three intersection points
- FIG19C is a schematic diagram of upsampling of a triangular face set
- FIG20 is a schematic diagram of the structure of a geometric prediction tree inter-frame encoding and decoding
- FIG21 is a schematic diagram of a flow chart of a decoding method provided in an embodiment of the present application.
- FIG22 is a schematic diagram of a flow chart of an encoding method provided in an embodiment of the present application.
- FIG23 is a schematic diagram of the structure of a geometric information inter-frame encoding and decoding provided in an embodiment of the present application.
- FIG24 is a schematic diagram of the structure of another geometric information inter-frame encoding and decoding provided in an embodiment of the present application.
- FIG25 is a schematic diagram of the composition structure of an encoder provided in an embodiment of the present application.
- FIG26 is a schematic diagram of a specific hardware structure of an encoder provided in an embodiment of the present application.
- FIG27 is a schematic diagram of the composition structure of a decoder provided in an embodiment of the present application.
- FIG28 is a schematic diagram of a specific hardware structure of a decoder provided in an embodiment of the present application.
- FIG. 29 is a schematic diagram of the composition structure of a coding and decoding system provided in an embodiment of the present application.
- "first/second/third" involved in the embodiments of the present application are only used to distinguish similar objects and do not represent a specific ordering of the objects. It can be understood that "first/second/third" can be interchanged in a specific order or sequence where permitted, so that the embodiments of the present application described here can be implemented in an order other than that illustrated or described here.
- Point Cloud is a three-dimensional representation of the surface of an object.
- Point cloud (data) on the surface of an object can be collected through acquisition equipment such as photoelectric radar, lidar, laser scanner, and multi-view camera.
- a point cloud is a set of irregularly distributed discrete points in space that express the spatial structure and surface properties of a three-dimensional object or scene.
- FIG1A shows a three-dimensional point cloud image
- FIG1B shows a partial magnified view of the three-dimensional point cloud image. It can be seen that the point cloud surface is composed of densely distributed points.
- Two-dimensional images have information expressed at each pixel point, and the distribution is regular, so there is no need to record its position information additionally; however, the distribution of points in the point cloud in three-dimensional space is random and irregular, so it is necessary to record the position of each point in space in order to fully express a point cloud.
- each position in the acquisition process has corresponding attribute information, usually RGB color values, and the color value reflects the color of the object; for point clouds, in addition to color information, the attribute information corresponding to each point is also commonly the reflectance value, which reflects the surface material of the object. Therefore, point cloud data usually includes the location information of the point and the attribute information of the point. Among them, the location information of the point can also be called the geometric information of the point.
- the geometric information of the point can be the three-dimensional coordinate information of the point (x, y, z).
- the attribute information of the point can include color information and/or reflectance, etc.
- reflectance can be one-dimensional reflectance information (r); color information can be information on any color space, or color information can also be three-dimensional color information, such as RGB
- R represents red, G represents green, and B represents blue.
- the color information may be luminance and chrominance (YCbCr, YUV) information.
- Y represents brightness (Luma)
- Cb (U) represents blue color difference
- Cr (V) represents red color difference.
- the points in the point cloud may include the three-dimensional coordinate information of the points and the reflectivity value of the points.
- the points in the point cloud may include the three-dimensional coordinate information of the points and the three-dimensional color information of the points.
- a point cloud obtained by combining the principles of laser measurement and photogrammetry may include the three-dimensional coordinate information of the points, the reflectivity value of the points and the three-dimensional color information of the points.
- in Figure 2A and Figure 2B, a point cloud image and its corresponding data storage format are shown.
- Figure 2A provides six viewing angles of the point cloud image
- the data storage format in Figure 2B consists of a file header information part and a data part.
- the header information includes the data format, data representation type, the total number of point cloud points, and the content represented by the point cloud.
- the point cloud is in the ".ply" format, represented by ASCII code, with a total number of 207242 points, and each point has three-dimensional coordinate information (x, y, z) and three-dimensional color information (r, g, b).
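A minimal header in this format might look as follows; the element and property names follow the common PLY convention and are assumptions rather than a copy of Figure 2B.

```python
def ply_header(num_points):
    """Build an ASCII ".ply" header for points with xyz coordinates and
    r, g, b colour, matching the description above (207242 points)."""
    lines = [
        "ply",
        "format ascii 1.0",
        f"element vertex {num_points}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar red",
        "property uchar green",
        "property uchar blue",
        "end_header",
    ]
    return "\n".join(lines)
```

After `end_header`, the data part follows with one line per point, as described for the file in Figure 2B.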
- Point clouds can be divided into the following categories according to the way they are obtained:
- Static point cloud: the object is stationary, and the device that obtains the point cloud is also stationary;
- Dynamic point cloud: the object is moving, but the device that obtains the point cloud is stationary;
- Dynamically acquired point cloud: the device used to acquire the point cloud is in motion.
- point clouds can be divided into two categories according to their usage:
- Category 1: machine perception point cloud, which can be used in autonomous navigation systems, real-time inspection systems, geographic information systems, visual sorting robots, disaster relief robots, etc.
- Category 2: point cloud perceived by the human eye, which can be used in point cloud application scenarios such as digital cultural heritage, free viewpoint broadcasting, 3D immersive communication, and 3D immersive interaction.
- Point clouds can flexibly and conveniently express the spatial structure and surface properties of three-dimensional objects or scenes. Point clouds are obtained by directly sampling real objects, so they can provide a strong sense of reality while ensuring accuracy. Therefore, they are widely used, including virtual reality games, computer-aided design, geographic information systems, automatic navigation systems, digital cultural heritage, free viewpoint broadcasting, three-dimensional immersive remote presentation, and three-dimensional reconstruction of biological tissues and organs.
- Point clouds can be collected mainly through the following methods: computer generation, 3D laser scanning, 3D photogrammetry, etc.
- Computers can generate point clouds of virtual three-dimensional objects and scenes; 3D laser scanning can obtain point clouds of static real-world three-dimensional objects or scenes, and can obtain millions of point clouds per second; 3D photogrammetry can obtain point clouds of dynamic real-world three-dimensional objects or scenes, and can obtain tens of millions of point clouds per second.
- the number of points in each point cloud frame is 700,000, and each point has coordinate information xyz (float) and color information RGB (uchar).
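A quick back-of-the-envelope calculation with the figures just quoted shows why compression is needed; the 30 fps frame rate is an assumed value, not taken from the text.

```python
points_per_frame = 700_000                 # figure quoted above
bytes_per_point = 3 * 4 + 3 * 1            # xyz as 32-bit floats + RGB as uchar bytes
frame_bytes = points_per_frame * bytes_per_point   # 10,500,000 bytes per frame
fps = 30                                   # assumed frame rate
raw_rate_mbps = frame_bytes * fps * 8 / 1e6        # uncompressed rate in Mbit/s
```

At roughly 10.5 MB per frame, a 30 fps sequence needs about 2.5 Gbit/s uncompressed, far beyond typical network bandwidth.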
- point cloud compression has become a key issue in promoting the development of the point cloud industry.
- since the point cloud is a collection of massive points, storing the point cloud will not only consume a lot of memory but also be inconvenient for transmission; nor is there enough bandwidth at the network layer to support direct transmission of the point cloud without compression. Therefore, the point cloud needs to be compressed.
- the point cloud coding framework that can compress point clouds can be the geometry-based point cloud compression (G-PCC) codec framework or the video-based point cloud compression (V-PCC) codec framework provided by the Moving Picture Experts Group (MPEG), or the AVS-PCC codec framework provided by AVS.
- the G-PCC codec framework can be used to compress the first type of static point cloud and the third type of dynamically acquired point cloud, which can be based on the point cloud compression test platform (Test Model Compression 13, TMC13), and the V-PCC codec framework can be used to compress the second type of dynamic point cloud, which can be based on the point cloud compression test platform (Test Model Compression 2, TMC2). Therefore, the G-PCC codec framework is also called the point cloud codec TMC13, and the V-PCC codec framework is also called the point cloud codec TMC2.
- FIG3 is a schematic diagram of a network architecture of a point cloud encoding and decoding provided by the embodiment of the present application.
- the network architecture includes one or more electronic devices 13 to 1N and a communication network 01, wherein the electronic devices 13 to 1N can perform video interaction through the communication network 01.
- the electronic device can be various types of devices with point cloud encoding and decoding functions.
- the electronic device can include a mobile phone, a tablet computer, a personal computer, a personal digital assistant, a navigator, a digital phone, a video phone, a television, a sensor device, a server, etc., which is not limited by the embodiment of the present application.
- the decoder or encoder in the embodiment of the present application can be the above-mentioned electronic device.
- the electronic device in the embodiment of the present application has a point cloud encoding and decoding function, generally including a point cloud encoder (i.e., encoder) and a point cloud decoder (i.e., decoder).
- the point cloud data is first divided into multiple slices by slice division.
- the geometric information of the point cloud and the attribute information corresponding to each point are encoded separately.
- FIG4A shows a schematic diagram of the composition framework of a G-PCC encoder.
- the geometric information is transformed so that all point clouds are contained in a bounding box (Bounding Box), and then quantized.
- this quantization step mainly plays a role in scaling. Due to quantization rounding, the geometric information of some points becomes the same, so whether to remove duplicate points is determined based on parameters.
- the process of quantization and removal of duplicate points is also called voxelization.
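The voxelization step described above can be sketched as follows; this is a minimal illustration, and the actual G-PCC scaling and rounding rules are more involved.

```python
def voxelize(points, scale, remove_duplicates=True):
    """Quantize float coordinates to integer voxels (the 'scaling' step above);
    optionally drop points that land on the same voxel (duplicate removal)."""
    voxels = [tuple(int(round(c * scale)) for c in p) for p in points]
    if not remove_duplicates:
        return voxels
    seen, out = set(), []
    for v in voxels:               # keep first occurrence, preserve order
        if v not in seen:
            seen.add(v)
            out.append(v)
    return out
```

Two nearby points that round to the same voxel become one point, which is exactly why the duplicate-removal decision is parameter-controlled.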
- the Bounding Box is divided into octrees or a prediction tree is constructed.
- arithmetic coding is performed on the points in the leaf nodes of the division to generate a binary geometric bit stream; or, arithmetic coding is performed on the intersection points (Vertex) generated by the division (surface fitting is performed based on the intersection points) to generate a binary geometric bit stream.
- color conversion is required first to convert the color information (i.e., attribute information) from the RGB color space to the YUV color space. Then, the point cloud is recolored using the reconstructed geometric information so that the uncoded attribute information corresponds to the reconstructed geometric information. Attribute encoding is mainly performed on color information.
- FIG4B shows a schematic diagram of the composition framework of a G-PCC decoder.
- the geometric bit stream and the attribute bit stream in the binary bit stream are first decoded independently.
- the geometric information of the point cloud is obtained through arithmetic decoding-reconstruction of the octree/reconstruction of the prediction tree-reconstruction of the geometry-coordinate inverse conversion;
- the attribute information of the point cloud is obtained through arithmetic decoding-inverse quantization-LOD partitioning/RAHT-color inverse conversion, and the point cloud data to be encoded (i.e., the output point cloud) is restored based on the geometric information and attribute information.
- the current geometric coding of G-PCC can be divided into octree-based geometric coding (marked by a dotted box) and prediction tree-based geometric coding (marked by a dotted box).
- the octree-based geometry encoding includes: first, coordinate transformation of the geometric information so that all point clouds are contained in a Bounding Box. Then quantization is performed. This step of quantization mainly plays a role of scaling. Due to the quantization rounding, the geometric information of some points is the same. Whether to remove duplicate points is determined based on parameters. The process of quantization and removal of duplicate points is also called voxelization. Next, the Bounding Box is continuously divided into trees (such as octrees, quadtrees, binary trees, etc.) in the order of breadth-first traversal, and the placeholder code of each node is encoded.
- the bounding box of the point cloud is calculated. Assuming that dx > dy > dz, the bounding box corresponds to a cuboid.
- in the process of binary tree/quadtree/octree partitioning, two parameters are introduced: K and M.
- K indicates the maximum number of binary tree/quadtree partitions before octree partitioning;
- parameter M is used to indicate that the minimum block side length corresponding to binary tree/quadtree partitioning is 2^M.
- the reason why parameters K and M must meet the above conditions is that, in the process of geometric implicit partitioning in G-PCC, the priority of partitioning is binary tree, then quadtree, then octree.
- if the node block size does not meet the conditions for binary tree/quadtree partitioning, the node will be partitioned by octree until it is divided into the minimum leaf-node unit of 1×1×1.
- the geometric information encoding mode based on octree can effectively encode the geometric information of point cloud by utilizing the correlation between adjacent points in space.
- the encoding efficiency of point cloud geometric information can be further improved by using plane coding.
- Fig. 5A and Fig. 5B provide a kind of plane position schematic diagram.
- Fig. 5A shows a kind of low plane position schematic diagram in the Z-axis direction
- Fig. 5B shows a kind of high plane position schematic diagram in the Z-axis direction.
- (a), (a0), (a1), (a2), (a3) here all belong to the low plane position in the Z-axis direction.
- the four subnodes occupied in the current node are located at the high plane position of the current node in the Z-axis direction, so it can be considered that the current node belongs to a Z plane and is a high plane in the Z-axis direction.
- FIG6 provides a schematic diagram of the node coding sequence, that is, node coding is performed in the order of 0, 1, 2, 3, 4, 5, 6, 7 as shown in Figure 6.
- if the octree coding method is used for (a) in Figure 5A, the placeholder information of the current node is represented as: 11001100.
- if the plane coding method is used, first, an identifier needs to be encoded to indicate that the current node is a plane in the Z-axis direction.
- then the plane position of the current node needs to be represented; secondly, only the placeholder information of the low-plane nodes in the Z-axis direction needs to be encoded (that is, the placeholder information of the four sub-nodes 0, 2, 4, and 6). Therefore, based on the plane coding method, encoding the current node only requires 6 bits, saving 2 bits compared with the octree coding of the related technology. Based on this analysis, plane coding offers a clear coding-efficiency advantage over octree coding.
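The bit counting in this example can be checked directly. Note that the mapping of occupancy-bit positions to high-plane children below is an assumption chosen to match the Figure 5A example, not a normative bit order.

```python
# Example occupancy code from the text for node (a) in Figure 5A.
occupancy = "11001100"
# Assumed bit positions of the high-plane children (so that "11001100"
# comes out planar, as the figure describes).
high_plane_bit_positions = [2, 3, 6, 7]
is_z_plane = all(occupancy[i] == "0" for i in high_plane_bit_positions)

octree_bits = 8                                    # one occupancy bit per child
# Plane coding: 1 plane flag + 1 plane-position flag + 4 low-plane occupancy bits.
planar_bits = (1 + 1 + 4) if is_z_plane else None
```

With these assumptions, planar coding spends 6 bits versus 8 for plain octree coding, the 2-bit saving stated above.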
- for PlaneMode_i: 0 means that the current node is not a plane in the i-axis direction, and 1 means that the current node is a plane in the i-axis direction. If the current node is a plane in the i-axis direction, then for PlanePosition_i: 0 means that the plane position is a low plane, and 1 means that the plane position is a high plane.
- Prob(i)_new = (L × Prob(i) + δ(coded node)) / (L + 1)    (3)
- where L = 255; in addition, if the coded node is a plane, δ(coded node) is 1; otherwise, δ(coded node) is 0.
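The adaptive probability update of equation (3), with L = 255, can be written as a small function:

```python
def update_plane_probability(prob, coded_node_is_plane, L=255):
    """Equation (3): Prob_new = (L * Prob + delta) / (L + 1),
    where delta is 1 if the just-coded node is a plane, else 0."""
    delta = 1 if coded_node_is_plane else 0
    return (L * prob + delta) / (L + 1)
```

Each observation nudges the probability toward 1 or 0 by at most 1/(L+1), so L controls how slowly the plane-probability estimate adapts.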
- local_node_density_new = local_node_density + 4 × numSiblings    (4)
- FIG8 shows a schematic diagram of the sibling nodes of the current node. As shown in FIG8, the current node is a node filled with slashes, and the nodes filled with grids are sibling nodes, then the number of sibling nodes of the current node is 5 (including the current node itself).
- if (pointCount − numPointCountRecon) is less than nodeCount × 1.3, then planarEligibleKOctreeDepth is true; if (pointCount − numPointCountRecon) is not less than nodeCount × 1.3, then planarEligibleKOctreeDepth is false.
- if planarEligibleKOctreeDepth is true, all nodes in the current layer are plane-encoded; otherwise, all nodes in the current layer are not plane-encoded, and only octree encoding is used.
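The layer-level eligibility test just described can be written as a one-line function (the parameter names mirror the text; the 1.3 threshold is the value quoted above):

```python
def planar_eligible_k_octree_depth(point_count, num_point_count_recon, node_count):
    """Planar coding is enabled for the whole layer only if the average number
    of not-yet-reconstructed points per node is below 1.3."""
    return (point_count - num_point_count_recon) < node_count * 1.3
```

Intuitively, when nodes contain close to one point on average, the point cloud is sparse and plane-like at that depth, so planar coding is worth its flag overhead.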
- Figure 9 shows a schematic diagram of the intersection of a laser radar and a node.
- a node filled with a grid is simultaneously passed through by two laser beams (Laser), so the current node is not a plane in the vertical direction of the Z axis;
- a node filled with a slash is small enough that it cannot be passed through by two lasers at the same time, so the node filled with a slash may be a plane in the vertical direction of the Z axis.
- the plane identification information and the plane position information may be predictively coded.
- the predictive encoding of the plane position information may include:
- the plane position information is divided into three elements: predicted as a low plane, predicted as a high plane, and unpredictable;
- after determining the spatial distance between the current node and the node at the same division depth and the same coordinates as the current node, if the spatial distance is less than a preset distance threshold, the spatial distance can be determined to be "near"; or, if the spatial distance is greater than the preset distance threshold, the spatial distance can be determined to be "far".
- FIG10 shows a schematic diagram of neighborhood nodes at the same division depth and the same coordinates.
- the bold large cube represents the parent node (Parent node), the small cube filled with a grid inside it represents the current node (Current node), and the intersection position (Vertex position) of the current node is shown;
- the small cube filled with white represents the neighborhood nodes at the same division depth and the same coordinates, and the distance between the current node and the neighborhood node is the spatial distance, which can be judged as "near” or "far”; in addition, if the neighborhood node is a plane, then the plane position (Planar position) of the neighborhood node is also required.
- the current node is a small cube filled with a grid
- the neighboring node is searched for as the small cube filled with white at the same octree partition depth level and the same vertical coordinate; the distance between the two nodes is judged as "near" or "far", and the plane position of the reference node is referenced.
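The near/far judgement described above reduces to a simple threshold test; a minimal sketch with hypothetical names:

```python
def distance_context(spatial_distance, threshold):
    # Classify the distance to the co-located neighbour node as
    # "near" or "far" against the preset distance threshold.
    return "near" if spatial_distance < threshold else "far"
```

The resulting label, together with the neighbour's plane position when it is planar, forms part of the prediction context for the current node's plane position.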
- FIG. 11A shows a schematic diagram of a low plane position of a current node located at a parent node
- FIG. 11B shows another schematic diagram of a low plane position of a current node located at a parent node
- FIG. 11C shows another schematic diagram of a low plane position of a current node located at a parent node.
- in FIG. 11A, FIG. 11B and FIG. 11C, three examples of the current node being located at a low plane position of a parent node are shown. The specific description is as follows:
- FIG. 12A shows a schematic diagram of a high plane position of a current node located at a parent node
- FIG. 12B shows another schematic diagram of a high plane position of a current node located at a parent node
- FIG. 12C shows another schematic diagram of a high plane position of a current node located at a parent node.
- in FIG. 12A, FIG. 12B and FIG. 12C, three examples of the current node being located at a high plane position of a parent node are shown. The specific description is as follows:
- Figure 13 shows a schematic diagram of predictive encoding of the laser radar point cloud plane position information.
- the laser radar emission angle is θ_bottom
- it can be mapped to the bottom plane (Bottom virtual plane)
- the laser radar emission angle is θ_top
- it can be mapped to the top plane (Top virtual plane).
- the plane position of the current node is predicted using the laser radar acquisition parameters: the position at which the laser ray intersects the current node is quantized into multiple intervals, and this is finally used as the context information of the plane position of the current node.
- the specific calculation process is as follows: Assuming that the coordinates of the laser radar are (x Lidar , y Lidar , z Lidar ), and the geometric coordinates of the current node are (x, y, z), then first calculate the vertical tangent value tanθ of the current node relative to the laser radar, and the calculation formula is as follows:
- since each Laser has a certain offset angle relative to the LiDAR, it is also necessary to calculate the relative tangent value tanθ_corr,L of the current node relative to the Laser.
- the specific calculation is as follows:
- the relative tangent value tanθ_corr,L of the current node is used to predict the plane position of the current node. Specifically, assuming that the tangent value of the lower boundary of the current node is tan(θ_bottom), and the tangent value of the upper boundary is tan(θ_top), the plane position is quantized into 4 quantization intervals according to tanθ_corr,L, that is, the context information of the plane position is determined.
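The tangent formulas themselves are not reproduced in the text above; the following sketch assumes the common angular formulation (vertical tangent of the node relative to the lidar, with the laser's tangent corrected by its vertical offset) and quantizes into the 4 intervals mentioned. All names are illustrative:

```python
import math

def plane_position_context(node_pos, node_size, lidar_pos, tan_theta_laser, z_laser):
    # Offsets of the node's lower corner relative to the lidar.
    x = node_pos[0] - lidar_pos[0]
    y = node_pos[1] - lidar_pos[1]
    z = node_pos[2] - lidar_pos[2]
    r = math.hypot(x, y)                 # horizontal distance to the lidar
    tan_bottom = z / r                   # tangent of the node's lower boundary
    tan_top = (z + node_size) / r        # tangent of the node's upper boundary
    # Laser tangent corrected by its vertical offset (assumed form).
    tan_corr = tan_theta_laser + z_laser / r
    # Quantize the corrected tangent into 4 intervals spanning the node.
    t = (tan_corr - tan_bottom) / (tan_top - tan_bottom)
    return min(3, max(0, int(t * 4)))
```

The returned interval index (0..3) serves as the context for entropy coding the plane position.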
- the octree-based geometric information coding mode achieves an efficient compression rate only for points that are correlated in space.
- the use of the direct coding model (DCM) can greatly reduce the complexity.
- the use of DCM is not signalled by flag information, but is inferred from the parent node and neighbor information of the current node. There are three conditions for determining whether the current node is eligible for DCM encoding, as follows:
- the current node has no sibling child nodes, that is, the parent node of the current node has only one child node, and the parent node of the parent node of the current node has only two occupied child nodes, that is, the current node has at most one neighbor node.
- the parent node of the current node has only one child node, the current node.
- the six neighbor nodes that share a face with the current node are also empty nodes.
- FIG14 provides a schematic diagram of IDCM coding. If the current node does not have DCM coding eligibility, it will be divided by octree. If it has DCM coding eligibility, the number of points contained in the node is further examined: when the number of points is less than a threshold value (for example, 2), the node is DCM-encoded; otherwise octree division continues.
- when IDCM_flag is true, the current node is encoded using DCM; otherwise octree coding is still used.
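The IDCM decision flow of FIG14 can be sketched as follows; the names and the exact threshold comparison are illustrative:

```python
def idcm_decision(dcm_eligible, num_points, threshold=2):
    # A node that is not DCM-eligible continues octree division
    # and no flag is coded for it.
    if not dcm_eligible:
        return "octree", None
    # Otherwise IDCM_flag is coded: True -> DCM coding,
    # False -> continue octree division.
    idcm_flag = num_points < threshold
    return ("dcm" if idcm_flag else "octree"), idcm_flag
```

The decoder mirrors this: eligibility is inferred from reconstructed geometry, and IDCM_flag is parsed only for eligible nodes.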
- the DCM coding mode of the current node needs to be encoded.
- there are currently two DCM modes, namely: (a) only one point exists (or multiple points, but they are repeated points); (b) the node contains two points.
- the geometric information of each point needs to be encoded. Assuming that the side length of the node is 2^d, d bits are required to encode each component of the geometric coordinates of the node, and the bit information is directly encoded into the bit stream. It should be noted that when encoding the lidar point cloud, the three-dimensional coordinate information can be predictively encoded using the lidar acquisition parameters, thereby further improving the encoding efficiency of the geometric information.
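Raw d-bit coding of each coordinate component can be sketched as follows (a simplified illustration; in the real codec these bits are written into the entropy-coded stream):

```python
def encode_point_coords(point, d):
    # Emit the three coordinate components as raw d-bit values,
    # most significant bit first (node side length 2**d).
    bits = []
    for component in point:
        for b in range(d - 1, -1, -1):
            bits.append((component >> b) & 1)
    return bits
```

For a node of side length 2^3, each point costs 3 × 3 = 9 bits.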
- if the current node does not meet the requirements of a DCM node (that is, the number of points is greater than 2 and they are not duplicate points), it will exit directly.
- if the second point of the current node is a repeated point, it is then encoded whether the number of repeated points of the current node is greater than 1. When the number of repeated points is greater than 1, exponential Golomb coding is performed on the remaining number of repeated points.
- the coordinate information of the points contained in the current node is encoded.
- the following will introduce the lidar point cloud and the human eye point cloud in detail.
- the axis with the smaller node coordinate geometry position will be used as the priority coded axis directAxis, and then the geometry information of the priority coded axis directAxis will be encoded as follows. Assume that the bit depth of the coded geometry corresponding to the priority coded axis is nodeSizeLog2, and assume that the coordinates of the two points are pointPos[0] and pointPos[1].
- the specific encoding process is as follows:
- the geometric coordinate information of the current node can be predicted, so as to further improve the efficiency of the geometric information encoding of the point cloud.
- the geometric information nodePos of the current node is first used to obtain a directly encoded main axis direction, and then the geometric information of the encoded direction is used to predict the geometric information of another dimension.
- the axis direction of the direct encoding is directAxis
- the bit depth of the direct encoding is nodeSizeLog2
- Figure 15 provides a schematic diagram of coordinate transformation for a rotating laser radar to obtain a point cloud.
- the (x, y, z) coordinates of each node can be converted to (R, φ, i).
- the laser scanner can perform laser scanning at a preset angle, and a different θ(i) can be obtained for each value of i. For example, when i is equal to 2, θ(2) can be obtained, and the corresponding scanning angle is -13°; when i is equal to 10, θ(10) can be obtained, and the corresponding scanning angle is +13°; when i is equal to 19, θ(19) can be obtained, and the corresponding scanning angle is +15°.
- the LaserIdx corresponding to the current point, i.e., pointLaserIdx in Figure 15, will be calculated first, and the LaserIdx of the current node, i.e., nodeLaserIdx, will also be calculated; secondly, the LaserIdx of the node (nodeLaserIdx) will be used to predictively encode the LaserIdx of the point (pointLaserIdx), where the calculation method of the LaserIdx of a node or point is as follows.
- the LaserIdx of the current node is first used to predict the pointLaserIdx of the point. After the LaserIdx of the current point is encoded, the three-dimensional geometric information of the current point is predicted and encoded using the acquisition parameters of the laser radar.
- FIG16 shows a schematic diagram of predictive coding in the X-axis or Y-axis direction.
- a box filled with a grid represents a current node
- a box filled with a slash represents an already coded node.
- the LaserIdx corresponding to the current node is first used to obtain the corresponding predicted value of the horizontal azimuth angle; secondly, the node geometry information corresponding to the current point is used to obtain the horizontal azimuth angle corresponding to the node. Assuming the geometric coordinates of the node are nodePos, the horizontal azimuth angle is calculated from the node geometry information as follows:
- Figure 17A shows a schematic diagram of predicting the angle of the Y plane through the horizontal azimuth angle
- Figure 17B shows a schematic diagram of predicting the angle of the X plane through the horizontal azimuth angle.
- the predicted value of the horizontal azimuth angle corresponding to the current point is calculated as follows:
- FIG18 shows another schematic diagram of predictive coding in the X-axis or Y-axis direction.
- the portion filled with a grid represents the low plane
- the portion filled with dots represents the high plane.
- the horizontal azimuth angle of the low plane of the current node, the horizontal azimuth angle of the high plane of the current node, and the predicted horizontal azimuth angle corresponding to the current node are indicated.
- int context = (angleL < 0 && angleR < 0)
- the Z-axis direction of the current point will be predicted and encoded using the LaserIdx corresponding to the current point.
- that is, the radius information radius of the radar coordinate system is calculated using the x and y information of the current point, and then the tangent value and the vertical offset of the current point's laser are obtained using the LaserIdx of the current point, so the predicted value of the current point in the Z-axis direction, namely Z_pred, can be obtained.
- Z_pred is used to perform predictive coding on the geometric information of the current point in the Z-axis direction to obtain the prediction residual Z_res, and finally Z_res is encoded.
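The Z-axis prediction described above can be sketched as follows, assuming the predicted value is the radius times the laser's tangent plus the laser's vertical offset (the exact formula is not reproduced in the text):

```python
import math

def predict_z(x, y, tan_theta_laser, z_offset_laser):
    # Radius in the radar coordinate system from the x/y information,
    # then the laser's tangent and vertical offset give Z_pred.
    radius = math.hypot(x, y)
    return radius * tan_theta_laser + z_offset_laser

def z_residual(z, z_pred):
    # Prediction residual Z_res, which is what gets entropy coded.
    return z - z_pred
```

The decoder reverses the last step: it decodes Z_res and adds Z_pred to reconstruct the Z coordinate.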
- G-PCC currently introduces a plane coding mode. In the process of geometric division, it will determine whether the child nodes of the current node are in the same plane. If the child nodes of the current node meet the conditions of the same plane, the child nodes of the current node will be represented by the plane.
- the decoding end follows the order of breadth-first traversal. Before decoding the placeholder information of each node, it will first use the reconstructed geometric information to determine whether the current node is to be plane decoded or IDCM decoded. If the current node meets the conditions for plane decoding, the plane identification and plane position information of the current node will be decoded first, and then the placeholder information of the current node will be decoded based on the plane information; if the current node meets the conditions for IDCM decoding, it will first decode whether the current node is a true IDCM node.
- if it is a true IDCM node, the DCM decoding mode of the current node will continue to be parsed, then the number of points in the current DCM node can be obtained, and finally the geometric information of each point will be decoded.
- the placeholder information of the current node will be decoded.
- the prior information is first used to determine whether the node starts IDCM. That is, the starting conditions of IDCM are as follows:
- the current node has no sibling child nodes, that is, the parent node of the current node has only one child node, and the parent node of the parent node of the current node has only two occupied child nodes, that is, the current node has at most one neighbor node.
- the parent node of the current node has only one child node, the current node.
- the six neighbor nodes that share a face with the current node are also empty nodes.
- if a node meets the conditions for DCM coding, first decode whether the current node is a real DCM node, that is, IDCM_flag; when IDCM_flag is true, the current node adopts DCM coding, otherwise it still adopts octree coding.
- if the numPoints of the current node obtained by decoding is less than or equal to 1, continue decoding to see if the second point is a repeated point; if the second point is not a repeated point, it can be implicitly inferred that the second type satisfying the DCM mode applies, containing only one point; if the second point obtained by decoding is a repeated point, it can be inferred that the third type satisfying the DCM mode applies (multiple points, but all repeated points); then continue decoding to see whether the number of repeated points is greater than 1 (entropy decoding), and if it is greater than 1, continue decoding the number of remaining repeated points (using exponential Golomb decoding).
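The remaining repeated-point count is decoded with exponential Golomb codes; a minimal 0th-order exp-Golomb decoder is sketched below (the Golomb order actually used by the codec is an assumption here):

```python
def exp_golomb_decode(bits):
    # Decode one 0th-order exponential-Golomb codeword from a list of
    # bits and return (value, bits_consumed).
    leading_zeros = 0
    i = 0
    while bits[i] == 0:          # count the zero prefix
        leading_zeros += 1
        i += 1
    value = 1
    i += 1                       # consume the terminating 1 bit
    for _ in range(leading_zeros):
        value = (value << 1) | bits[i]   # read the suffix bits
        i += 1
    return value - 1, i
```

For example, the codeword 00100 decodes to the value 3 using 5 bits.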
- if the current node does not meet the requirements of a DCM node (that is, the number of points is greater than 2 and they are not duplicate points), it will exit directly.
- the coordinate information of the points contained in the current node is decoded.
- the following will introduce the lidar point cloud and the human eye point cloud in detail.
- the axis with the smaller node coordinate geometry position will be used as the priority decoding axis directAxis, and then the priority decoding axis directAxis geometry information will be decoded first in the following way.
- the geometry bit depth to be decoded corresponding to the priority decoding axis is nodeSizeLog2
- the coordinates of the two points are pointPos[0] and pointPos[1] respectively.
- the specific decoding process is as follows:
- the LaserIdx of the current node i.e., nodeLaserIdx
- the calculation method of the LaserIdx of the node or point is the same as that of the encoder.
- the LaserIdx of the current point and the predicted residual information of the LaserIdx of the node are decoded to obtain ResLaserIdx.
- the three-dimensional geometric information of the current point is predicted and decoded using the acquisition parameters of the laser radar.
- the specific algorithm is as follows:
- the node geometry information corresponding to the current point is used to obtain the horizontal azimuth angle corresponding to the node. Assuming the geometric coordinates of the node are nodePos, the horizontal azimuth angle is calculated from the node geometry information as follows:
- int context = (angleL < 0 && angleR < 0)
- the Z-axis direction of the current point will be predicted and decoded using the LaserIdx corresponding to the current point, that is, the radius information radius of the radar coordinate system is calculated by using the x and y information of the current point, and then the tangent value of the current point and the vertical offset are obtained using the laser LaserIdx of the current point, so that the predicted value of the Z-axis direction of the current point, namely Z_pred, can be obtained.
- the decoded Z_res and Z_pred are used to reconstruct and restore the geometric information of the current point in the Z-axis direction.
- geometric information coding based on triangle soup (trisoup)
- geometric division must also be performed first, but different from geometric information coding based on binary tree/quadtree/octree, this method does not need to divide the point cloud into unit cubes with a side length of 1×1×1 step by step, but stops dividing when the side length of the sub-block is W.
- the intersection points (vertices) between the surface and the twelve edges of each block are obtained.
- the vertex coordinates of each block are encoded in turn to generate a binary code stream.
- predictive geometry coding includes: first, sorting the input point cloud.
- the currently used sorting methods include unordered, Morton order, azimuth order, and radial distance order.
- the prediction tree structure is established by using two different methods, including: KD-Tree (high-latency slow mode) and low-latency fast mode (using laser radar calibration information).
- each node in the prediction tree is traversed, and the geometric position information of the node is predicted by selecting different prediction modes to obtain the prediction residual, and the geometric prediction residual is quantized using the quantization parameter.
- the prediction residual of the prediction tree node position information, the prediction tree structure, and the quantization parameters are encoded to generate a binary code stream.
- the decoding end reconstructs the prediction tree structure by continuously parsing the bit stream, and then obtains the geometric position prediction residual information and quantization parameters of each prediction node through parsing, and dequantizes the prediction residual to recover the reconstructed geometric position information of each node, and finally completes the geometric reconstruction of the decoding end.
- attribute encoding is mainly performed on color information.
- the color information is converted from the RGB color space to the YUV color space.
- the point cloud is recolored using the reconstructed geometric information so that the unencoded attribute information corresponds to the reconstructed geometric information.
- color information encoding there are two main transformation methods, one is the distance-based lifting transformation that relies on LOD division, and the other is to directly perform RAHT transformation. Both methods will convert color information from the spatial domain to the frequency domain, and obtain high-frequency coefficients and low-frequency coefficients through transformation.
- the coefficients are quantized and encoded to generate a binary code stream, as shown in Figures 4A and 4B.
- the Morton code can be used to search for the nearest neighbor.
- the Morton code corresponding to each point in the point cloud can be obtained from the geometric coordinates of the point.
- the specific method for calculating the Morton code is described as follows. For each component of the three-dimensional coordinate represented by a d-bit binary number, its three components can be expressed as:
- the Morton code M is obtained by interleaving the bits of x, y and z in sequence from the highest bit to the lowest bit; the calculation formula of M is as follows:
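A sketch of the bit interleaving (the x-before-y-before-z bit ordering is an assumption for illustration):

```python
def morton_code(x, y, z, d):
    # Interleave the d-bit components from the highest bit to the
    # lowest; each step contributes one bit per coordinate.
    m = 0
    for b in range(d - 1, -1, -1):
        m = (m << 3) | (((x >> b) & 1) << 2) | (((y >> b) & 1) << 1) | ((z >> b) & 1)
    return m
```

Points that are close in space tend to have close Morton codes, which is why the codec can use them for nearest-neighbour search.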
- Condition 1: the geometric position is limitedly lossy and the attributes are lossy;
- Condition 3: the geometric position is lossless, and the attributes are limitedly lossy;
- the general test sequences include four categories: Cat1A, Cat1B, Cat3-fused, and Cat3-frame.
- the Cat3-frame point cloud only contains reflectance attribute information
- the Cat1A and Cat1B point clouds only contain color attribute information
- the Cat3-fused point cloud contains both color and reflectance attribute information.
- the bounding box is divided into sub-cubes in sequence, and the non-empty sub-cubes (containing points in the point cloud) are divided again until the leaf node obtained by division is a 1×1×1 unit cube.
- the number of points contained in the leaf node needs to be encoded, and finally the encoding of the geometric octree is completed to generate a binary code stream.
- the decoding end obtains the placeholder code of each node by continuously parsing in the order of breadth-first traversal, and continuously divides the nodes in turn until a 1×1×1 unit cube is obtained.
- for geometric lossless decoding, it is necessary to parse the number of points contained in each leaf node and finally restore the geometrically reconstructed point cloud information.
- the prediction tree structure is established by using two different methods, including: based on KD-Tree (high-latency slow mode) and using lidar calibration information (low-latency fast mode).
- lidar calibration information each point can be divided into different Lasers, and the prediction tree structure is established according to different Lasers.
- each node in the prediction tree is traversed, and the geometric position information of the node is predicted by selecting different prediction modes to obtain the prediction residual, and the geometric prediction residual is quantized using the quantization parameter.
- the prediction residual of the prediction tree node position information, the prediction tree structure, and the quantization parameters are encoded to generate a binary code stream.
- the decoding end continuously parses the bitstream to reconstruct the prediction tree structure, and then obtains the geometric position prediction residual information and quantization parameters of each prediction node through parsing; the prediction residual is dequantized to restore the reconstructed geometric position information of each node, and finally the geometric reconstruction at the decoding end is completed.
- the point coordinates of the point cloud input are (x, y, z).
- the position information of the point cloud is converted into the radar coordinate system (radius, laserIdx).
- the geometric coordinates of the point are pointPos
- the starting coordinates of the laser ray are LidarOrigin
- the number of lasers is LaserNum
- the tangent value of each Laser is tanθ_i
- the offset position of each Laser in the vertical direction is Zi
- the calculation method of the node or point LaserIdx is as follows:
- the depth information radius is calculated as follows:
- LidarOrigin is generally 0.
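The (radius, laserIdx) conversion can be sketched as follows; the nearest-tangent laser selection rule is an assumption, since the formulas are not reproduced above, and all names are illustrative:

```python
import math

def radius_and_laser_idx(point_pos, lidar_origin, tan_thetas, z_offsets):
    # Offsets relative to the lidar origin (generally 0).
    x = point_pos[0] - lidar_origin[0]
    y = point_pos[1] - lidar_origin[1]
    z = point_pos[2] - lidar_origin[2]
    radius = math.hypot(x, y)        # depth information in the radar coordinate system
    tan_theta = z / radius           # vertical tangent of the point
    # Pick the laser whose offset-corrected tangent is closest
    # (assumed selection rule).
    laser_idx = min(
        range(len(tan_thetas)),
        key=lambda i: abs(tan_theta - (tan_thetas[i] + z_offsets[i] / radius)),
    )
    return radius, laser_idx
```

The same computation is run at the encoder and decoder so that nodeLaserIdx can predict pointLaserIdx without extra signalling.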
- FIG20 shows a schematic diagram of the structure of inter-frame coding and decoding of geometric information.
- the current point to be coded in the current frame is filled with a grid, and the previous coded node is represented by a; there are a first reference frame and a second reference frame, wherein the first reference frame can be the previous frame of the current frame, and the second reference frame can be a global motion compensation (Global Motion Compensation, GMC) reference frame.
- the first reference frame i.e., the previous reference frame
- the second reference frame i.e., the reference frame of the previous frame after global motion
- in the second reference frame, find the point g that has the same azimuth angle and laserID as node a, and use the points e and f encoded or decoded after point g in the second reference frame as inter-frame candidate points; when unavailable, point e and point f are replaced by the parent node of the current point to be encoded
- the optimal prediction point is selected from different prediction points (including several intra-frame candidate points and up to 4 inter-frame candidate points) through rate distortion optimization (RDO).
- the geometric prediction residual is quantized using the quantization parameter.
- the prediction mode, prediction residual, prediction tree structure, quantization parameter and other parameters of the prediction tree node position information are encoded to generate a binary code stream.
- the decoding end continuously parses the bitstream, reconstructs the prediction tree structure, and traverses the prediction tree to find the previous decoded node a before the current point to be decoded;
- the prediction mode is decoded; if the prediction mode is inter-frame prediction mode, the prediction point is selected from the following at most four candidate points using the decoded prediction mode:
- the first reference frame i.e., the previous reference frame
- the second reference frame i.e., the reference frame of the previous frame after global motion
- in the second reference frame, find the point g that has the same azimuth angle and laserID as node a, and use the points e and f encoded or decoded after point g in the second reference frame as inter-frame candidate points; when unavailable, point e and point f are replaced by the parent node of the current point to be decoded
- the geometric position prediction residual information and quantization parameters of different prediction points are obtained by analysis, and the prediction residual is dequantized, so that the reconstructed geometric position information of each node can be restored, and finally the geometric reconstruction at the decoding end is completed.
- the current prediction mode is the inter-frame prediction mode
- the point set of inter-frame prediction candidate points in the related technology is small, and the inter-frame prediction point best suited to the current point cannot always be selected, resulting in poor prediction and reduced coding efficiency of the geometric information.
- the embodiment of the present application provides a coding method: determining the previous coded node of the current node in the current frame; determining a first candidate node in the first reference frame whose geometric parameters meet the first condition with those of the previous encoded node, and at least one second candidate node in the first reference frame based on the first candidate node; determining a third candidate node in the second reference frame whose geometric parameters meet the first condition with those of the previous encoded node, and at least one fourth candidate node in the second reference frame based on the third candidate node; determining an inter-frame candidate node set based on at least one of the first candidate node, the at least one second candidate node, the third candidate node and the at least one fourth candidate node; and determining a geometric prediction value of the current node based on the inter-frame candidate node set.
- An embodiment of the present application also provides a decoding method, which decodes a code stream and determines an inter-frame prediction mode value of a current node; based on the inter-frame prediction mode value, determines a selected node from at least one of a first candidate node, at least one second candidate node, a third candidate node, and at least one fourth candidate node; wherein the first candidate node and at least one second candidate node are candidate nodes in a first reference frame, and the third candidate node and at least one fourth candidate node are candidate nodes in a second reference frame; based on the selected node, determines a geometric prediction value of the current node.
- the technical solution of the present application is mainly aimed at optimizing the candidate nodes used for inter-frame prediction.
- the candidate nodes in the inter-frame candidate node set are expanded here, so that the inter-frame prediction can better predict the current node, thereby improving the accuracy of the inter-frame prediction, and can improve the encoding efficiency of the geometric information, thereby improving the encoding and decoding performance of the point cloud.
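The final selection among the expanded candidate set can be sketched with a plain nearest-candidate rule standing in for the encoder's rate-distortion optimization (all names are illustrative):

```python
def select_prediction(current, candidates):
    # Squared geometric distance stands in for the encoder's RDO cost.
    def cost(cand):
        return sum((c - p) ** 2 for c, p in zip(current, cand))

    # The chosen candidate index becomes the inter-frame prediction
    # mode value written to the bitstream.
    mode = min(range(len(candidates)), key=lambda i: cost(candidates[i]))
    pred = candidates[mode]
    residual = tuple(c - p for c, p in zip(current, pred))
    return mode, residual
```

The decoder parses the mode value, picks the same candidate from its own reconstructed set, and adds the decoded residual to recover the geometry.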
- FIG. 21 a schematic flow chart of a decoding method provided in an embodiment of the present application is shown. As shown in FIG. 21 , the method may include:
- the decoder after receiving the code stream transmitted by the encoder, parses the code stream to obtain the node index value of the node to be decoded.
- the decoding method of the embodiment of the present application is applied to a decoder.
- the decoding method may refer to a point cloud inter-frame prediction method; or a method for decoding geometric information between point cloud frames, which mainly improves the inter-frame prediction algorithm in the relevant technology, and can perform local motion processing on the geometric parameters of the prediction node, thereby achieving a better inter-frame prediction effect.
- the node to be decoded is one of the points in the current frame.
- the points in the point cloud can be all points in the point cloud, or part of the points in the point cloud, and these points are relatively concentrated in space.
- the node to be decoded can specifically refer to the node currently to be decoded in the point cloud.
- the node to be decoded is also referred to as a point to be decoded, a current node, a current point, a current node to be decoded, a current point to be decoded, etc., and the present application does not limit this.
- the node to be decoded may be a point in the point cloud currently to be decoded, or the node to be decoded may include multiple points in the point cloud currently to be decoded, which is not limited in the present application.
- the node to be decoded includes multiple points in the point cloud to be decoded currently, the multiple points included in the node to be decoded are repeated points. In this case, the node to be decoded includes multiple points having the same geometric prediction information.
- the predicted node index value may be a unique identifier of the node to be decoded.
- the preset node index value is a pre-set node index value, that is, the preset node index value is an index value specified or agreed upon by both the encoder and the decoder.
- the preset node index value includes at least one node index value corresponding to at least one reference frame.
- the preset node index value specifies which reference frame the node index value belongs to.
- the decoder can determine the node to be decoded from multiple decoded nodes in the reference frame according to the predicted node index value of the node to be decoded, wherein the multiple decoded nodes in the reference frame are candidate nodes for the predicted node.
- the multiple decoded nodes in the reference frame are: decoded node c, decoded node d, decoded node e, decoded node f, decoded node m, decoded node n, decoded node o, decoded node p, etc.
- the index value of decoded node c is 0, the index value of decoded node d is 1, the index value of decoded node e is 2, the index value of decoded node f is 3, the index value of decoded node m is 4, the index value of decoded node n is 5, the index value of decoded node o is 6, and the index value of decoded node p is 7.
- the decoder determines that the predicted node index value corresponding to the node to be decoded is 3 by parsing the bitstream, the decoder can determine that the node to be decoded is the decoded node f.
- the predicted node index value may be a parameter written in a profile, or may be the value of a flag, which is not specifically limited here.
- flag can be set to a digital form. For example, when the value of flag is 0, it indicates that the node to be decoded is the decoding point corresponding to the predicted node index value 0 (such as node c). When the value of flag is 2, it indicates that the node to be decoded is the decoding point corresponding to the predicted node index value 2 (such as node e).
- the node index value can be represented in binary form.
- the node index value 2 can be represented as 10
- the node index value 3 can be represented as 11.
- the node index value can also be represented in other base forms, which is not limited in the present application.
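As a hypothetical illustration of the index mechanism above, the following sketch maps a parsed predicted node index value onto the decoded nodes c..p and produces the minimal binary form of an index value; the node names, their 0..7 order, and the lookup function are assumptions taken from the example, not part of the bitstream syntax.

```python
# Hypothetical sketch of the predicted-node index lookup described above.
# The decoded nodes c..p and their index order 0..7 follow the example;
# real bitstream parsing is omitted.

reference_frame = ["c", "d", "e", "f", "m", "n", "o", "p"]

def lookup_predicted_node(index_value):
    """Map a parsed predicted node index value to a decoded node."""
    return reference_frame[index_value]

def index_to_binary(index_value):
    """Represent a node index value in binary form (e.g. 2 -> '10')."""
    return format(index_value, "b")
```

With this sketch, index value 3 resolves to decoded node f, matching the example above.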
- S102 Determine the first decoded node before the node to be decoded in the current frame.
- the geometric information (geometric position information) of the first decoded node is determined.
- determining the first decoded node here also determines the geometric information of the first decoded node. That is to say, after the first decoded node before the node to be decoded in the current frame is determined, the geometric information of the first decoded node in the point cloud is determined.
- the geometric information here can be understood as the position information of the first decoded node in the point cloud.
- the geometric information of the first decoded node can be radar coordinate information or Cartesian coordinate information, etc., and this application does not impose any limitation on this.
- the geometric parameters here refer to the parameters in the radar coordinate system.
- the geometric parameters may include: the depth information radius, the horizontal azimuth φ, and the radar laser index value laserID.
- when the geometric information of the first decoded node is radar coordinate information, the geometric information of the first decoded node at least includes an angle parameter corresponding to the first decoded node, that is, a first angle parameter value.
- the angle parameter value is the angle information of the horizontal position on the horizontal plane, that is, the horizontal azimuth φ.
- the first angle parameter value of the first decoded node is the angle information of the horizontal position of the first decoded node on the horizontal plane.
- the current frame is a frame where the current node to be decoded is located, and the current frame includes at least one point.
- the current frame is obtained by the laser radar scanning one full circle along the XY plane (the horizontal plane), and the current frame may include multiple points.
- each point contained in the current frame corresponds to the same radar index value, that is, corresponds to the same laser radar.
- the radar index value is the unique identification information of the laser radar.
- the first decoded node is a node that has been decoded before the node to be decoded in the current frame.
- the first decoded node may also be any H decoded nodes before the node to be decoded in the current frame, where H is a positive integer greater than or equal to 1.
- for example, the first decoded node may be the two decoded nodes immediately before the node to be decoded in the current frame, or the four decoded nodes immediately before the node to be decoded in the current frame, etc.; the present application does not limit this.
- determining a first decoded node before a node to be decoded in a current frame may include:
- a previous decoded node of the node to be decoded is determined, and the previous decoded node is used as the first decoded node.
- the previous decoded node of the node to be decoded is used as the first decoded node.
- a predicted node is determined based on a predicted node index value corresponding to the node to be decoded and the first decoded node. It should be noted that after the predicted node is determined, the geometric information (geometric parameters) of the predicted node is determined.
- a decoded node corresponding to the predicted node index value is determined in at least one decoded node in the reference frame, and the decoded node corresponding to the predicted node index value is used as the predicted node.
- the predicted node is determined by the first angle parameter value of the first decoded node and the predicted node index value.
- the prediction node is also called a predicted node, a target node, a target point, etc., which is not limited in the embodiments of the present application.
- the predicted node may be a node in a reference frame other than the current frame, and the predicted node may also be a node in the current frame.
- whether the predicted node is a node in a reference frame other than the current frame or a node in the current frame depends on the prediction mode of the node to be decoded, wherein the prediction mode may include an inter-frame prediction mode and an intra-frame prediction mode.
- when the prediction mode of the node to be decoded is the inter-frame prediction mode, the prediction node is a node in a reference frame other than the current frame; when the prediction mode of the node to be decoded is the intra-frame prediction mode, the prediction node is a node in the current frame.
- local motion processing is performed on the first geometric parameter of the predicted node according to the third geometric parameter of the first decoded node, so as to obtain the second geometric parameter of the predicted node.
- the first geometric parameter is a geometric parameter of the predicted node obtained based on the third geometric parameter of the first decoded node.
- the second geometric parameter is a geometric parameter obtained by performing local motion processing on the first geometric parameter.
- since the second geometric parameter is obtained after the first geometric parameter is processed by the local motion, the second geometric parameter carries the prior information of the local motion compared with the first geometric parameter, thereby achieving a more refined prediction of the geometric information of the prediction node and improving the accuracy of inter-frame prediction.
- the second geometric parameter takes into account the prior information of local motion
- the accuracy of the geometric reconstruction of the node to be decoded can be improved.
- after local motion processing is performed on the first geometric parameter of the predicted node based on the first decoded node and the second geometric parameter of the predicted node is determined, the second geometric parameter can be determined as the geometric prediction value of the node to be decoded.
- local motion processing may not be performed on the first geometric parameter, and the first geometric parameter may be directly used as the geometric prediction value of the node to be decoded.
- the decoder first parses the code stream to determine the predicted node index value corresponding to the node to be decoded; then, the decoder determines the first decoded node before the node to be decoded in the current frame; then, the decoder determines the predicted node based on the predicted node index value and the first decoded node; finally, the decoder performs local motion processing on the first geometric parameter of the predicted node based on the first decoded node to determine the second geometric parameter of the predicted node.
- since the decoder performs local motion processing on the first geometric parameter of the predicted node to obtain the second geometric parameter, the second geometric parameter carries prior information related to the local motion, which can improve the accuracy of inter-frame prediction, thereby improving the decoding efficiency of the geometric information of the point cloud and the decoding performance of the point cloud.
- S104 may be implemented by S1041 and S1042 as follows:
- the prediction node index value represents that the prediction node is a preset node
- local motion information of the prediction node is determined based on the fourth geometric parameter of the second decoded node and the third geometric parameter of the first decoded node
- the second angle parameter value of the second decoded node is less than or equal to and closest to the first angle parameter value of the first decoded node
- the second decoded node is determined based on the first decoded node in the reference frame where the prediction node is located.
- the predicted node index value represents that the predicted node is a preset node
- the first geometric parameter of the predicted node is used as the second geometric parameter.
- the predicted node index value indicates that the predicted node is not a preset node
- only when the predicted node index value indicates that the predicted node is a preset node will the decoder perform local motion processing on the first geometric parameter of the predicted node to obtain the second geometric parameter.
- the preset nodes are nodes specified or negotiated by both the decoder and the encoder.
- the preset nodes are pre-specified nodes for which local motion processing is performed.
- the decoder may perform local motion processing on the first geometric parameter of the predicted node to obtain the second geometric parameter only when the predicted node index value matches the preset node index value; when the predicted node index value does not match the preset node index value, the decoder does not need to perform local motion processing on the first geometric parameter of the predicted node, and can use the first geometric parameter as the second geometric parameter.
- the predicted node index value matches the preset node index value, it is not only possible to know to which reference frame the predicted node corresponding to the predicted node index value belongs, but also possible to determine that local motion processing needs to be performed on the first geometric parameter of the predicted node.
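The preset-node gating just described can be sketched as follows; the preset index set {2, 3, 6, 7} is borrowed from the later node e/f/o/p example, and the vector addition stands in for the actual local motion processing (both are assumptions for illustration):

```python
PRESET_NODE_INDICES = {2, 3, 6, 7}  # assumed preset set (nodes e, f, o, p)

def second_geometric_parameter(index_value, first_param, local_motion):
    """Apply local motion processing only when the predicted node index value
    matches a preset node index value; otherwise pass the first geometric
    parameter through unchanged as the second geometric parameter."""
    if index_value in PRESET_NODE_INDICES:
        # stand-in for local motion processing: shift by the motion vector
        return tuple(p + m for p, m in zip(first_param, local_motion))
    return first_param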
- the second decoded node is a decoded node in the reference frame.
- the second angle parameter value of the second decoded node is less than or equal to the first angle parameter value of the first decoded node.
- the second decoded node is a decoded node in the reference frame that has the same radar index as the first decoded node and whose angle parameter value is less than or equal to the angle parameter value of the first decoded node.
- the second angle parameter value of the second decoded node is less than or equal to and closest to the first angle parameter value of the first decoded node.
- the reference frame includes a decoded node c, a decoded node b, and a decoded node h
- the angle parameter value of the decoded node c is greater than the angle parameter value of the first decoded node
- the angle parameter value of the decoded node b and the angle parameter value of the decoded node h are both less than or equal to the angle parameter value of the first decoded node.
- the angle parameter value of the decoded node b is greater than the angle parameter value of the decoded node h.
- among them, decoded node b is the decoded node whose angle parameter value is less than or equal to, and closest to, the angle parameter value of the first decoded node. In this way, decoded node b is determined to be the second decoded node.
- the second decoded node and the first decoded node may have the same radar index, or the second decoded node and the first decoded node may have different radar indexes; the second decoded node and the first decoded node may have the same depth information, or the second decoded node and the first decoded node may have different depth information; the second decoded node and the first decoded node may have the same angle parameter value, or the second decoded node and the first decoded node may have different angle parameter values, and the present application does not limit this.
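A minimal sketch of the second-decoded-node selection described above, assuming each decoded node is represented as a dict with a radar index `laser_id` and an angle parameter `phi` (both field names are hypothetical):

```python
def find_second_decoded_node(decoded_nodes, first_node):
    """Pick the reference-frame node with the same radar index whose angle
    parameter is less than or equal to, and closest to, the first decoded
    node's angle parameter; returns None if no candidate exists."""
    candidates = [n for n in decoded_nodes
                  if n["laser_id"] == first_node["laser_id"]
                  and n["phi"] <= first_node["phi"]]
    return max(candidates, key=lambda n: n["phi"]) if candidates else None
```

With the node c/b/h example above (c above the first node's angle, b closest below, h further below), this selects node b.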
- the second angle parameter value of the second decoded node is less than or equal to and closest to the first angle parameter value of the first decoded node; the second decoded node is determined based on the first decoded node in the reference frame where the prediction node is located.
- the geometric parameters here refer to the parameters in the radar coordinate system.
- the geometric parameters may include: the horizontal azimuth φ, the radar laser index value laserID, and the depth information radius of the predicted node.
- determining the local motion information of the predicted node based on the obtained fourth geometric parameter of the second decoded node and the third geometric parameter of the first decoded node may include:
- the third geometric parameter of the first decoded node and the fourth geometric parameter of the second decoded node are respectively subjected to a first coordinate transformation process to determine a fifth geometric parameter of the first decoded node and a sixth geometric parameter of the second decoded node;
- the first coordinate transformation process represents converting the radar coordinates into Cartesian coordinates;
- the first coordinate transformation processing represents converting the radar coordinates into Cartesian coordinates.
- first coordinate transformation process and the second coordinate transformation process are inverse processes of each other.
- the third geometric parameter of the first decoded node is a radar coordinate
- the third geometric parameter of the first decoded node is converted into a fifth geometric parameter of the first decoded node
- the fifth geometric parameter of the first decoded node is a Cartesian coordinate.
- for example, the radar coordinates (radius, φ, laserIdx) of the first decoded node are converted into the Cartesian coordinates (x, y, z) of the first decoded node.
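The first coordinate transformation can be sketched as below; the per-laser calibration arrays `tan_theta` and `z_offset` are assumed inputs standing in for the radar's prior information:

```python
import math

def radar_to_cartesian(radius, phi, laser_idx, tan_theta, z_offset):
    """Convert radar coordinates (radius, phi, laserIdx) to Cartesian
    coordinates (x, y, z) using assumed per-laser calibration data:
    tan_theta[i] is the elevation tangent and z_offset[i] the vertical
    offset of laser i."""
    x = radius * math.cos(phi)
    y = radius * math.sin(phi)
    z = radius * tan_theta[laser_idx] + z_offset[laser_idx]
    return (x, y, z)
```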
- the fifth geometric parameter of the first decoded node and the sixth geometric parameter of the second decoded node are subtracted to determine a difference value; alternatively, the fifth geometric parameter of the first decoded node and the sixth geometric parameter of the second decoded node are rotated or translated to determine a motion vector; and the difference value or motion vector is determined as local motion information.
- the fifth geometric parameter of the first decoded node is the Cartesian coordinate of the first decoded node
- the sixth geometric parameter of the second decoded node is the Cartesian coordinate of the second decoded node
- the Cartesian coordinates of the first decoded node and the Cartesian coordinates of the second decoded node are subtracted to obtain a difference value between the first decoded node and the second decoded node.
- the Cartesian coordinates of the first decoded node and the Cartesian coordinates of the second decoded node are rotated or translated to determine a motion vector between the first decoded node and the second decoded node.
- a difference value or a motion vector is determined as the local motion information.
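The subtraction branch above reduces to a component-wise difference of the two Cartesian positions; a sketch (the rotation/translation branch that yields a motion vector is omitted):

```python
def local_motion_difference(first_cart, second_cart):
    """Local motion information as the component-wise difference between the
    Cartesian coordinates of the first and second decoded nodes."""
    return tuple(a - b for a, b in zip(first_cart, second_cart))
```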
- S1042 Based on the local motion information, perform local motion processing on the first geometric parameter of the prediction node to determine the second geometric parameter of the prediction node.
- the reference frame where the prediction node is located includes: a first reference frame and a second reference frame; wherein the first reference frame is at least one frame obtained by performing global motion on the second reference frame; and the second reference frame is a decoded frame of the previous K frames of the current frame, wherein K is an integer greater than 0.
- for example, the second reference frame may be the previous frame of the current frame, and the first reference frame may be obtained by performing global motion on that previous frame.
- the preset node is at least one decoded node in the first reference frame except the second decoded node.
- determining the second geometric parameter of the prediction node based on the local motion information may include:
- the ninth geometric parameter of the prediction node is determined as the second geometric parameter of the prediction node.
- the second coordinate transformation process represents the conversion of Cartesian coordinates into radar coordinates.
- the seventh geometric parameter of the prediction node and the eighth geometric parameter of the prediction node are Cartesian coordinates
- the ninth geometric parameter of the prediction node is radar coordinates.
- the distance between the node and the origin is calculated based on the geometric parameters of the node and the geometric parameters of the origin to determine the depth information (radius); the horizontal azimuth φ is determined based on the first coordinate (x) and the second coordinate (y) in the geometric parameters of the node.
- the radar laser index value is determined based on the geometric parameters of the node, the tangent values of all radars and the offset position of each radar in the vertical direction.
- the eighth geometric parameter of the prediction node is subjected to a second coordinate transformation process to determine the ninth geometric parameter of the prediction node.
- the eighth geometric parameter of the prediction node may be (x, y, z), and the ninth geometric parameter of the prediction node (radius, φ, laserIdx) may be determined by using the prior radar information of the current point cloud.
- the calculation method of laserIdx of a node or point is as follows: assume that the geometric coordinates of the point are pointPos, the starting coordinates of the laser rays are LidarOrigin, the number of lasers is LaserNum, the tangent value of the elevation angle of the i-th laser is tan θ_i, and the vertical offset position of each laser is Z_i.
- the radius is: radius = sqrt((pointPos.x − LidarOrigin.x)^2 + (pointPos.y − LidarOrigin.y)^2), and laserIdx is the index i (0 ≤ i < LaserNum) for which |pointPos.z − LidarOrigin.z − Z_i − radius·tan θ_i| is smallest.
- LidarOrigin is generally 0.
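Putting the radius, azimuth, and laserIdx rules together, a hedged sketch of the second coordinate transformation (the nearest-laser selection is an interpretation of the tan θ_i / Z_i description above, not a quoted formula from this application):

```python
import math

def cartesian_to_radar(point, tan_theta, z_offset, lidar_origin=(0.0, 0.0, 0.0)):
    """Convert Cartesian (x, y, z) to radar (radius, phi, laserIdx).
    laserIdx is chosen as the laser whose calibrated elevation best matches
    the point; tan_theta and z_offset are assumed calibration arrays."""
    dx = point[0] - lidar_origin[0]
    dy = point[1] - lidar_origin[1]
    dz = point[2] - lidar_origin[2]
    radius = math.hypot(dx, dy)          # distance to the origin in the XY plane
    phi = math.atan2(dy, dx)             # horizontal azimuth from x and y
    laser_idx = min(range(len(tan_theta)),
                    key=lambda i: abs(dz - z_offset[i] - radius * tan_theta[i]))
    return radius, phi, laser_idx
```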
- when the predicted node index value represents that the predicted node is not a preset node, the first geometric parameter of the predicted node is used as the second geometric parameter.
- the prediction node index value indicates that the prediction node is not a preset node
- directly using the first geometric parameter as the second geometric parameter of the prediction node is a solution parallel to S104 and S105.
- the specific execution order needs to be determined according to the prediction node, and the embodiment of the present application does not limit it.
- the first geometric parameter of the predicted node is used as the second geometric parameter, and the geometric prediction value of the node to be decoded is determined according to the second geometric parameter.
- the decoder and the encoder agree to perform local motion processing on node e (index value 2), node f (index value 3), node o (index value 6) and node p (index value 7) in the first reference frame, that is, the preset nodes are node e, node f, node o and node p.
- the predicted node index value of the predicted node is 0.
- the predicted node index value indicates that the predicted node is not a node in the preset nodes.
- the decoder does not perform local motion processing on the first geometric parameter of the predicted node (node c), and directly uses the first geometric parameter of the predicted node (node c) as the second geometric parameter of the predicted node.
- since the second geometric parameter is obtained by updating the first geometric parameter according to the local motion information of the prediction node, the second geometric parameter carries the prior information of the local motion compared to the first geometric parameter, thereby achieving a more refined prediction of the geometric information of the prediction node and improving the accuracy of the inter-frame prediction.
- since the second geometric parameter carries the prior information of the local motion, when the reconstructed geometric information of the node to be decoded is determined using the second geometric parameter of the prediction node, the accuracy of the geometric reconstruction of the node to be decoded can be improved, thereby improving the decoding efficiency and accuracy of the geometric information of the point cloud.
- S103 includes S1031 to S1032:
- a second decoded node is determined in the reference frame where the predicted node represented by the predicted node index value is located; the second angle parameter value of the second decoded node is less than or equal to and closest to the first angle parameter value of the first decoded node, and the second decoded node has the same radar index as the first decoded node.
- a second decoded node is determined in a reference frame where the prediction node is located according to a first angle parameter value of the first decoded node.
- the second decoded node is determined according to the first angle parameter value.
- the reference frame where the second decoded node is located has at least the same radar index as the current frame. Therefore, the second decoded node and the first decoded node have the same radar index, that is, they correspond to the same laser radar.
- in the first reference frame, the second decoded node is node g; and in the second reference frame, the second decoded node is node b.
- node g is a decoded node in the first reference frame that has the same radar index as the first decoded node (a) and whose first angle parameter value is less than or equal to the first angle parameter value of the first decoded node.
- Node b is a decoded node in the second reference frame that has the same radar index as the first decoded node (a) and whose first angle parameter value is less than or equal to the first angle parameter value of the first decoded node.
- the predicted node index value includes at least one node index value corresponding to at least one reference frame.
- the predicted node index value indicates which reference frame the node index value belongs to. That is, the reference frame where the predicted node is located can be known through the predicted node index value.
- the determination order corresponding to the prediction node can be determined according to the prediction node index value of the prediction node.
- the prediction node index value of the prediction node (node e) is 2
- the determination order of the prediction node is ge, that is, first determine node g, and then determine node e based on node g.
- the prediction node index value of the prediction node (node f) is 3
- the determination order of the prediction node is gef, that is, first determine node g, then determine node e based on node g, and finally determine node f based on node e.
- the prediction node index value is 0 (corresponding to node c)
- the reference frame where the prediction node represented by the prediction node index value 0 is located is the second reference frame
- the prediction node index value is 2 (corresponding to node e)
- the reference frame where the prediction node represented by the prediction node index value 2 is located is the first reference frame.
- a predicted node is determined in a reference frame where the second decoded node is located according to the second angle parameter value of the second decoded node.
- the predicted node is determined according to the second angle parameter value of the second decoded node.
- the predicted node is determined in the first reference frame according to the angle parameter value of node g.
- the predicted node is determined in the second reference frame according to the angle parameter value of node b.
- the predicted node and the second decoded node belong to the same reference frame, the predicted node and the second decoded node have the same radar index.
- S1032 includes S10321 to S10322:
- the second decoded node is used as the predicted node
- the second angle parameter value of the second decoded node is used as the third angle parameter value of the predicted node.
- node g is used as the prediction node.
- the index value of the prediction node is 8 (corresponding to node b)
- the second decoded node in the second reference frame is node b
- the next decoded node in the reference frame where the predicted node is located can be determined according to the predicted node index value and the second decoded node; and the next decoded node of that next decoded node can be determined in turn, so as to obtain at least one next decoded node.
- the at least one next decoded node includes the predicted node, and the radar indexes of the at least one next decoded node are the same.
- the predicted node index value indicates that the predicted node is not the second decoded node
- at least one next decoded node is determined based on the second decoded node and the predicted node index value.
- a preset number of next decoded nodes may be determined in the reference frame where the prediction node is located; in this case, the prediction node is one of the next decoded nodes. Alternatively, based on the second decoded node, the next decoded node is determined in the reference frame where the prediction node is located until the prediction node corresponding to the prediction node index value is determined; in this case, the last decoded node determined is the prediction node. The present application does not limit this.
- the predicted node index value is 2.
- the next decoded node can be determined based on the second decoded node, and two decoded nodes are obtained.
- the predicted node is the next decoded node.
- the next decoded node can also be determined based on the second decoded node; the next decoded node of the next decoded node is determined based on the next decoded node, and three decoded nodes are obtained.
- the predicted node is the next decoded node.
- the preset number is 4 and the predicted node index value is 6 (corresponding to node o)
- based on the predicted node index value (6), the third decoded node (node o), which corresponds to the predicted node index value, is determined among the obtained 4 next decoded nodes, and the third decoded node (node o) is used as the predicted node.
- the next decoded node is determined based on the second decoded node until the predicted node is determined.
- the predicted node and the second decoded node correspond to the same reference frame.
- the i-th decoded node is determined in the reference frame where the predicted node is located based on the second decoded node, where i is a positive integer greater than or equal to 1 and less than M, and the i-th angle parameter value of the i-th decoded node is greater than the second angle parameter value; when the i-th decoded node is not the predicted node, continue to determine the (i+1)-th decoded node based on the i-th decoded node until the (i+1)-th decoded node is the predicted node, where the (i+1)-th angle parameter value of the (i+1)-th decoded node is greater than the i-th angle parameter value of the i-th decoded node.
- the (i+1)-th decoded node is the predicted node, and the (i+1)-th decoded node is the last decoded node.
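The walk from the second decoded node to the predicted node can be sketched as a simple chain traversal; the successor map and index assignments below (g → e → f → o → p, with g given a placeholder index of 9) are hypothetical and only mirror the g/e/f example:

```python
# Hypothetical successor chain and index values mirroring the g -> e -> f
# example; node g's index value (9) is a placeholder assumption.
NEXT_NODE = {"g": "e", "e": "f", "f": "o", "o": "p"}
INDEX_OF = {"g": 9, "e": 2, "f": 3, "o": 6, "p": 7}

def walk_to_predicted_node(second_node, index_value):
    """From the second decoded node, keep taking the next decoded node
    (increasing angle, same radar index) until reaching the node whose
    index value equals the predicted node index value."""
    node = second_node
    while INDEX_OF[node] != index_value:
        node = NEXT_NODE[node]
    return node
```

For index value 3 this reproduces the determination order g, e, f described above.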
- the determination order corresponding to the prediction node can be determined according to the prediction node index value of the prediction node.
- the prediction node index value of the prediction node (node e) is 2
- the determination order of the prediction node is ge, that is, first determine node g, and then determine node e based on node g.
- the prediction node index value of the prediction node (node f) is 3
- the determination order of the prediction node is gef, that is, first determine node g, then determine node e based on node g, and finally determine node f based on node e.
- Example 1 Based on Figure 23, the second decoded node in the first reference frame (the reference frame after the second reference frame undergoes global motion) is node g, and the first reference frame includes, in addition to the second decoded node: the first decoded node (node e), the second decoded node (node f), the third decoded node (node o) and the fourth decoded node (node p).
- the second decoded node in the second reference frame (the first decoded frame before the current frame) is node b
- the second reference frame includes, in addition to the second decoded node: the first decoded node (node c), the second decoded node (node d), the third decoded node (node m) and the fourth decoded node (node n).
- the predicted node can be determined by the following steps:
- Example 4 Based on Figure 23 and Example 1, if the predicted node index value represents that the predicted node is the second decoded node (node g), then the node g in the first reference frame where the predicted node is located is used as the predicted node, where the second angle parameter value of the second decoded node (node g) is less than or equal to the first angle parameter value of the first decoded node (node a).
- Example 5 Based on Figure 23 and Example 3, if the predicted node index value represents that the predicted node is the second decoded node (node b), then node b in the second reference frame where the predicted node is located is used as the predicted node, wherein the second angle parameter value of the second decoded node (node b) is less than or equal to the first angle parameter value of the first decoded node (node a).
- S103 includes S301 to S303:
- based on the first decoded node, determine a second decoded node in the reference frame where the predicted node represented by the predicted node index value is located; the second angle parameter value of the second decoded node is less than or equal to and closest to the first angle parameter value of the first decoded node, and the second decoded node has the same radar index as the first decoded node.
- the decoded node whose angle parameter value is less than or equal to, and closest to, the first angle parameter value of the first decoded node is determined as the second decoded node.
- when the predicted node represented by the predicted node index value is the second decoded node, the second decoded node is used as the predicted node; that is, the second angle parameter value of the second decoded node is used as the third angle parameter value of the predicted node.
- the prediction node represented by the prediction node index value is not the second decoded node
- the prediction node is determined according to the order of the prediction tree.
- a prediction tree method is used to determine the prediction node.
- the predicted node represented by the predicted node index value is not the second decoded node, and the second angle parameter value of the second decoded node is equal to the first angle parameter value of the first decoded node, and the second decoded node and the first decoded node have the same radar index, the predicted node is determined according to the order of the prediction tree.
- determining the previous decoded node of the current node may include: determining a prediction tree corresponding to the current frame; and determining the previous decoded node of the current node based on the decoding order of the prediction tree.
- the prediction tree structure can be constructed in two different ways, which can include: KD-Tree (high-latency slow mode) and low-latency fast mode (using laser radar calibration information).
- KD-Tree high-latency slow mode
- low-latency fast mode using laser radar calibration information.
- the decoding order of the prediction tree can be one of the following: unordered, Morton order, azimuth order, radial distance order, etc., which is not specifically limited here.
- the prediction tree structure is reconstructed by parsing the bitstream, and then the prediction tree is traversed to determine the previous point of the node to be decoded in the decoding order of the prediction tree, which is the previous decoded node of the node to be decoded (i.e., the first decoded node).
- the decoded nodes after the second decoded node (node b) in the second reference frame include: node c, node d, node m, and node n.
- the predicted node is characterized as node d according to the predicted node index value of the predicted node
- the previous decoded node a of the node to be decoded is first determined; then the second decoded node b is determined in the second reference frame based on the angle parameter value of the decoded node a; according to the decoding order of the prediction tree, nodes c and node d after the second decoded node b are determined in the second reference frame in sequence; at this time, the last node d determined is the predicted node.
- the geometric parameter (e.g., geometric position information) of the predicted node is determined.
- the geometric parameter may be used as the geometric prediction value of the node to be decoded.
- the geometric parameters here refer to the parameters in the radar coordinate system.
- the geometric parameters may include: the horizontal azimuth angle φ, the radar laser index value laserID, and the depth information radius of the predicted node.
- the method also includes: parsing the bit stream to determine the geometric residual information and quantization parameters of the point to be decoded; based on the quantization parameters, performing inverse quantization processing on the geometric residual information to determine the geometric prediction residual; based on the geometric prediction residual and the geometric prediction value, determining the reconstructed geometric parameters of the point to be decoded.
- determining the reconstructed geometric parameters of the node to be decoded based on the geometric prediction residual and the geometric prediction value may include: performing an addition operation based on the geometric prediction residual and the geometric prediction value to determine the reconstructed geometric parameters of the node to be decoded.
- the geometric residual information of the node to be decoded is obtained by parsing the bit stream, and the quantization parameter is obtained by decoding the bit stream; then, the geometric residual information is inversely quantized according to the quantization parameter to obtain the geometric prediction residual; then, the geometric prediction residual and the geometric prediction value are summed to obtain the reconstructed geometric parameters of the node to be decoded, such as restoring the reconstructed geometric position information of the node to be decoded, and finally completing the geometric reconstruction at the decoding end.
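The inverse quantization and reconstruction step can be sketched as follows; uniform scalar quantization with a power-of-two step size is assumed here purely for illustration and may differ from the codec's actual quantization scheme:

```python
def dequantize(residual, qp):
    # Hypothetical inverse quantization: a uniform scalar step derived
    # from the quantization parameter (power-of-two step assumed).
    qstep = 1 << qp
    return [r * qstep for r in residual]

def reconstruct(residual, qp, prediction):
    # reconstructed geometric parameter = inverse-quantized residual + prediction
    return [p + d for p, d in zip(prediction, dequantize(residual, qp))]

# Toy example: qp = 0 gives a step of 1, so the residual is added directly.
print(reconstruct([1, -2, 0], 0, [10, 20, 30]))  # [11, 18, 30]
```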
- determining geometric residual information of a point to be decoded includes:
- the context model is used to decode the bitstream to obtain the geometric residual information of the node to be decoded.
- a context model corresponding to a node to be decoded is determined according to a predicted node index value, and the decoder uses the context model corresponding to the node to be decoded to decode the geometric residual information of the node to be decoded, thereby obtaining the geometric residual information.
- different decoded nodes in the reference frame may correspond to different context models.
- decoded nodes in different reference frames may correspond to different context models.
- at least one decoded node in different reference frames may correspond to different context models, which is not limited in the present application.
- one predicted node index value corresponds to a parameter for selecting the context model;
- another predicted node index value corresponds to another parameter for selecting the context model, so as to perform the subsequent entropy decoding of the geometric residual information.
- the decoding method is mainly for decoding optimization of the inter-frame prediction mode, and a flag bit can be used to determine whether the current node uses the inter-frame prediction mode. Therefore, in some embodiments of the present application, the method further includes:
- the method further includes:
- the first identification information is the first value, determining that the first identification information indicates that the node to be decoded does not use the inter-frame prediction mode;
- the first identification information is the second value, it is determined that the first identification information indicates that the node to be decoded uses the inter-frame prediction mode.
- the first value is different from the second value, and the first value and the second value can be in parameter form or in digital form.
- the first identification information can be a parameter written in the profile or a flag value, which is not specifically limited here.
- the first value can be set to 1 and the second value can be set to 0; or, the first value can be set to 0 and the second value can be set to 1; or, the first value can be set to true and the second value can be set to false; or, the first value can be set to false and the second value can be set to true; but this is not specifically limited here.
- if the value of the first identification information is 0 (false), it can be determined that the inter-frame prediction mode is not used for the node to be decoded, that is, there is no need to execute the decoding method described in the embodiment of the present application; if the value of the first identification information is 1 (true), then it can be determined that the inter-frame prediction mode is used for the node to be decoded, that is, the decoding method described in the embodiment of the present application needs to be executed.
- a flag bit can be set here to determine whether to enable the decoding method of the embodiment of the present application. Therefore, in some embodiments of the present application, the method further includes: parsing the code stream to determine the second identification information; when the second identification information indicates that the node to be decoded enables the local motion processing mode, performing the step of performing local motion processing on the first geometric parameter of the predicted node based on the first decoded node to determine the second geometric parameter of the predicted node.
- the method further includes:
- the second identification information is the first value, determining that the second identification information indicates that the node to be decoded does not enable the local motion processing mode
- the second identification information is the second value, it is determined that the second identification information indicates that the node to be decoded enables a local motion processing mode.
- the first value is different from the second value, and the first value and the second value can be in parameter form or in digital form.
- the second identification information can be a parameter written in the profile or a flag value, which is not specifically limited here.
- a 1-bit flag (i.e., the second identification information) can be used to indicate whether the decoding method of the embodiment of the present application is enabled or not.
- This flag can be placed in the header information of a high-level syntax element, such as the geometry header, and can be conditionally enabled under certain conditions. If this flag does not appear in the bitstream, its default value is a fixed value, and at the decoding end the corresponding decoding may not be performed.
- This embodiment provides a decoding method, which parses a bitstream to determine a predicted node index value corresponding to a node to be decoded; determines a first decoded node before the node to be decoded in a current frame; determines a predicted node based on the predicted node index value and the first decoded node; and performs local motion processing on a first geometric parameter of the predicted node based on the first decoded node to determine a second geometric parameter of the predicted node.
- Since the decoder performs local motion processing on the first geometric parameter of the predicted node to obtain a second geometric parameter, the second geometric parameter has prior information related to local motion, which can improve the accuracy of inter-frame prediction, thereby improving the decoding efficiency of the geometric information of the point cloud and improving the decoding performance of the point cloud.
- FIG22 shows a schematic flowchart of an encoding method provided in an embodiment of the present application. As shown in FIG22, the method may include:
- the first encoded node corresponds to the first decoded node of the decoding end.
- the first encoded node is node a; correspondingly, at the decoding end, the first decoded node is node a.
- first angle parameter of the first encoded node at the encoding end corresponds to the first angle parameter of the first decoded node at the decoding end.
- the node to be encoded is one of the points in the current frame.
- the encoding method of the embodiment of the present application is applied to an encoder.
- the encoding method may specifically refer to a point cloud inter-frame prediction method; more specifically, it is a method for encoding geometric information between point cloud frames, which mainly improves the inter-frame prediction algorithm in the relevant technology, and can perform local motion processing on the angle parameter value of the prediction node, so as to achieve a better inter-frame prediction effect.
- the current frame is a frame where the current node to be encoded is located, and the current frame includes at least one point.
- the current frame is obtained by the laser radar scanning one full circle along the XY plane (horizontal plane), and the current frame may include multiple points.
- the node to be encoded is also referred to as a point to be encoded, a current node, a current point, a current node to be encoded, a current point to be encoded, etc., and the present application does not limit this.
- the node to be encoded may be a point in the current point cloud to be encoded, or the node to be encoded may include multiple points in the current point cloud to be encoded, which is not limited in the present application.
- the node to be encoded includes multiple points in the current point cloud to be encoded
- the multiple points included in the node to be encoded are repeated points.
- the node to be encoded includes multiple points having the same geometric prediction information.
- the geometric information (geometric position information) of the first encoded node is determined.
- determining the first encoded node here includes determining the geometric information of the first encoded node. That is to say, after determining the first encoded node before the node to be encoded in the current frame, the geometric information of the first encoded node in the point cloud is determined.
- the geometric information here can be understood as the position information of the first encoded node in the point cloud.
- the geometric information of the first encoded node can be radar coordinate information or Cartesian coordinate information, etc., and this application does not make any limitation on this.
- the geometric parameters here refer to parameters in the radar coordinate system.
- the geometric parameters may include: the depth information radius, the horizontal azimuth angle φ, and the radar laser index value laserID.
- when the geometric information of the first encoded node is radar coordinate information, the geometric information of the first encoded node at least includes an angle parameter corresponding to the first encoded node, that is, a first angle parameter value.
- the angle parameter value is the angle information of the horizontal position on the horizontal plane, that is, the horizontal azimuth angle φ.
- the first angle parameter value of the first encoded node is the angle information of the horizontal position of the first encoded node on the horizontal plane.
- the first encoded node is the previous encoded node of the node to be encoded in the current frame.
- the first encoded node may also be any encoded node before the node to be encoded in the current frame.
- the first encoded node may be the two encoded nodes before the node to be encoded in the current frame, or the first encoded node may also be the four encoded nodes before the node to be encoded in the current frame, etc., and the present application does not limit this.
- S202 Determine a first candidate node that has at least one geometric parameter identical to that of the first encoded node in a reference frame, and determine at least one second candidate node in the reference frame based on the first candidate node.
- the first candidate node corresponds to the second decoded node at the decoding end.
- the first candidate node in the first reference frame is node g
- the first candidate node in the second reference frame is node b
- the second decoded node in the first reference frame is node g
- the second decoded node in the second reference frame is node b.
- the second candidate node corresponds to a decoded node other than the second decoded node in the reference frame where the second decoded node of the decoding end is located.
- the second candidate nodes in the first reference frame are node e, node f, node o and node p
- the second candidate nodes in the second reference frame are node c, node d, node m and node n
- the decoded nodes in the first reference frame except the second decoded node (node g) are node e, node f, node o and node p
- the decoded nodes in the second reference frame except the second decoded node (node b) are node c, node d, node m and node n.
- the geometric parameters here refer to the parameters in the radar coordinate system.
- the geometric parameters may include: the depth information radius, the horizontal azimuth angle φ, and the radar laser index value laserID.
- the second angle parameter value of the first candidate node is less than or equal to and closest to the first angle parameter value of the first encoded node.
- the reference frame includes: a first reference frame and a second reference frame; the first reference frame is at least one frame obtained by global motion of the second reference frame; the second reference frame is a decoded frame of the previous K frames of the current frame, where K is an integer greater than 0.
- the preset node is at least one candidate node in the first reference frame except the first candidate node.
- At least one second candidate node, the first candidate node, and the first encoded node have the same radar index.
- the second angle parameter value of the first candidate node is less than or equal to and closest to the first angle parameter value of the first encoded node.
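The first condition stated above (same laser index, azimuth less than or equal to and closest to that of the first encoded node) can be sketched as a simple search; the node record fields (`phi`, `laser_id`) are hypothetical names for illustration:

```python
def find_first_candidate(ref_nodes, first_encoded):
    """Among reference-frame nodes with the same laser index as the first
    encoded node, pick the one whose azimuth is <= the first encoded node's
    azimuth and closest to it (i.e., the largest such azimuth)."""
    best = None
    for n in ref_nodes:
        if n["laser_id"] != first_encoded["laser_id"]:
            continue
        if n["phi"] <= first_encoded["phi"] and (best is None or n["phi"] > best["phi"]):
            best = n
    return best

nodes = [{"name": "b", "phi": 30, "laser_id": 5},
         {"name": "c", "phi": 40, "laser_id": 5},
         {"name": "x", "phi": 34, "laser_id": 6}]
# Node b qualifies: same laser index, azimuth 30 <= 35 and closest to 35.
print(find_first_candidate(nodes, {"phi": 35, "laser_id": 5})["name"])  # b
```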
- the reference frame includes: a first reference frame and a second reference frame; the first reference frame is at least one frame obtained by performing global motion on the second reference frame; the second reference frame is an encoded frame of the previous K frames of the current frame, where K is an integer greater than 0.
- the first candidate node includes a first reference node and a second reference node; the second candidate node includes a third reference node and a fourth reference node; the first reference node and the third reference node belong to a first reference frame; the second reference node and the fourth reference node belong to a second reference frame.
- the first candidate node is the second decoded node at the decoding end.
- the first reference node in the first reference frame is node g
- the first reference frame includes, in addition to the first reference node: node e, node f, node o, and node p.
- the preset node may be at least one of node e, node f, node o, and node p.
- the first reference frame may be a previous frame of the current frame; the second reference frame may be a previous frame of the previous frame of the current frame.
- the current frame is Frame t
- the first reference frame may be Frame t-1
- the second reference frame may be Frame t-2
- t is an integer.
- a third reference frame may also be included, and the number of reference frames and the number of candidate nodes are not specifically limited.
- the first reference frame and the second reference frame are used as examples for detailed description.
- the preset node in the decoding end is at least one candidate node other than the first candidate node in the first reference frame of the encoding end, that is, the preset node is at least one candidate node in at least one second candidate node.
- determining at least one second candidate node in a reference frame according to the first candidate node may include:
- At least one fourth reference node is determined in the second reference frame according to the second reference node.
- the second angle parameter value of the first reference node is less than or equal to and is closest to the first angle parameter value of the first encoded node;
- the fourth angle parameter value of at least one third reference node is greater than the second angle parameter value of the first reference node, and at least one third reference node has the same radar index as the first reference node;
- the second angle parameter value of the second reference node is less than or equal to and is closest to the first angle parameter value of the first encoded node;
- the fifth angle parameter value of at least one fourth reference node is greater than the second angle parameter value of the second reference node, and at least one fourth reference node has the same radar index as the second reference node.
- determining at least one third reference node in the first reference frame based on the first reference node may include: in the first reference frame, determining at least one third reference node encoded sequentially after the first reference node according to the order of the prediction tree.
- At least one third reference node encoded sequentially after the first reference node may be node e, node f, node o, and node p.
- determining at least one fourth reference node in the second reference frame based on the second reference node may include: in the second reference frame, determining at least one fourth reference node encoded sequentially after the second reference node according to the order of the prediction tree.
- the at least one fourth reference node encoded sequentially after the second reference node may be node c, node d, node m, and node n.
- FIG23 shows a schematic diagram of a structure of geometric information inter-frame coding and decoding provided by an embodiment of the present application.
- the nodes to be coded in the current frame are filled with a grid, and the first coded node before the node to be coded is represented by a; there are a first reference frame and a second reference frame.
- a first candidate node b whose geometric parameters meet the first condition with the first encoded node a before the node to be encoded can be searched in the second reference frame; then, according to the encoding order of the prediction tree, the first fourth reference node c, the second fourth reference node d, the third fourth reference node m and the fourth fourth reference node n encoded after the first candidate node b (the second reference node) in the second reference frame are determined in sequence; among them, the first fourth reference node c, the second fourth reference node d, the third fourth reference node m and the fourth fourth reference node n are at least one second candidate node here.
- a first candidate node g (first reference node) whose geometric parameters meet the first condition with the first encoded node a before the node to be encoded can be searched in the first reference frame; then, according to the encoding order of the prediction tree, the first third reference node e, the second third reference node f, the third third reference node o and the fourth third reference node p encoded after the first candidate node g in the first reference frame are determined in sequence; among them, the first third reference node e, the second third reference node f, the third third reference node o and the fourth third reference node p are at least one second candidate node here.
- determining at least one second candidate node in the second reference frame according to the first candidate node may include:
- in the second reference frame, sequentially determining, in order of the magnitude of the horizontal azimuth angles: a first fourth reference node whose horizontal azimuth angle is greater than and closest to that of the first candidate node; a second fourth reference node whose horizontal azimuth angle is greater than and closest to that of the first fourth reference node; a third fourth reference node whose horizontal azimuth angle is greater than and closest to that of the second fourth reference node; and a fourth fourth reference node whose horizontal azimuth angle is greater than and closest to that of the third fourth reference node;
- At least one second candidate node is determined based on the first fourth reference node, the second fourth reference node, the third fourth reference node and the fourth fourth reference node; wherein the radar laser index serial numbers of these four fourth reference nodes are all the same as the radar laser index serial number of the previous encoded node.
- At least one second candidate node includes a first fourth reference node c, a second fourth reference node d, a third fourth reference node m, and a fourth fourth reference node n.
- in this case, the first encoded node a before the point to be encoded is first determined; then the first candidate node b whose geometric parameters meet the first condition with the first encoded node a is determined; and then the subsequent reference nodes are determined in sequence in order of the magnitude of the horizontal azimuth angles:
- the obtained first fourth reference node c, second fourth reference node d, third fourth reference node m and fourth fourth reference node n are at least one second candidate node here.
- determining at least one second candidate node in the first reference frame according to the first candidate node may include:
- in the first reference frame, sequentially determining, in order of the magnitude of the horizontal azimuth angles: a first third reference node whose horizontal azimuth angle is greater than and closest to that of the first candidate node; a second third reference node whose horizontal azimuth angle is greater than and closest to that of the first third reference node; a third third reference node whose horizontal azimuth angle is greater than and closest to that of the second third reference node; and a fourth third reference node whose horizontal azimuth angle is greater than and closest to that of the third third reference node;
- At least one second candidate node is determined based on the first third reference node, the second third reference node, the third third reference node and the fourth third reference node; wherein the radar laser index serial numbers of these four third reference nodes are all the same as the radar laser index serial number of the previous encoded node.
- At least one second candidate node includes a first third reference node e, a second third reference node f, a third third reference node o, and a fourth third reference node p.
- in this case, the first encoded node a before the point to be encoded is first determined; then the first candidate node g whose geometric parameters meet the first condition with the first encoded node a is determined; and then the subsequent reference nodes are determined in sequence in order of the magnitude of the horizontal azimuth angles:
- the obtained first third reference node e, second third reference node f, third third reference node o and fourth third reference node p are at least one second candidate node here.
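The greater-than-and-closest chain described above can be sketched as follows; the node records and field names are hypothetical, and the real codec operates on its own prediction-tree structures:

```python
def next_candidates(ref_nodes, first_candidate, count):
    """Repeatedly pick the node whose azimuth is greater than and closest to
    the previous pick, restricted to the first candidate's laser index."""
    picks, anchor = [], first_candidate
    for _ in range(count):
        nxt = None
        for n in ref_nodes:
            if n["laser_id"] != first_candidate["laser_id"] or n["phi"] <= anchor["phi"]:
                continue
            if nxt is None or n["phi"] < nxt["phi"]:
                nxt = n
        if nxt is None:
            break  # no more nodes with a larger azimuth on this laser
        picks.append(nxt)
        anchor = nxt
    return picks

# Toy frame mirroring the text: starting from candidate g, the chain
# yields e, f, o, p in increasing azimuth order.
frame = [{"name": k, "phi": p, "laser_id": 1}
         for k, p in [("g", 10), ("e", 12), ("f", 15), ("o", 20), ("p", 25)]]
print([n["name"] for n in next_candidates(frame, frame[0], 4)])  # ['e', 'f', 'o', 'p']
```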
- a possible preset order is: the first fourth reference node c, the second fourth reference node d, the first third reference node e, the second third reference node f, the third fourth reference node m, the fourth fourth reference node n, the third third reference node o, the fourth third reference node p, the first candidate node b and the first candidate node g; the preset order at this time is not limited to this order and is not specifically limited here.
- a possible preset order is: the first fourth reference node c, the second fourth reference node d, the first third reference node e, the second third reference node f, the third fourth reference node m, the fourth fourth reference node n, the third third reference node o and the fourth third reference node p; the preset order at this time is not limited to this order and is not specifically limited here.
- determining at least one fourth candidate node in the second reference frame according to the third candidate node may include:
- FIG24 shows a schematic diagram of the structure of another geometric information inter-frame encoding and decoding provided by an embodiment of the present application.
- the nodes to be encoded in the current frame are filled with a grid, and the first encoded node before the node to be encoded is represented by a; there are a first reference frame and a second reference frame, wherein the first reference frame is a GMC frame of global motion.
- a first candidate node g whose geometric parameters meet the first condition with the first encoded node a before the node to be encoded can be searched in the first reference frame; then, according to the encoding order of the prediction tree or according to the order of the size of the horizontal azimuth angles, the first third reference node e and the second third reference node f to be encoded after the first candidate node g in the first reference frame are determined in sequence; wherein the first third reference node e and the second third reference node f are at least one second candidate node here.
- a first candidate node b whose geometric parameters meet the first condition with the first encoded node a before the node to be encoded can be searched in the second reference frame; then, according to the encoding order of the prediction tree or according to the order of the magnitude of the horizontal azimuth angles, the first fourth reference node c, the second fourth reference node d, the third fourth reference node m and the fourth fourth reference node n encoded after the first candidate node b in the second reference frame are determined in sequence; among them, the first fourth reference node c, the second fourth reference node d, the third fourth reference node m and the fourth fourth reference node n are at least one second candidate node here.
- the inter-frame candidate node set may include 6 candidate nodes, 8 candidate nodes, 10 candidate nodes or other numbers of candidate nodes.
- the number of candidate nodes can continue to increase, or a new candidate node can be formed by averaging every several candidate nodes, which is not specifically limited here.
- the inter-frame candidate node set has a certain preset order, and the more likely the inter-frame prediction mode appears in the preset order, the higher its ranking, that is, the smaller the corresponding inter-frame prediction mode value, thereby improving performance and reducing the number of coding bits.
- S203 Perform local motion processing on geometric parameters of at least one candidate node among the at least one second candidate node to obtain at least one updated second candidate node.
- S203 can be implemented by S2031, S2032, S2033 and S2034 as follows:
- S2031. Determine at least one third candidate node matching the preset node from the at least one second candidate node.
- the preset node is at least one candidate node in the reference frame except the first candidate node.
- At least one third candidate node is determined from at least one second candidate node.
- the third candidate node may be in the first reference frame or in the second reference frame.
- the second reference node in the second reference frame is node b
- the second reference frame, in addition to the second reference node, also includes: node c, node d, node m, and node n.
- the preset node may be at least one of node c, node d, node m, and node n.
- S2032 may be implemented by S20321 and S20322 as follows:
- the third geometric parameter of the first encoded node is subjected to a first coordinate transformation process to determine the fifth geometric parameter of the first encoded node; the fourth geometric parameter of the first candidate node is subjected to a first coordinate transformation process to determine the sixth geometric parameter of the first candidate node.
- the first coordinate transformation processing represents converting the radar coordinates into Cartesian coordinates.
- the third geometric parameter of the first encoded node is a radar coordinate
- the third geometric parameter of the first encoded node, a radar coordinate, is converted into the fifth geometric parameter of the first encoded node
- the fifth geometric parameter of the first encoded node is a Cartesian coordinate.
- the radar coordinates (radius, φ, laserIdx) of the first encoded node are converted into the Cartesian coordinates (x, y, z) of the first encoded node.
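This first coordinate transformation can be sketched under the usual LiDAR model (x = radius·cosφ, y = radius·sinφ, z = radius·tanθ_i + Z_i); the function name, parameter layout, and the origin handling are illustrative assumptions:

```python
import math

def radar_to_cartesian(radius, phi, laser_idx, laser_tan, laser_z,
                       origin=(0.0, 0.0, 0.0)):
    """Convert radar coordinates (radius, phi, laserIdx) to Cartesian (x, y, z).
    laser_tan[i] is tan(theta_i) of laser i; laser_z[i] is its vertical offset."""
    x = origin[0] + radius * math.cos(phi)
    y = origin[1] + radius * math.sin(phi)
    z = origin[2] + radius * laser_tan[laser_idx] + laser_z[laser_idx]
    return (x, y, z)

# One laser with tan(theta) = 0.5 and vertical offset 1.0, azimuth 0:
print(radar_to_cartesian(10.0, 0.0, 0, [0.5], [1.0]))  # (10.0, 0.0, 6.0)
```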
- determining local motion information based on the fifth geometric parameter of the first encoded node and the sixth geometric parameter of the first candidate node may include: subtracting the fifth geometric parameter of the first encoded node from the sixth geometric parameter of the first candidate node to determine a difference value; or, rotating or translating the fifth geometric parameter of the first encoded node and the sixth geometric parameter of the first candidate node to determine a motion vector; and determining the local motion information using the difference value or the motion vector.
- the fifth geometric parameter of the first encoded node is the Cartesian coordinate of the first encoded node
- the sixth geometric parameter of the first candidate node is the Cartesian coordinate of the first candidate node
- the Cartesian coordinates of the first encoded node and the Cartesian coordinates of the first candidate node are subtracted to obtain a difference value between the first encoded node and the first candidate node.
- the Cartesian coordinates of the first encoded node and the Cartesian coordinates of the first candidate node are rotated or translated to determine a motion vector between the first encoded node and the first candidate node.
- a difference value or a motion vector is determined as the local motion information.
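The difference-based variant of the local motion information can be sketched as follows; the sign convention (encoded node minus candidate node) is an assumption for illustration:

```python
def local_motion(encoded_xyz, candidate_xyz):
    # Local motion information as the componentwise difference of the
    # two Cartesian positions (sign convention assumed).
    return tuple(a - b for a, b in zip(encoded_xyz, candidate_xyz))

def apply_motion(xyz, motion):
    # Shift a candidate node's Cartesian position by the local motion.
    return tuple(c + m for c, m in zip(xyz, motion))

mv = local_motion((5.0, 2.0, 1.0), (4.0, 1.5, 1.0))
print(mv)                                  # (1.0, 0.5, 0.0)
print(apply_motion((7.0, 3.0, 2.0), mv))   # (8.0, 3.5, 2.0)
```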
- a first coordinate transformation is performed on the tenth geometric parameter of each third candidate node to determine the eleventh geometric parameter of each third candidate node; the eleventh geometric parameter of each third candidate node is respectively added to the local motion information to determine the twelfth geometric parameter of each third candidate node; a second coordinate transformation is performed on the twelfth geometric parameter of each third candidate node to determine the updated tenth geometric parameter of each third candidate node.
- the second coordinate transformation processing represents the conversion of Cartesian coordinates into radar coordinates.
- the distance between the node and the origin is calculated based on the geometric parameters of the node and the geometric parameters of the origin to determine the depth information (radius); the horizontal azimuth is determined based on the first coordinate (x) and the second coordinate (y) in the geometric parameters of the node.
- the radar laser index value is determined based on the geometric parameters of the node, the tangent values of all radars and the offset position of each radar in the vertical direction.
- the twelfth geometric parameter of each third candidate node is subjected to the second coordinate transformation process to determine the updated tenth geometric parameter of each third candidate node.
- the conversion of the twelfth geometric parameter into the updated tenth geometric parameter can be implemented as follows:
- the twelfth geometric parameter of the third candidate node is (x, y, z).
- the updated tenth geometric parameter is (radius, LaserIdx).
- the LaserIdx of a node or point is calculated as follows: let the geometric coordinates of the point be pointPos, the starting coordinates of the laser ray be LidarOrigin, the number of lasers be LaserNum, the tangent value of each laser be tan θ<sub>i</sub>, and the vertical offset of each laser be Z<sub>i</sub>; then:
- the radius is:
- LidarOrigin is generally 0.
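The Cartesian-to-radar conversion described above can be sketched as follows. The exact formulas are an assumption based on the description: the radius is the planar distance to the origin, the azimuth comes from x and y, and the laser index is the laser whose elevation tangent best matches the point.

```python
import math

def cartesian_to_radar(point_pos, tan_thetas, z_offsets, lidar_origin=(0, 0, 0)):
    """Convert Cartesian coordinates (x, y, z) into radar coordinates
    (radius, phi, laser_idx); a simplified sketch, not the normative formulas."""
    x = point_pos[0] - lidar_origin[0]
    y = point_pos[1] - lidar_origin[1]
    z = point_pos[2] - lidar_origin[2]
    radius = math.hypot(x, y)                      # depth information
    phi = math.atan2(y, x)                         # horizontal azimuth
    r = radius if radius > 0 else 1.0              # avoid division by zero
    # choose the laser index i minimising |(z - Z_i)/radius - tan(theta_i)|
    laser_idx = min(range(len(tan_thetas)),
                    key=lambda i: abs((z - z_offsets[i]) / r - tan_thetas[i]))
    return radius, phi, laser_idx
```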
- S2034: Take the updated at least one third candidate node, together with the candidate nodes among the at least one second candidate node other than the at least one third candidate node, as the at least one updated second candidate node.
- that is, the updated at least one third candidate node replaces the original at least one third candidate node, and the candidate nodes among the at least one second candidate node other than the at least one third candidate node are retained; together they are used as the at least one updated second candidate node.
- the updated at least one third candidate node is the updated node e
- the candidate nodes among the at least one second candidate node except the at least one third candidate node are node f, node o and node p
- the updated node e, node f, node o and node p are used as the at least one updated second candidate node.
- S203 may also include: determining the local motion information of each third candidate node based on the third geometric parameter of the first encoded node and the fourth geometric parameter of the first reference node; updating the tenth geometric parameter of the corresponding third candidate node based on the local motion information of each third candidate node, and determining the updated tenth geometric parameter of at least one third candidate node, thereby determining at least one updated third candidate node; using the candidate nodes other than at least one third candidate node among the updated at least one third candidate node and at least one third reference node as at least one updated third reference node; using the updated at least one third reference node and at least one fourth reference node as at least one updated second candidate node.
- At least one third reference node may include: the first third reference node (node e), the second third reference node (node f), the third third reference node (node o) and the fourth third reference node (node p).
- At least one fourth reference node in the second reference frame may include: the first fourth reference node (node c), the second fourth reference node (node d), the third fourth reference node (node m) and the fourth fourth reference node (node n).
- At this time, at least one third candidate node is: the first third reference node (node e) and the second third reference node (node f).
- the updated at least one third candidate node includes: the updated first third reference node (node e) and the updated second third reference node (node f).
- the updated at least one third reference node includes: the updated first third reference node (node e), the updated second third reference node (node f), the third third reference node (node o) and the fourth third reference node (node p).
- the updated at least one second candidate node may include: an updated first third reference node (node e), an updated second third reference node (node f), a third third reference node (node o), a fourth third reference node (node p), a first fourth reference node (node c), a second fourth reference node (node d), a third fourth reference node (node m) and a fourth fourth reference node (node n).
- At least one third reference node, at least one updated fourth reference node, the second reference node and the first reference node have a preset order.
- cost values are calculated for the first candidate node and each updated second candidate node, respectively, to obtain multiple rate-distortion cost results; the candidate node corresponding to the minimum rate-distortion cost among the multiple rate-distortion cost results is determined as the prediction node; based on the prediction node, the geometric prediction value of the node to be encoded is determined.
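The minimum-rate-distortion selection described above can be sketched as follows; the cost function itself is left abstract, and the names are illustrative.

```python
def select_prediction_node(candidates, cost_fn):
    """Evaluate the rate-distortion cost of each candidate node and return the
    candidate with the minimum cost together with its index in the preset order."""
    costs = [cost_fn(c) for c in candidates]
    best = min(range(len(costs)), key=costs.__getitem__)
    return candidates[best], best
```

The returned index corresponds to the predicted node index value that is later encoded into the bitstream.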
- the method may further include: determining a predicted node index value corresponding to the node to be encoded in a preset order; encoding the predicted node index value, and writing the obtained encoded bits into a bitstream.
- the method may further include: determining a geometric prediction residual value of the node to be encoded based on the geometric prediction value of the node to be encoded; encoding the geometric prediction residual value of the node to be encoded, and writing the obtained encoding bits into the bitstream.
- determining the geometric prediction residual value of the node to be encoded based on the geometric prediction value of the node to be encoded can include: determining the initial residual value of the node to be encoded based on the geometric prediction value of the node to be encoded; quantizing the initial residual value of the node to be encoded based on a quantization parameter to obtain the geometric prediction residual value of the node to be encoded.
- determining the initial residual value of the node to be encoded based on the geometric prediction value of the node to be encoded may include: determining the original value of the node to be encoded; and determining the initial residual value of the node to be encoded by performing a subtraction operation between the original value of the node to be encoded and the geometric prediction value of the node to be encoded.
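The residual derivation and quantization described above can be sketched as follows; the uniform scalar quantizer with a single step size is a simplifying assumption, not the exact scheme of the specification.

```python
def quantize_residual(original, predicted, step):
    """initial residual = original - prediction (component-wise), followed by
    uniform scalar quantization with step size `step` (assumed scheme)."""
    initial = [o - p for o, p in zip(original, predicted)]
    return [round(r / step) for r in initial]

def dequantize_residual(quantized, step):
    """Inverse quantization used at the decoder to recover the residual."""
    return [q * step for q in quantized]
```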
- the method may further include: encoding the quantization parameter, and writing the obtained encoded bits into a bitstream.
- For example, taking geometric position information as an example: first determine the geometric prediction value of the node to be encoded; then perform a difference operation between the geometric position information of the current node and the geometric prediction value to obtain an initial residual value; then quantize the initial residual value using the quantization parameter to determine the geometric prediction residual value. Finally, through continuous iteration, encode the inter-frame prediction mode value, geometric prediction residual value, prediction tree structure, quantization parameter and other parameters for the position information of each node in the prediction tree, and write the obtained coded bits into the bitstream.
- the encoding method is mainly for encoding optimization of the inter-frame prediction mode value.
- it can also be determined whether the current node uses the inter-frame prediction mode and generate a flag to identify it. Therefore, in some embodiments, the method can also include:
- the first identification information indicates whether the node to be encoded uses an inter-frame prediction mode
- when the prediction mode is the inter-frame prediction mode, the step of determining a first encoded node before the node to be encoded in the current frame is performed.
- determining the prediction mode of the node to be encoded and generating the first identification information based on the prediction mode may include:
- if the node to be encoded does not use the inter-frame prediction mode, the value of the first identification information is determined to be the first value; if the node to be encoded uses the inter-frame prediction mode, the value of the first identification information is determined to be the second value.
- the first value is different from the second value, and the first value and the second value can be in parameter form or in digital form.
- the first identification information can be a parameter written in the profile or a flag value, which is not specifically limited here.
- the first value can be set to 1 and the second value can be set to 0; or, the first value can be set to 0 and the second value can be set to 1; or, the first value can be set to true and the second value can be set to false; or, the first value can be set to false and the second value can be set to true; but this is not specifically limited here.
- the method may further include: encoding the first identification information, and writing the obtained encoded bits into a bit stream.
- if the prediction mode of the node to be encoded is determined to be the inter-frame prediction mode, the generated first identification information is the second value; if the prediction mode of the node to be encoded is determined not to be the inter-frame prediction mode, the generated first identification information is the first value.
- the first value is 0, and the second value is 1.
- if it is determined that the node to be encoded does not use the inter-frame prediction mode, the first identification information is 0, that is, there is no need to execute the encoding method of the embodiment of the present application; if it is determined that the node to be encoded uses the inter-frame prediction mode, the first identification information is 1, that is, the encoding method of the embodiment of the present application needs to be executed. In this way, at the decoding end, by decoding the first identification information, it can be determined whether the node to be decoded uses the inter-frame prediction mode, thereby improving decoding efficiency.
- a flag bit may be set to determine whether to enable the encoding method of the embodiment of the present application. Therefore, in some embodiments, the method may further include:
- the second identification information indicates whether the local motion processing mode is enabled for the node to be encoded
- when the local motion processing mode is enabled, the step of determining the first encoded node before the node to be encoded in the current frame is performed.
- determining whether to enable the local motion processing mode and generating the second identification information may include:
- if the node to be encoded does not enable the local motion processing mode, the value of the second identification information is determined to be the first value; if the node to be encoded enables the local motion processing mode, the value of the second identification information is determined to be the second value.
- the first value is different from the second value, and the first value and the second value can be in parameter form or in digital form.
- the second identification information can be a parameter written in the profile or a flag value, which is not specifically limited here.
- the first value can be set to 1, and the second value can be set to 0; or, the first value can be set to 0, and the second value can be set to 1; or, the first value can be set to true, and the second value can be set to false; or, the first value can be set to false, and the second value can be set to true; but this is not specifically limited here.
- the method may further include: encoding the value of the second identification information, and writing the obtained encoded bits into the bit stream.
- if it is determined that the node to be encoded enables the local motion processing mode, the generated second identification information is the second value; if it is determined that the node to be encoded does not enable the local motion processing mode, the generated second identification information is the first value.
- the first value is 0, and the second value is 1.
- if it is determined that the node to be encoded does not enable the local motion processing mode, the second identification information is 0, that is, there is no need to execute the encoding method of the embodiment of the present application; if it is determined that the node to be encoded enables the local motion processing mode, the second identification information is 1, that is, the encoding method of the embodiment of the present application needs to be executed.
- the value of the second identification information can also be directly obtained through decoding, so that it can be determined whether the node to be decoded enables local motion processing, thereby improving decoding efficiency.
- a 1-bit flag (i.e., the second identification information) can be used to indicate whether the encoding method of the embodiment of the present application is enabled or not.
- This flag can be placed in the header information of the high-level syntax element, such as the geometry header; and this flag can be conditionally enabled under certain conditions. If this flag does not appear in the bitstream, its default value is a fixed value.
- the embodiment of the present application further provides a code stream, which is generated by bit encoding according to the information to be encoded; wherein the information to be encoded may include at least one of the following:
- the geometric prediction residual value, quantization parameter, prediction node index value, first identification information and second identification information of the node to be encoded
- the first identification information is used to indicate whether the node to be encoded uses the inter-frame prediction mode
- the second identification information is used to indicate whether the node to be encoded enables the local motion processing mode.
- the encoder can encode the information to be encoded and write the obtained encoded bits into the bitstream, which is then transmitted from the encoder to the decoder. Later, at the decoder, by decoding the bitstream, the geometric prediction residual value, quantization parameter, inter-frame prediction mode value and other information of the current node can be obtained, so that the geometric reconstruction information of the current node can be restored.
- This embodiment provides a coding method, which determines a first coded node preceding a node to be coded in a current frame; determines a first candidate node having at least one geometric parameter identical to the first coded node in a reference frame, and determines at least one second candidate node in the reference frame based on the first candidate node; performs local motion processing on the geometric parameters of at least one candidate node among the at least one second candidate node to obtain at least one updated second candidate node; and determines a geometric prediction value of the node to be coded based on the first candidate node and the at least one updated second candidate node.
- since the encoder performs local motion processing on the geometric parameters of at least one candidate node among the at least one second candidate node to obtain at least one updated second candidate node, and determines the geometric prediction value of the node to be coded based on the first candidate node and the at least one updated second candidate node, the geometric prediction value carries prior information related to local motion, which can improve the accuracy of inter-frame coding, thereby improving the coding efficiency of the geometric information of the point cloud and improving the coding performance of the point cloud.
- the main focus here is to expand the candidate nodes in the inter-frame candidate node set so that the inter-frame candidate nodes can be fully selected, thereby achieving better prediction.
- the tool proposed in the present application can use a 1-bit flag to indicate whether it is enabled or not.
- This flag is placed in the header information of a high-level syntax element, such as the geometry header, and can be conditionally enabled under certain conditions. If this flag does not appear in the bitstream, its default value is a fixed value. Similarly, the flag needs to be decoded at the decoding end; if this flag does not appear in the bitstream, it does not need to be decoded, and its default value is a fixed value.
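The flag-with-default behaviour described above can be sketched as follows; the function and parameter names are illustrative, not syntax element names from the specification.

```python
def parse_enable_flag(flag_present, read_bit, default=0):
    """Read the 1-bit enable flag from the geometry header when it is present
    in the bitstream; otherwise return a fixed default value without decoding."""
    return read_bit() if flag_present else default
```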
- the prediction tree is traversed to find the previous encoded node a of the current point to be encoded;
- Points c, d, m and n encoded or decoded after point b in the previous reference frame are used as candidate points between frames;
- points e and f, encoded or decoded after point g in the previous frame that has undergone global motion and serves as the reference frame, are used as inter-frame candidate points;
- the inter-frame prediction point e is refined using local motion:
- the inter-frame prediction point is transformed to convert the radar coordinates of point e into Cartesian coordinates
- the point coordinates of the point cloud input are (x, y, z).
- the position information of the point cloud is converted into (radius, φ, laserIdx)
- the LaserIdx of the node or point is calculated as follows: let the geometric coordinates of the point be pointPos, the starting coordinates of the laser ray be LidarOrigin, the number of lasers be LaserNum, the tangent value of each laser be tan θ<sub>i</sub>, and the vertical offset of each laser be Z<sub>i</sub>; then:
- where pointPos[0] = x, pointPos[1] = y, pointPos[2] = z.
- radius is
- LidarOrigin is generally 0.
- the method of converting the radar coordinate system to the Cartesian coordinate system is the inverse process of the above.
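The inverse process mentioned above (radar coordinates back to Cartesian coordinates) can be sketched as follows; the exact formulas are an assumption consistent with the forward conversion described earlier.

```python
import math

def radar_to_cartesian(radius, phi, laser_idx, tan_thetas, z_offsets,
                       lidar_origin=(0, 0, 0)):
    """Convert radar coordinates (radius, phi, laser_idx) back to Cartesian
    coordinates (x, y, z); a simplified sketch, not the normative formulas."""
    x = radius * math.cos(phi) + lidar_origin[0]
    y = radius * math.sin(phi) + lidar_origin[1]
    # recover elevation from the laser's tangent and its vertical offset
    z = radius * tan_thetas[laser_idx] + z_offsets[laser_idx] + lidar_origin[2]
    return x, y, z
```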
- selecting the inter-frame prediction mode as c, d, m or n corresponds to one parameter for selecting the context model, while selecting the inter-frame prediction mode as e or f corresponds to another parameter for selecting the context model, which is used for the subsequent entropy encoding/decoding of the residual coefficients.
- the geometric position information of the node is predicted to obtain the prediction residual, and the geometric prediction residual is quantized using the quantization parameter. Finally, through continuous iteration, the prediction mode, prediction residual, prediction tree structure, quantization parameter and other parameters of the prediction tree node position information are encoded to generate a binary code stream.
- the decoding end reconstructs the prediction tree structure by continuously parsing the bitstream, and traverses the prediction tree to find the previous decoded node a of the current point to be decoded;
- the prediction mode for decoding selects inter-frame prediction points in the order c, d, e, f, m, n.
- the prediction point is selected from the following up to 10 candidate points using the decoded prediction mode:
- Points c, d, m and n encoded or decoded after point b in the previous reference frame are used as candidate points between frames;
- a point g having the same φ and laserID as node a (which was decoded before the current point) is searched for, and points e and f encoded or decoded after point g in the previous frame that has undergone global motion are used as inter-frame candidate points; or,
- points e and f, encoded or decoded after point g in the previous frame that has undergone global motion and serves as the reference frame, are used as inter-frame candidate points;
- the inter-frame prediction point e is refined using local motion:
- the inter-frame prediction point is transformed to convert the radar coordinates of point e into Cartesian coordinates
- the point coordinates of the point cloud input are (x, y, z).
- the position information of the point cloud is converted into (radius, φ, laserIdx)
- the LaserIdx of a node or point is calculated as follows: let the geometric coordinates of the point be pointPos, the starting coordinates of the laser ray be LidarOrigin, the number of lasers be LaserNum, the tangent value of each laser be tan θ<sub>i</sub>, and the vertical offset of each laser be Z<sub>i</sub>; then:
- where pointPos[0] = x, pointPos[1] = y, pointPos[2] = z.
- radius is
- LidarOrigin is generally 0.
- the method of converting the radar coordinate system to the Cartesian coordinate system is the inverse process of the above.
- selecting the inter-frame prediction mode as c, d, m or n corresponds to one parameter for selecting the context model, while selecting the inter-frame prediction mode as e or f corresponds to another parameter for selecting the context model, which is used for the subsequent entropy encoding/decoding of the residual coefficients.
- the geometric position prediction residual information and quantization parameters of the prediction node are obtained through analysis, and the prediction residual is dequantized to restore the reconstructed geometric position information of each node, and finally the geometric reconstruction at the decoding end is completed.
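The decoder-side reconstruction described above can be sketched as follows; the simple "prediction plus dequantized residual" form with a single step size is an assumed simplification.

```python
def reconstruct_geometry(prediction, quantized_residual, step):
    """Reconstructed geometric position = geometric prediction value plus the
    dequantized residual (assuming uniform dequantization residual * step)."""
    return [p + q * step for p, q in zip(prediction, quantized_residual)]
```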
- Table 2-1 shows the test results provided by the embodiment of the present application in the case of lossless geometric position and lossless attributes, from which it can be seen that the encoding and decoding performance is improved.
- the decoder 250 may include: a decoding unit 2501, a first determination unit 2502 and a first local motion processing unit 2503; wherein,
- the decoding unit 2501 is configured to parse the bitstream, determine the predicted node index value corresponding to the node to be decoded; determine the first decoded node before the node to be decoded in the current frame;
- the first determining unit 2502 is configured to determine a prediction node according to the prediction node index value and the first decoded node;
- the first local motion processing unit 2503 is configured to perform local motion processing on the first geometric parameter of the prediction node based on the first decoded node to determine the second geometric parameter of the prediction node.
- the first local motion processing unit 2503 is further configured to determine the local motion information of the predicted node based on the fourth geometric parameter of the second decoded node and the third geometric parameter of the first decoded node when the predicted node index value represents that the predicted node is a preset node; the second angle parameter value of the second decoded node is less than or equal to and closest to the first angle parameter value of the first decoded node; the second decoded node is determined based on the first decoded node in the reference frame where the predicted node is located; based on the local motion information, perform local motion processing on the first geometric parameter of the predicted node to determine the second geometric parameter of the predicted node;
- the first determination unit 2502 is further configured to determine a geometric prediction value of the node to be decoded according to the first geometric parameter or the second geometric parameter.
- the reference frame where the prediction node is located includes: a first reference frame and a second reference frame.
- the first reference frame is at least one frame obtained by performing global motion on the second reference frame.
- the second reference frame is a decoded frame that is K frames before the current frame, where K is an integer greater than 0.
- the preset node is at least one decoded node in the first reference frame except the second decoded node.
- the first local motion processing unit 2503 is also configured to perform first coordinate transformation processing on the third geometric parameters of the first decoded node and the fourth geometric parameters of the second decoded node, respectively, to determine the fifth geometric parameters of the first decoded node and the sixth geometric parameters of the second decoded node; based on the fifth geometric parameters of the first decoded node and the sixth geometric parameters of the second decoded node, determine the local motion information of the predicted node.
- the first local motion processing unit 2503 is further configured to subtract the fifth geometric parameter of the first decoded node from the sixth geometric parameter of the second decoded node to determine a difference value; or to rotate or translate the fifth geometric parameter of the first decoded node and the sixth geometric parameter of the second decoded node to determine a motion vector; and to determine the local motion information using the difference value or the motion vector.
- the first local motion processing unit 2503 is further configured to perform a first coordinate transformation on the first geometric parameter of the prediction node to determine the seventh geometric parameter of the prediction node; add the seventh geometric parameter of the prediction node and the local motion information to obtain the eighth geometric parameter of the prediction node; perform a second coordinate transformation on the eighth geometric parameter of the prediction node to determine the ninth geometric parameter of the prediction node; and determine the ninth geometric parameter of the prediction node as the second geometric parameter of the prediction node.
- the first determining unit 2502 is further configured to use the first geometric parameter of the predicted node as the second geometric parameter when the predicted node index value indicates that the predicted node is not a preset node.
- the first determination unit 2502 is further configured to determine, based on the first decoded node, a second decoded node in a reference frame where the predicted node represented by the predicted node index value is located; the second angle parameter value of the second decoded node is less than or equal to and closest to the first angle parameter value of the first decoded node, and the second decoded node has the same radar index as the first decoded node; based on the second decoded node, determine the predicted node, the third angle parameter value of the predicted node is greater than the second angle parameter value of the second decoded node, and the predicted node has the same radar index as the second decoded node.
- the first determination unit 2502 is further configured to use the second angle parameter value corresponding to the second decoded node as the third angle parameter value of the predicted node; or, in the reference frame where the predicted node is located, determine at least one next decoded node based on the predicted node index value and the second decoded node; wherein the at least one next decoded node includes the predicted node; the next angle parameter value of the next decoded node is greater than and closest to the previous angle parameter value of its previous decoded node; and the at least one next decoded node has the same radar index as the second decoded node.
- the first determining unit 2502 is further configured to determine, based on the first decoded node, a second decoded node in the reference frame where the prediction node represented by the prediction node index value is located; a second angle parameter value of the second decoded node is less than or equal to and closest to the first angle parameter value of the first decoded node, and the second decoded node has the same radar index as the first decoded node; the second angle parameter value corresponding to the second decoded node is used as the third angle parameter value of the prediction node; or, among the decoded nodes after the second decoded node, the prediction node is determined according to the order of the prediction tree.
- the decoding unit 2501 is further configured to parse the bitstream to determine the geometric residual information and quantization parameter of the node to be decoded;
- the first determination unit 2502 is further configured to perform inverse quantization processing on the geometric residual information based on the quantization parameter to determine the geometric prediction residual; and determine the reconstructed geometric parameters of the node to be decoded based on the geometric prediction residual and the geometric prediction value.
- the first determination unit 2502 is further configured to determine the context model of the node to be decoded according to the predicted node index value, and to decode the bitstream using the context model to obtain the geometric residual information of the node to be decoded.
- the decoding unit 2501 is further configured to parse the code stream to determine the first identification information
- the first determining unit 2502 is further configured to determine a prediction node according to the prediction node index value and the first decoded node when the first identification information indicates that the node to be decoded uses an inter-frame prediction mode.
- the first determination unit 2502 is further configured to, if the first identification information is a first value, determine that the first identification information indicates that the node to be decoded does not use the inter-frame prediction mode; if the first identification information is a second value, determine that the first identification information indicates that the node to be decoded uses the inter-frame prediction mode.
- the decoding unit 2501 is further configured to parse the code stream to determine the second identification information
- the first determination unit 2502 is further configured to, when the second identification information indicates that the node to be decoded enables local motion processing, execute the step of performing local motion processing on the first geometric parameters of the predicted node based on the first decoded node to determine the second geometric parameters of the predicted node.
- the first determination unit 2502 is further configured to, if the second identification information is a first value, determine that the second identification information indicates that the node to be decoded does not enable the local motion processing mode; if the second identification information is a second value, determine that the second identification information indicates that the node to be decoded enables the local motion processing mode.
- the first determining unit 2502 is further configured to determine a previous decoded node of the node to be decoded based on a decoding order of the prediction tree, and use the previous decoded node as the first decoded node.
- the first coordinate transformation process represents converting radar coordinates into Cartesian coordinates; and the second coordinate transformation process represents converting Cartesian coordinates into radar coordinates.
- a "unit" can be a part of a circuit, a part of a processor, a part of a program or software, etc., and of course it can also be a module, or it can be non-modular.
- the components in this embodiment can be integrated into a processing unit, or each unit can exist physically separately, or two or more units can be integrated into one unit.
- the above-mentioned integrated unit can be implemented in the form of hardware or in the form of a software functional module.
- if the integrated unit is implemented in the form of a software functional module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
- this embodiment provides a computer-readable storage medium, which is applied to the decoder 250, and the computer-readable storage medium stores a computer program. When the computer program is executed by the first processor, it implements any method in the above embodiments.
- the decoder 250 may include: a first communication interface 2601, a first memory 2602 and a first processor 2603; each component is coupled together through a first bus system 2604. It can be understood that the first bus system 2604 is used to realize the connection and communication between these components.
- the first bus system 2604 also includes a power bus, a control bus and a status signal bus. However, for the sake of clarity, various buses are marked as the first bus system 2604 in Figure 26. Among them,
- the first communication interface 2601 is used to receive and send signals during the process of sending and receiving information with other external network elements;
- the first memory 2602 is used to store a computer program that can be run on the first processor 2603;
- the first processor 2603 is configured to, when running the computer program, execute:
- the first processor 2603 is further configured to execute the method described in any one of the aforementioned embodiments when running the computer program.
- the first memory 2602 in the embodiment of the present application can be a volatile memory or a non-volatile memory, or can include both volatile and non-volatile memories.
- the non-volatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
- the volatile memory can be a random access memory (RAM), which is used as an external cache.
- by way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
- the first processor 2603 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method can be completed by the hardware integrated logic circuit or software instructions in the first processor 2603.
- the above-mentioned first processor 2603 can be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components.
- the methods, steps and logic block diagrams disclosed in the embodiments of the present application can be implemented or executed by the first processor 2603.
- the general-purpose processor may be a microprocessor, or any other conventional processor.
- the steps of the methods disclosed in the embodiments of the present application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor.
- the software module may be located in a storage medium that is mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
- the storage medium is located in the first memory 2602, and the first processor 2603 reads the information in the first memory 2602 and completes the steps of the above method in combination with its hardware.
- the processing unit can be implemented in one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), general processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application or a combination thereof.
- the technology described in this application can be implemented by a module (such as a process, function, etc.) that performs the functions described in this application.
- the software code can be stored in a memory and executed by a processor.
- the memory can be implemented in the processor or outside the processor.
- the present embodiment provides a decoder, in which the prediction nodes used for inter-frame prediction are mainly optimized. Specifically, local motion processing is performed on the geometric parameters of the prediction nodes, so that the inter-frame prediction can better predict the decoding nodes, thereby improving the accuracy of the inter-frame prediction, improving the encoding efficiency of the geometric information, and further improving the encoding and decoding performance of the point cloud.
- FIG. 27 shows a schematic diagram of the composition structure of an encoder provided by an embodiment of the present application.
- the encoder 270 may include: a second determination unit 2701, a second local motion processing unit 2702, and a prediction unit 2703; wherein,
- the second determining unit 2701 is configured to determine a first coded node preceding the node to be coded in the current frame; determine a first candidate node having at least one geometric parameter identical to the first coded node in the reference frame, and determine at least one second candidate node in the reference frame according to the first candidate node;
- the second local motion processing unit 2702 is configured to perform local motion processing on the geometric parameters of at least one candidate node among the at least one second candidate node to obtain the updated at least one second candidate node;
- the prediction unit 2703 is configured to determine a geometric prediction value of the node to be encoded based on the first candidate node and at least one updated second candidate node.
- the second angle parameter value of the first candidate node is less than or equal to and closest to the first angle parameter value of the first encoded node.
- the second local motion processing unit 2702 is further configured to determine at least one third candidate node that matches the preset node from the at least one second candidate node; determine local motion information based on the third geometric parameter of the first encoded node and the fourth geometric parameter of the first candidate node; update the tenth geometric parameter of each of the third candidate nodes based on the local motion information to obtain the updated tenth geometric parameter of the at least one third candidate node, thereby determining the updated at least one third candidate node; and use the candidate nodes among the updated at least one third candidate node and the at least one second candidate node except the at least one third candidate node as the updated at least one second candidate node.
- the reference frame includes: a first reference frame and a second reference frame.
- the first reference frame is at least one frame obtained by performing global motion on the second reference frame.
- the second reference frame is an encoded frame that is K frames before the current frame, where K is an integer greater than 0.
- the preset node is at least one candidate node in the first reference frame except the first candidate node.
- the second local motion processing unit 2702 is further configured to perform first coordinate transformation processing on the third geometric parameter of the first encoded node and the fourth geometric parameter of the first candidate node, respectively, to determine the fifth geometric parameter of the first encoded node and the sixth geometric parameter of the first candidate node; and determine the local motion information based on the fifth geometric parameter of the first encoded node and the sixth geometric parameter of the first candidate node.
- the second local motion processing unit 2702 is further configured to subtract the fifth geometric parameter of the first encoded node from the sixth geometric parameter of the first candidate node to determine a difference value; or, to rotate or translate the fifth geometric parameter of the first encoded node and the sixth geometric parameter of the first candidate node to determine a motion vector; and to determine the local motion information using the difference value or the motion vector.
- the second local motion processing unit 2702 is further configured to perform a first coordinate transformation on the tenth geometric parameter of each third candidate node to determine the eleventh geometric parameter of each third candidate node; add the eleventh geometric parameter of each third candidate node to the local motion information to determine the twelfth geometric parameter of each third candidate node; perform a second coordinate transformation on the twelfth geometric parameter of each third candidate node to determine the updated tenth geometric parameter of each third candidate node.
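The local-motion steps above (transform, difference, add, inverse transform) can be sketched together. This is a hand-rolled illustration under the same simplifying assumption as before (planar radius, azimuth, pass-through height; no laser index), not the codec's actual arithmetic.

```python
import math

def to_cart(r, phi, z):
    # first coordinate transformation: radar -> Cartesian (simplified)
    return (r * math.cos(phi), r * math.sin(phi), z)

def to_radar(x, y, z):
    # second coordinate transformation: Cartesian -> radar
    return (math.hypot(x, y), math.atan2(y, x), z)

def local_motion(coded, candidate):
    # local motion information: difference of the Cartesian-transformed
    # geometric parameters of the coded node and the candidate node
    a, b = to_cart(*coded), to_cart(*candidate)
    return tuple(ai - bi for ai, bi in zip(a, b))

def apply_local_motion(node, motion):
    # transform to Cartesian, add the local motion, transform back
    c = to_cart(*node)
    moved = tuple(ci + mi for ci, mi in zip(c, motion))
    return to_radar(*moved)

mv = local_motion((5.0, 0.3, 2.0), (4.8, 0.28, 2.0))
# applying the motion to the candidate moves it onto the coded node
check = apply_local_motion((4.8, 0.28, 2.0), mv)
```

Applying the derived motion to the very candidate it was computed from reproduces the coded node's geometry exactly (up to floating point), which is the consistency property the update relies on.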
- the first candidate node includes a first reference node and a second reference node; the second candidate node includes a third reference node and a fourth reference node; the first reference node and the third reference node belong to a first reference frame; the second reference node and the fourth reference node belong to a second reference frame.
- the second determination unit 2701 is further configured to determine the first reference node having the same radar index as the first encoded node in the first reference frame; determine at least one third reference node in the first reference frame based on the first reference node; determine the second reference node having the same radar index as the first encoded node in the second reference frame; and determine at least one fourth reference node in the second reference frame based on the second reference node.
- the second angle parameter value of the first reference node is less than or equal to and closest to the first angle parameter value of the first encoded node.
- the fourth angle parameter value of the at least one third reference node is greater than the second angle parameter value of the first reference node, and the at least one third reference node has the same radar index as the first reference node.
- the second angle parameter value of the second reference node is less than or equal to and closest to the first angle parameter value of the first encoded node.
- the fifth angle parameter value of the at least one fourth reference node is greater than the second angle parameter value of the second reference node, and the at least one fourth reference node has the same radar index as the second reference node.
- the second determination unit 2701 is further configured to determine, in the first reference frame, at least one third reference node that is encoded sequentially after the first reference node according to the order of the prediction tree.
- the second determination unit 2701 is further configured to determine, in the second reference frame, at least one fourth reference node that is encoded sequentially after the second reference node according to the order of the prediction tree.
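The selection rules above (same radar index; azimuth less than or equal to and closest to the coded node's azimuth; followers with strictly greater azimuth) can be sketched as follows. Nodes are modeled as bare `(laser_index, azimuth)` pairs, an assumption for illustration; the real codec carries full geometric parameters and walks the prediction tree instead of sorting.

```python
def pick_first_reference(ref_frame, laser_idx, phi):
    # among nodes with the same radar (laser) index, pick the one whose
    # azimuth is <= phi and closest to phi
    same_laser = [n for n in ref_frame if n[0] == laser_idx and n[1] <= phi]
    return max(same_laser, key=lambda n: n[1]) if same_laser else None

def pick_following_references(ref_frame, first_ref, count=2):
    # further candidates: same laser index, azimuth strictly greater than
    # the first reference, taken in increasing azimuth order
    later = [n for n in ref_frame
             if n[0] == first_ref[0] and n[1] > first_ref[1]]
    return sorted(later, key=lambda n: n[1])[:count]

frame = [(0, 0.10), (0, 0.20), (0, 0.30), (1, 0.15)]
first = pick_first_reference(frame, 0, 0.22)   # closest azimuth <= 0.22
nxt = pick_following_references(frame, first)  # followers on same laser
```

Note that the node on laser index 1 is never considered, even though its azimuth is closer to the target: the radar-index match is applied first.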
- the second local motion processing unit 2702 is further configured to determine the local motion information of each third candidate node based on the third geometric parameter of the first encoded node and the fourth geometric parameter of the first reference node; based on the local motion information of each third candidate node, update the tenth geometric parameter of the corresponding third candidate node, determine the updated tenth geometric parameter of the at least one third candidate node, and thus determine the at least one updated third candidate node; use the candidate nodes other than the at least one third candidate node among the updated at least one third candidate node and the at least one third reference node as the at least one updated third reference node; use the updated at least one third reference node and the at least one fourth reference node as the at least one updated second candidate node.
- the at least one third reference node, the updated at least one fourth reference node, the second reference node, and the first reference node have a preset order.
- the second determining unit 2701 is further configured to respectively calculate cost values for the first candidate node and each of the updated second candidate nodes to obtain multiple rate-distortion cost results; determine the candidate node corresponding to the minimum rate-distortion cost among the multiple rate-distortion cost results as the predicted node;
- the prediction unit 2703 is further configured to determine a geometric prediction value of the node to be encoded based on the prediction node.
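The candidate selection by minimum rate-distortion cost can be sketched as below. The cost model (squared geometric error plus a lambda-weighted rate, with illustrative `lam` and `bits` values) is an assumption; the patent does not specify the encoder's actual cost function.

```python
def rd_cost(candidate, target, bits, lam=0.1):
    # toy rate-distortion cost: distortion + lambda * rate
    dist = sum((c - t) ** 2 for c, t in zip(candidate, target))
    return dist + lam * bits

def select_prediction_node(candidates, target):
    # candidates: list of (geometry, index_bits); pick the one whose
    # rate-distortion cost is minimal among all results
    costs = [rd_cost(geom, target, bits) for geom, bits in candidates]
    best = min(range(len(costs)), key=costs.__getitem__)
    return best, candidates[best][0]

cands = [((1.0, 2.0, 3.0), 1), ((1.1, 2.0, 3.0), 2), ((0.5, 2.5, 3.5), 3)]
idx, pred = select_prediction_node(cands, (1.05, 2.0, 3.0))
```

Here the first and second candidates have the same distortion, so the cheaper index (fewer bits) wins, which is exactly the trade-off the rate term encodes.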
- the encoder 270 may further include an encoding unit 2704; wherein:
- the second determining unit 2701 is further configured to determine a predicted node index value corresponding to the node to be encoded in a preset order;
- the encoding unit 2704 is configured to encode the prediction node index value and write the obtained encoding bits into a bit stream.
- the second determining unit 2701 is further configured to determine, according to the geometric prediction value of the node to be encoded, the geometric prediction residual value of the node to be encoded;
- the encoding unit 2704 is further configured to encode the geometric prediction residual value of the node to be encoded, and write the obtained encoding bits into the bit stream.
- the second determination unit 2701 is further configured to determine the initial residual value of the node to be encoded based on the geometric prediction value of the node to be encoded; and quantize the initial residual value of the node to be encoded according to the quantization parameter to obtain the geometric prediction residual value of the node to be encoded.
- the second determination unit 2701 is further configured to determine the original value of the node to be encoded; and determine the initial residual value of the node to be encoded by performing a subtraction operation between the original value of the node to be encoded and the geometric prediction value of the node to be encoded.
- the encoding unit 2704 is further configured to encode the quantization parameter and write the obtained encoded bits into a bit stream.
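The residual and quantization steps above can be sketched as follows. The uniform scalar quantizer and the fixed step size `qp_step` are assumptions for illustration; the actual mapping from the quantization parameter to a step size is codec-specific.

```python
def geometric_residual(original, predicted):
    # initial residual: original value minus geometric prediction value
    return tuple(o - p for o, p in zip(original, predicted))

def quantize(residual, qp_step):
    # uniform scalar quantization of the initial residual (sketch)
    return tuple(round(r / qp_step) for r in residual)

def dequantize(q, qp_step):
    # decoder side: scale back before adding to the prediction
    return tuple(v * qp_step for v in q)

res = geometric_residual((10.0, 4.0, 7.0), (9.0, 4.5, 7.0))
q = quantize(res, 0.5)      # geometric prediction residual value
rec = dequantize(q, 0.5)    # reconstructed residual
```

With a residual that is a multiple of the step size, reconstruction is lossless; in general the quantizer introduces an error of at most half a step per component.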
- the second determination unit 2701 is further configured to determine a prediction mode of the node to be encoded, and generate first identification information based on the prediction mode; the first identification information indicates whether the node to be encoded uses an inter-frame prediction mode; when the prediction mode is an inter-frame prediction mode, the step of determining the first encoded node preceding the node to be encoded in the current frame is performed.
- the second determination unit 2701 is further configured to determine that the value of the first identification information is a first value if the first identification information indicates that the node to be encoded does not use the inter-frame prediction mode; if the first identification information indicates that the node to be encoded uses the inter-frame prediction mode, determine that the value of the first identification information is a second value.
- the encoding unit 2704 is further configured to encode the first identification information and write the obtained encoded bits into the bit stream.
- the second determination unit 2701 is further configured to determine whether the local motion processing mode is enabled and generate second identification information; the second identification information indicates whether the local motion processing mode is enabled for the node to be encoded; when it is determined that the local motion processing mode is enabled for the node to be encoded, the step of determining the first encoded node preceding the node to be encoded in the current frame is performed.
- the second determination unit 2701 is further configured to determine that the value of the second identification information is a first value if the second identification information indicates that the node to be encoded does not enable the local motion processing mode; if the second identification information indicates that the node to be encoded enables the local motion processing mode, determine that the value of the second identification information is a second value.
- the encoding unit 2704 is further configured to encode the second identification information and write the obtained encoded bits into a bit stream.
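The first/second identification information above reduces to a binary flag per node. The concrete encodings 0 and 1 for the "first value" and "second value" are assumptions; the patent leaves the actual values open.

```python
def flag_value(enabled, first_value=0, second_value=1):
    # identification information sketch: first value when the mode is
    # not used, second value when it is (0/1 are assumed encodings)
    return second_value if enabled else first_value

inter_flag = flag_value(True)    # node uses inter-frame prediction
motion_flag = flag_value(False)  # local motion processing disabled
```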
- the second determining unit 2701 is further configured to determine a previous encoded node of the node to be encoded based on the encoding order of the prediction tree, and use the previous encoded node as the first encoded node.
- the first coordinate transformation process represents converting radar coordinates into Cartesian coordinates; and the second coordinate transformation process represents converting Cartesian coordinates into radar coordinates.
- a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., and of course, it may be a module, or it may be non-modular.
- the components in the present embodiment may be integrated into a processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit may be implemented in the form of hardware or in the form of a software functional module.
- if the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solution of this embodiment, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product.
- the computer software product is stored in a storage medium, including several instructions for a computer device (which can be a personal computer, server, or network device, etc.) or a processor to perform all or part of the steps of the method described in this embodiment.
- the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
- an embodiment of the present application provides a computer-readable storage medium, which is applied to the encoder 270.
- the computer-readable storage medium stores a computer program, and when the computer program is executed by the second processor, the method described in any one of the aforementioned embodiments is implemented.
- the encoder 270 may include: a second communication interface 2801, a second memory 2802 and a second processor 2803; each component is coupled together through a second bus system 2804. It can be understood that the second bus system 2804 is used to realize the connection and communication between these components.
- the second bus system 2804 also includes a power bus, a control bus and a status signal bus. However, for the sake of clarity, various buses are marked as the second bus system 2804 in Figure 28. Among them,
- the second communication interface 2801 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
- the second memory 2802 is used to store a computer program that can be run on the second processor 2803;
- the second processor 2803 is configured to, when running the computer program, execute:
- determine a geometric prediction value of the node to be encoded.
- the second processor 2803 is further configured to execute any one of the methods described in the foregoing embodiments when running the computer program.
- This embodiment provides an encoder, in which the candidate nodes for inter-frame prediction are mainly optimized. Specifically, local motion processing is performed on the geometric parameters of the candidate nodes, so that the inter-frame prediction can better predict the coding nodes, thereby improving the accuracy of the inter-frame prediction, improving the coding efficiency of the geometric information, and further improving the coding performance of the point cloud.
- a schematic diagram of the composition structure of a coding and decoding system provided in an embodiment of the present application is shown.
- a coding and decoding system 290 may include an encoder 2901 and a decoder 2902 .
- the encoder 2901 may be the encoder described in any one of the aforementioned embodiments
- the decoder 2902 may be the decoder described in any one of the aforementioned embodiments.
- the code stream is first parsed to determine the predicted node index value corresponding to the node to be decoded; then, the first decoded node before the node to be decoded in the current frame is determined; then, the predicted node is determined based on the predicted node index value and the first decoded node; finally, based on the first decoded node, the first geometric parameter of the predicted node is subjected to local motion processing to determine the second geometric parameter of the predicted node; based on the first geometric parameter or the second geometric parameter, the geometric prediction value of the node to be decoded is determined.
- the first encoded node before the node to be encoded in the current frame is first determined; then, a first candidate node having at least one geometric parameter identical to the first encoded node in the reference frame is determined, and at least one second candidate node is determined in the reference frame based on the first candidate node; then, the geometric parameters of at least one candidate node in the at least one second candidate node are subjected to local motion processing to obtain at least one updated second candidate node; finally, based on the first candidate node and the at least one updated second candidate node, the geometric prediction value of the node to be encoded is determined.
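The encode/decode flow summarized in the two paragraphs above can be condensed into a toy end-to-end example. The "closest candidate" rule stands in for the rate-distortion decision, and the plain index plus residual stand in for the entropy-coded bit stream; both are illustrative simplifications, not the codec's actual signaling.

```python
def encode_node(original, candidates):
    # pick the candidate closest to the original geometry (stand-in for
    # the RD decision), then signal its index and the residual
    dists = [sum((o - c) ** 2 for o, c in zip(original, cand))
             for cand in candidates]
    idx = min(range(len(dists)), key=dists.__getitem__)
    residual = tuple(o - c for o, c in zip(original, candidates[idx]))
    return idx, residual

def decode_node(idx, residual, candidates):
    # the decoder mirrors the candidate list, selects the signaled
    # prediction node, and adds back the residual
    pred = candidates[idx]
    return tuple(p + r for p, r in zip(pred, residual))

cands = [(1.0, 0.0, 0.0), (1.2, 0.1, 0.0)]
idx, res = encode_node((1.25, 0.1, 0.05), cands)
rec = decode_node(idx, res, cands)
```

As long as encoder and decoder derive the same candidate list, reconstruction is exact (up to floating point), which is why the candidate derivation rules in the claims are specified identically on both sides.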
- the technical solution of the present application is mainly aimed at optimizing the candidate nodes used for inter-frame prediction.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Embodiments of the present application disclose an encoding method, a decoding method, a code stream, an encoder, a decoder and a storage medium. The decoding method comprises: parsing a code stream and determining a prediction node index value corresponding to a node to be decoded (S101); determining a first decoded node preceding the node to be decoded in the current frame (S102); determining a prediction node according to the prediction node index value and the first decoded node (S103); on the basis of the first decoded node, performing local motion processing on a first geometric parameter of the prediction node and determining a second geometric parameter of the prediction node (S104); and determining a geometric prediction value of the node to be decoded according to the first geometric parameter or the second geometric parameter (S105). In this way, the accuracy of inter-frame prediction can be improved, so that the encoding efficiency of geometric information is improved, and the encoding and decoding performance of a point cloud can thus be improved.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2023/087303 WO2024212043A1 (fr) | 2023-04-10 | 2023-04-10 | Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support de stockage |
| CN202380095922.2A CN120958806A (zh) | 2023-04-10 | 2023-04-10 | 编解码方法、码流、编码器、解码器以及存储介质 |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2023/087303 WO2024212043A1 (fr) | 2023-04-10 | 2023-04-10 | Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support de stockage |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024212043A1 true WO2024212043A1 (fr) | 2024-10-17 |
Family
ID=93058635
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2023/087303 Pending WO2024212043A1 (fr) | 2023-04-10 | 2023-04-10 | Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support de stockage |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN120958806A (fr) |
| WO (1) | WO2024212043A1 (fr) |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112565764A (zh) * | 2020-12-03 | 2021-03-26 | 西安电子科技大学 | 一种点云几何信息帧间编码及解码方法 |
| US20220108484A1 (en) * | 2020-10-07 | 2022-04-07 | Qualcomm Incorporated | Predictive geometry coding in g-pcc |
| CN114616592A (zh) * | 2019-10-31 | 2022-06-10 | 黑莓有限公司 | 用于云压缩的方位角先验和树表示的方法和系统 |
| WO2022147100A1 (fr) * | 2020-12-29 | 2022-07-07 | Qualcomm Incorporated | Codage d'inter-prédiction pour compression de nuage de points géométrique |
| CN115412717A (zh) * | 2021-05-26 | 2022-11-29 | 荣耀终端有限公司 | 一种点云方位角信息的预测编解码方法及装置 |
| CN115474051A (zh) * | 2021-06-11 | 2022-12-13 | 维沃移动通信有限公司 | 点云编码方法、点云解码方法及终端 |
| CN115720272A (zh) * | 2021-08-24 | 2023-02-28 | 西安电子科技大学 | 点云预测、点云编码、点云解码方法及设备 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN120958806A (zh) | 2025-11-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2024145904A1 (fr) | Procédé de codage, procédé de décodage, flux de code, codeur, décodeur, et support de stockage | |
| WO2024212043A1 (fr) | Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support de stockage | |
| WO2024212038A1 (fr) | Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support d'enregistrement | |
| WO2024212045A1 (fr) | Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support de stockage | |
| WO2025076663A1 (fr) | Procédé de codage, procédé de décodage, codeur, décodeur, et support de stockage | |
| WO2024212042A1 (fr) | Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support d'enregistrement | |
| WO2025015523A1 (fr) | Procédé de codage, procédé de décodage, flux de bits, codeur, décodeur et support de stockage | |
| WO2024216477A1 (fr) | Procédés de codage/décodage, codeur, décodeur, flux de code et support de stockage | |
| WO2025007360A1 (fr) | Procédé de codage, procédé de décodage, flux binaire, codeur, décodeur et support d'enregistrement | |
| WO2025007355A9 (fr) | Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support de stockage | |
| WO2025010601A9 (fr) | Procédé de codage, procédé de décodage, codeurs, décodeurs, flux de code et support de stockage | |
| WO2025076672A1 (fr) | Procédé de codage, procédé de décodage, codeur, décodeur, flux de code, et support de stockage | |
| WO2024234132A9 (fr) | Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support d'enregistrement | |
| WO2024216476A1 (fr) | Procédé de codage/décodage, codeur, décodeur, flux de code, et support de stockage | |
| WO2025010600A9 (fr) | Procédé de codage, procédé de décodage, flux de code, codeur, décodeur et support de stockage | |
| WO2025010604A1 (fr) | Procédé de codage de nuage de points, procédé de décodage de nuage de points, décodeur, flux de code et support d'enregistrement | |
| WO2024207481A1 (fr) | Procédé de codage, procédé de décodage, codeur, décodeur, support de stockage et de flux binaire | |
| WO2024216479A9 (fr) | Procédé de codage et de décodage, flux de code, codeur, décodeur et support de stockage | |
| WO2025145433A1 (fr) | Procédé de codage de nuage de points, procédé de décodage de nuage de points, codec, flux de code et support de stockage | |
| WO2024207456A1 (fr) | Procédé de codage et de décodage, codeur, décodeur, flux de code et support de stockage | |
| WO2025076668A9 (fr) | Procédé de codage, procédé de décodage, codeur, décodeur et support de stockage | |
| WO2024148598A1 (fr) | Procédé de codage, procédé de décodage, codeur, décodeur et support de stockage | |
| WO2025007349A1 (fr) | Procédés de codage et de décodage, flux binaire, codeur, décodeur et support de stockage | |
| WO2024145910A1 (fr) | Procédé de codage, procédé de décodage, flux de bits, codeur, décodeur et support de stockage | |
| WO2025217849A1 (fr) | Procédé de codage/décodage, codeur de nuage de points, décodeur de nuage de points et support de stockage |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23932343 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |