WO2025015465A1 - Encoding and decoding method, decoder, encoder and computer-readable storage medium - Google Patents
Encoding and decoding method, decoder, encoder and computer-readable storage medium
- Publication number
- WO2025015465A1 (application no. PCT/CN2023/107558)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point cloud
- sequence
- encoding
- information corresponding
- depth map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/285—Analysis of motion using a sequence of stereo image pairs
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Definitions
- the present application relates to point cloud compression coding and decoding technology, and in particular to a coding and decoding method, a decoder, an encoder and a computer-readable storage medium.
- A point cloud is a form of three-dimensional data consisting of a set of vectors in a three-dimensional coordinate system. These vectors are usually expressed as (x, y, z) coordinates and can also carry information such as color, material, and reflection intensity. Point clouds are generally used to represent three-dimensional objects or scenes.
- Laser radar (LiDAR, light detection and ranging) is a remote sensing technology that uses laser pulses to generate 3D point clouds, which can provide accurate structural information about the scene.
- the laser radar point cloud sequence is obtained by continuous scanning at the sampling frequency of the sensor.
- point cloud data has become one of the main data forms for representing three-dimensional scenes due to its concise expression of three-dimensional space. Autonomous driving, robot navigation, surveying and mapping modeling, etc. all rely on representations of laser radar point clouds.
- the amount of point cloud data is huge, and directly storing point cloud data will consume a lot of memory and is not conducive to transmission.
- the representative point cloud compression algorithms are two technical solutions developed by the Moving Picture Experts Group (MPEG), namely Video-based Point Cloud Compression (V-PCC) and Geometry-based Point Cloud Compression (G-PCC).
- the compression in G-PCC is mainly achieved through the octree model and/or the triangle surface model.
- although this method supports LiDAR point cloud compression, it only supports single-frame point cloud compression, its decoding is slow, and the quality of the reconstructed point cloud is poor, which reduces the efficiency and quality of point cloud encoding and decoding and further degrades point cloud codec performance.
- the embodiments of the present application provide a coding and decoding method, a decoder, an encoder and a computer-readable storage medium, which can improve the efficiency of point cloud coding and decoding, thereby improving the coding and decoding performance.
- the present application provides a decoding method, including:
- the point cloud representation network parameters are determined by the encoder updating the network parameters of the initial network based on the time information and spatial information corresponding to the point cloud sequence and the depth map sequence corresponding to the point cloud sequence;
- the time information represents the acquisition time corresponding to the point cloud sequence
- the spatial information represents the spatial motion information of the acquisition device of the point cloud sequence
- At least one reconstructed point cloud corresponding to the point cloud sequence is determined based on the at least one reconstructed depth image.
- the present application provides an encoding method, including:
- the time information represents the acquisition time corresponding to the point cloud sequence
- the spatial information represents the spatial motion information of the acquisition device of the point cloud sequence
- the point cloud representation network parameters are encoded to determine encoding information corresponding to the point cloud sequence.
- the present application provides a decoder, including:
- the decoding part is configured to decode the encoded information corresponding to the point cloud sequence in the code stream to determine the point cloud representation network parameters; the point cloud representation network parameters are determined by the encoder updating the network parameters of the initial network based on the time information and spatial information corresponding to the point cloud sequence and the depth map sequence corresponding to the point cloud sequence;
- a network parameter determination part configured to determine a point cloud representation network based on the point cloud representation network parameters
- An image reconstruction part is configured to use the point cloud representation network to determine at least one reconstructed depth image according to the time information and spatial information corresponding to the point cloud sequence;
- the point cloud reconstruction part is configured to determine at least one reconstructed point cloud corresponding to the point cloud sequence based on the at least one reconstructed depth image.
- the present application provides an encoder, including:
- a mapping part is configured to perform coordinate transformation mapping on the point cloud sequence to determine a depth map sequence corresponding to the point cloud sequence
- the spatiotemporal coding part is configured to perform spatiotemporal position coding on the time information and spatial information corresponding to the point cloud sequence, and determine the spatiotemporal coding vector sequence corresponding to the depth map sequence;
- the time information represents the acquisition time corresponding to the point cloud sequence;
- the spatial information represents the spatial motion information of the acquisition device of the point cloud sequence;
- a parameter updating part is configured to update the network parameters of the initial network using the depth map sequence and the spatiotemporal coding vector sequence to determine the point cloud representation network parameters;
- the encoding part is configured to encode the point cloud representation network parameters and determine the encoding information corresponding to the point cloud sequence.
- An embodiment of the present application provides a code stream, which is generated by bit encoding according to coding information; wherein the coding information at least includes: coding information corresponding to a point cloud sequence; the coding information corresponding to the point cloud sequence is obtained by encoding point cloud representation network parameters; the point cloud representation network parameters are determined by updating the network parameters of the initial network using a depth map sequence corresponding to the point cloud sequence, and time information and spatial information corresponding to the point cloud sequence.
- the present application provides a decoder, including:
- a first memory configured to store executable instructions
- the first processor is configured to implement any of the decoding methods described above when executing the executable instructions stored in the first memory.
- the present application provides an encoder, including:
- a second memory configured to store executable instructions
- the second processor is configured to implement any of the encoding methods described above when executing the executable instructions stored in the second memory.
- An embodiment of the present application provides a computer-readable storage medium storing executable instructions for causing a first processor to execute to implement the above-mentioned decoding method, or for causing a second processor to execute to implement the above-mentioned encoding method.
- An embodiment of the present application provides a computer program product, including a computer program or instructions.
- the decoding method provided by the embodiment of the present application is implemented; or, when the computer program or instructions are executed by a second processor, the encoding method provided by the embodiment of the present application is implemented.
- the embodiment of the present application provides a coding and decoding method, a decoder, an encoder and a computer-readable storage medium.
- the decoder determines the point cloud representation network according to the point cloud representation network parameters transmitted by the encoder, uses the time information and spatial information corresponding to the point cloud representation network and at least one point cloud to reconstruct at least one depth image, and then restores at least one reconstructed point cloud based on the at least one depth image, thereby realizing the decoding and reconstruction of the depth image represented by the implicit neural network parameters using a neural network based on spatiotemporal position coding, thereby being able to give full play to the advantages of the neural network, greatly reducing the complexity of the operation, improving the decoding efficiency, and improving the reconstruction quality of the decoding.
- the high-frequency components of the point cloud representation network are obtained according to the time information and spatial information corresponding to at least one point cloud, which can improve the representation effect of the neural network, further improve the reconstruction quality, and thus improve the decoding performance.
- FIG1 is a schematic diagram of an optional flow chart of an encoding method provided in an embodiment of the present application.
- FIG2 is a schematic diagram of a process of mapping a point cloud into a depth map provided by an embodiment of the present application
- FIG3 is a schematic diagram of an optional flow chart of an encoding method provided in an embodiment of the present application.
- FIG4 is a schematic diagram of an optional process of training an initial network and encoding network parameters provided in an embodiment of the present application
- FIG5 is a schematic diagram of an optional flow chart of a decoding method provided in an embodiment of the present application.
- FIG6 is a schematic diagram of an optional flow chart of a decoding method provided in an embodiment of the present application.
- FIG7 is a schematic diagram of an optional process for decoding using a point cloud representation network provided in an embodiment of the present application.
- FIG8 is a schematic diagram of an optional process of encoding and decoding a noise point cloud to achieve a denoising effect according to an embodiment of the present application
- FIG9 is a schematic diagram of an optional process of encoding and decoding reconstruction and point cloud segmentation provided in an embodiment of the present application.
- FIG10A is a first diagram comparing the effects of the present application and the current G-PCC on semantic segmentation provided by an embodiment of the present application;
- FIG10B is a second comparison diagram of the effects of the present application and the current G-PCC on semantic segmentation provided by an embodiment of the present application;
- FIG11 is a schematic diagram of an optional structure of a decoder provided in an embodiment of the present application.
- FIG12 is a schematic diagram of an optional structure of an encoder provided in an embodiment of the present application.
- FIG13 is a schematic diagram of an optional structure of a decoder provided in an embodiment of the present application.
- FIG. 14 is a schematic diagram of an optional structure of an encoder provided in an embodiment of the present application.
- the terms "first/second/third" involved are merely used to distinguish similar objects and do not represent a specific ordering of the objects. It can be understood that "first/second/third" can be interchanged in a specific order or sequence where permitted, so that the embodiments of the present application described here can be implemented in an order other than that illustrated or described here.
- Point cloud compression algorithms include: Video-based Point Cloud Compression (V-PCC) and Geometry-based Point Cloud Compression (G-PCC).
- point cloud compression in G-PCC is mainly implemented based on octree model and/or triangle surface model.
- V-PCC is mainly based on 3D to 2D projection and video compression.
- although the current G-PCC compression method supports lidar point cloud compression, it only supports single-frame point cloud compression and its decoding is slow.
- although implicit neural networks have great potential for both 3D and 2D signals, point clouds contain many points and are irregularly distributed, so directly feeding point clouds into implicit neural networks requires a long rendering time and yields very limited results. Therefore, neural networks have not yet been applied to point cloud compression.
- the embodiments of the present application provide a coding and decoding method, a decoder, an encoder, and a computer-readable storage medium.
- implicit neural networks can play a better role.
- the complexity of operations can be greatly reduced and the quality of compressed reconstruction can be improved.
- the redundancy of the lidar point cloud sequence in the time domain can be effectively extracted, which is more conducive to the modeling and recovery of image signals, thereby improving coding and decoding efficiency and point cloud reconstruction quality, thereby improving coding and decoding performance.
- the following describes the encoding method applied to the encoder provided in the embodiments of the present application.
- Figure 1 is an optional flow chart of the encoding method provided in an embodiment of the present application, which will be explained in conjunction with the steps shown in Figure 1.
- the point cloud sequence includes at least one (frame) point cloud; the encoder performs coordinate transformation mapping on each point cloud in the at least one point cloud, maps each point cloud into a depth map, and thereby determines a depth map sequence corresponding to the point cloud sequence.
- the coordinate transformation mapping process can be shown in Figure 2.
- the three-dimensional point cloud can be regarded as a representation of the sensor data provided by the lidar scanner.
- Each point in the point cloud corresponds to the measurement value of a single lidar beam, and each point can be described by a three-dimensional space coordinate (x, y, z) and its attribute information.
- the depth map can be regarded as another representation of the lidar scan, which saves points in the three-dimensional space as points in a 360-degree two-dimensional image of the scanned environment.
- the row dimension represents the elevation angle of the laser beam
- the column dimension represents the azimuth angle.
- With each incremental rotation around the z-axis, the lidar sensor returns many distance and intensity measurements, which are then stored in the corresponding cells of the depth image. As shown in Figure 2, a point p in space is mapped to a depth image cell by its corresponding azimuth angle and inclination angle, and the distance range_p of point p is stored in that cell. In this way, each point in three-dimensional space is mapped by this coordinate transformation to a point in the depth map using the depth map projection method, yielding the depth map corresponding to the point cloud.
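- As an illustration of this projection, the following is a minimal Python sketch of a spherical (range-image) mapping; the function name, image resolution, and vertical field-of-view values are illustrative assumptions rather than values from the embodiment.

```python
import numpy as np

def point_cloud_to_depth_map(points, h=64, w=1024, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud into an (h, w) depth map.

    Rows index the elevation (inclination) angle of the beam,
    columns index the azimuth angle, and each cell stores the range.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)                  # distance of each point

    azimuth = np.arctan2(y, x)                            # in [-pi, pi]
    elevation = np.arcsin(z / np.clip(rng, 1e-8, None))   # inclination angle

    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    fov = fov_up - fov_down

    # Normalize angles to [0, 1] and scale to pixel coordinates.
    col = ((1.0 - (azimuth + np.pi) / (2.0 * np.pi)) * w).astype(np.int32) % w
    row = np.clip((1.0 - (elevation - fov_down) / fov) * h, 0, h - 1).astype(np.int32)

    depth_map = np.zeros((h, w), dtype=np.float32)
    # Keep the nearest return when several points fall into the same cell.
    order = np.argsort(-rng)
    depth_map[row[order], col[order]] = rng[order]
    return depth_map
```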
- the embodiment of the present application converts the point cloud sequence into a two-dimensional depth image sequence, making it more efficient and compact in spatial arrangement.
- the two-dimensional expression can be more conveniently characterized using an implicit neural network. It can be understood that converting the point cloud into a depth map and then performing an implicit neural representation of the depth map allows the implicit neural network to give full play to its advantages. Compared with directly using implicit neural representations of three-dimensional point clouds, the complexity of the calculation can be greatly reduced to achieve better reconstruction quality.
- S102 Perform spatiotemporal position encoding on the time information and space information corresponding to the point cloud sequence to determine a spatiotemporal encoding vector sequence corresponding to the depth map sequence.
- the time information represents the acquisition time corresponding to the point cloud sequence
- the spatial information represents the spatial motion information of the acquisition device of the point cloud sequence.
- Each point cloud in the point cloud sequence is acquired by an acquisition device, such as a device equipped with a laser radar sensor, moving in a certain space.
- each point cloud in the point cloud sequence corresponds to the acquisition time when the acquisition device acquires the point cloud, and the spatial motion information of the acquisition device.
- the acquisition time is used as the time information corresponding to each point cloud
- the spatial motion information of the acquisition device, such as the spatial position and posture of the acquisition device when acquiring each point cloud, is used as the spatial information corresponding to each point cloud.
- the time information and spatial information corresponding to the point cloud sequence include: the time information corresponding to each point cloud and the spatial information corresponding to each point cloud.
- the encoder performs position encoding on the time information and spatial information corresponding to each point cloud in the point cloud sequence, and encodes the time information and spatial information corresponding to each point cloud into a high-dimensional vector as the space-time coding vector corresponding to each point cloud, that is, the space-time coding vector corresponding to each depth map, thereby determining the space-time coding vector sequence corresponding to the depth map sequence.
- the encoder can perform spatiotemporal position encoding on the timestamp of the acquisition time corresponding to each depth map, the spatial translation position and rotation position of the sensor on the acquisition device, and obtain the corresponding position encoding high-dimensional vector as the spatiotemporal encoding vector corresponding to each depth map.
- in this way, the high-frequency components of the input to the neural network can be enhanced, which helps the neural network model the spatiotemporal relationships of the lidar point cloud sequence and improves the reconstruction quality of the point cloud.
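- The exact form of this position encoding is not specified above; a common choice for producing such high-dimensional, high-frequency vectors in implicit neural representations is a sinusoidal (Fourier-feature) encoding. A minimal sketch under that assumption:

```python
import numpy as np

def positional_encoding(value, num_freqs=10):
    """Map a normalized scalar (e.g. a timestamp in [0, 1]) to a
    2 * num_freqs dimensional vector of sin/cos features at
    exponentially increasing frequencies."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi
    angles = freqs * value
    return np.concatenate([np.sin(angles), np.cos(angles)])
```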
- the encoder uses a spatiotemporal coding vector sequence and a depth map sequence to train the initial network, and iteratively updates the network parameters of the initial network during the training process until the training goal is reached, and determines the most recently updated network parameters as the point cloud representation network parameters.
- the encoder reconstructs each depth map in the depth map sequence through the initial network according to the spatiotemporal coding vector corresponding to each depth map in the spatiotemporal coding vector sequence, and determines the reconstructed depth image corresponding to each depth map; based on the reconstructed depth image and each depth map, the network parameters of the initial network are iteratively updated until the preset update conditions are met, and the point cloud representation network parameters are determined based on the currently updated network parameters.
- that is to say, in each training iteration, the initial network restores and reconstructs each depth map in the depth map sequence according to its corresponding spatiotemporal coding vector to determine the corresponding reconstructed depth image; the encoder updates the network parameters of the current iteration based on the difference between the reconstructed depth image and each depth map; if the preset update conditions are not met, the next iteration is performed with the updated network parameters; once the preset update conditions are met, that is, the training goal is reached, the network parameters updated in the current iteration are used as the point cloud representation network parameters.
- the network parameters may include the weights of the initial network; the preset update conditions may include: the difference between the reconstructed depth image and the corresponding depth map meets the overfitting condition, or the number of updates reaches a preset threshold, etc.
- the specific selection is made according to the actual situation, and the embodiments of the present application are not limited.
- the initial network includes: a multi-layer perceptron (MLP) and multiple cascaded convolutional (CNN) layers.
- the multi-layer perceptron is used to express the initial depth map signal
- the multi-layer cascaded convolution layer is used to perform convolution recovery based on the initial depth map signal to determine the reconstructed depth image.
- the embodiment of the present application utilizes the depth map sequence and the spatiotemporal coding vector sequence to update the network parameters of the initial network, which can achieve the representation of the depth image through the implicit neural network and improve the representation efficiency of the LiDAR point cloud sequence.
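- A minimal sketch of such an initial network is given below using PyTorch: a multi-layer perceptron maps the spatiotemporal encoding vector to a low-resolution initial depth-map signal, and cascaded convolutional layers recover the full-resolution reconstructed depth image step by step. All layer sizes, the input dimension, and the upsampling factors are illustrative assumptions, not values from the embodiment.

```python
import torch
import torch.nn as nn

class DepthMapNet(nn.Module):
    """MLP + cascaded convolutional layers mapping a spatiotemporal
    encoding vector to a reconstructed depth image (illustrative sizes)."""

    def __init__(self, code_dim=140, base_h=8, base_w=128, channels=32, num_up=3):
        super().__init__()
        self.base_h, self.base_w, self.channels = base_h, base_w, channels
        # MLP expresses the low-resolution initial depth-map signal.
        self.mlp = nn.Sequential(
            nn.Linear(code_dim, 512), nn.GELU(),
            nn.Linear(512, channels * base_h * base_w), nn.GELU(),
        )
        # Cascaded conv layers perform step-by-step convolutional recovery.
        ups = []
        for _ in range(num_up):
            ups += [
                nn.Upsample(scale_factor=2, mode="nearest"),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.GELU(),
            ]
        ups.append(nn.Conv2d(channels, 1, kernel_size=3, padding=1))
        self.decoder = nn.Sequential(*ups)

    def forward(self, code):                       # code: (B, code_dim)
        feat = self.mlp(code)
        feat = feat.view(-1, self.channels, self.base_h, self.base_w)
        return self.decoder(feat)                  # (B, 1, base_h * 8, base_w * 8)
```

- Overfitting training would then repeatedly feed each spatiotemporal coding vector through the network, compute a reconstruction loss (for example an L2 loss) against the corresponding ground-truth depth map, and update the weights until the preset update condition is met.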
- S104 Encode the point cloud representation network parameters to determine the encoding information corresponding to the point cloud sequence.
- the encoder performs lossless encoding on the point cloud representation network parameters, determines the encoding information corresponding to the point cloud sequence, and generates a corresponding code stream to send to the decoder.
- the encoder may perform Huffman encoding on the point cloud representation network parameters to obtain encoding information corresponding to the point cloud sequence.
- the encoder can generate a bitstream based on the network parameter encoding information corresponding to the point cloud sequence; and send the time information corresponding to the point cloud sequence, the spatial information corresponding to the point cloud sequence, and the bitstream to the decoder.
- the bitstream is generated from the network parameter encoding information corresponding to the point cloud sequence; the time information corresponding to the point cloud sequence and the spatial information corresponding to the point cloud sequence do not occupy additional space in the bitstream.
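- As a hedged illustration of the lossless entropy-coding step, the sketch below builds a Huffman code table over (already quantized) integer weight symbols and concatenates the code words into a bitstring; it is a generic Huffman coder, not the specific coder used by the embodiment.

```python
import heapq
from collections import Counter

def huffman_table(symbols):
    """Build a prefix-code table {symbol: bitstring} for a list of symbols."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate case: one distinct symbol
        return {next(iter(freq)): "0"}
    heap = [[n, i, [s, ""]] for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]          # prefix left-branch codes with 0
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]          # prefix right-branch codes with 1
        heapq.heappush(heap, [lo[0] + hi[0], uid] + lo[2:] + hi[2:])
        uid += 1
    return dict(heap[0][2:])

def encode(symbols, table):
    """Concatenate the code words of all symbols into one bitstring."""
    return "".join(table[s] for s in symbols)
```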
- the encoder converts the point cloud into a depth map, making it more efficient and compact in spatial arrangement; through spatiotemporal position encoding, the depth image is represented by an implicit neural network, which further improves the representation efficiency of the point cloud sequence; thereby reducing the computational complexity and improving the encoding efficiency.
- the encoder of the embodiment of the present application not only performs position encoding on the time information, but also combines the spatial posture information of the sensor (such as translation and rotation) to splice the encoding vectors, and together as the input of the implicit neural network, a better representation effect can be obtained, thereby improving the encoding quality, and then improving the encoding performance.
- S102 may be implemented by executing the process of S1021 - S1023 as follows:
- S1022 Perform position encoding on the spatial information corresponding to each point cloud, and determine the spatial encoding vector corresponding to each depth map.
- the encoder performs position encoding on the time information and the spatial information corresponding to each depth map respectively, and determines the time encoding vector and the spatial encoding vector corresponding to each depth map.
- the spatial information includes translation information and rotation information; here, the translation information represents the spatial translation position of the acquisition device, such as a sensor, and the rotation information represents the spatial rotation position of the acquisition device.
- the encoder can encode the translation information to determine a translation encoding vector; encode the rotation information to determine a rotation encoding vector; and determine the translation encoding vector and the rotation encoding vector as a spatial encoding vector.
- the encoder merges the time coding vector and the space coding vector determined by the independent position coding; illustratively, the time coding vector, the translation coding vector and the rotation coding vector are vector-concatenated to obtain the spatiotemporal coding vector corresponding to each depth map.
- in the above manner, the encoder performs spatiotemporal position coding on the time information and spatial information corresponding to each point cloud in the point cloud sequence to determine the spatiotemporal coding vector sequence.
- the encoder may include a spatiotemporal position encoding module, which in turn may include separate position encoding modules for the time information, the translation information and the rotation information.
- the encoder normalizes the timestamp (time information), the sensor spatial translation position (translation information) and the sensor spatial rotation position (rotation information) respectively, and inputs them into the corresponding position encoding module to convert them into corresponding high-dimensional vectors.
- the three high-dimensional vectors are concatenated as a spatiotemporal encoding vector.
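- As a concrete illustration of this normalization and concatenation, the following is a minimal Python sketch; the sinusoidal (Fourier-feature) form of the encoding, the normalization ranges, and the number of frequencies are assumptions for illustration and are not specified by the embodiment.

```python
import numpy as np

def encode_scalar(v, num_freqs=10):
    """Sinusoidal encoding of a single normalized scalar."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi
    return np.concatenate([np.sin(freqs * v), np.cos(freqs * v)])

def spatiotemporal_code(t, translation, rotation, t_range, xyz_range, num_freqs=10):
    """Normalize the timestamp, (x, y, z) translation and (roll, pitch, yaw)
    rotation, encode each component, and concatenate the resulting vectors."""
    t_norm = (t - t_range[0]) / (t_range[1] - t_range[0])
    trans_norm = (np.asarray(translation) - xyz_range[0]) / (xyz_range[1] - xyz_range[0])
    rot_norm = (np.asarray(rotation) + np.pi) / (2.0 * np.pi)   # assumes angles in [-pi, pi]

    parts = [encode_scalar(t_norm, num_freqs)]
    parts += [encode_scalar(c, num_freqs) for c in trans_norm]
    parts += [encode_scalar(c, num_freqs) for c in rot_norm]
    return np.concatenate(parts)   # time, translation and rotation encodings concatenated
```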
- the embodiments of the present application fully consider that, in practical applications, a lidar point cloud sequence is usually acquired by an acquisition device equipped with a lidar sensor moving through a certain space; the time information is therefore combined with the spatial posture information of the acquisition device, such as the encodings of translation and rotation, as the input of the implicit neural network, thereby improving the neural network's representation of the image and improving performance in multiple downstream visual tasks.
- the encoder may also prune the initial network and/or quantize the network parameters based on the currently updated network parameters to determine the point cloud representation network parameters when the preset update conditions are met.
- the encoder may reduce the network parameters (weights) corresponding to the network paths that have a smaller impact on the reconstructed depth image to 0 in the currently updated network parameters, that is, prune the initial network to obtain the pruned network parameters; and/or the encoder may quantize the currently updated network parameters or the pruned network parameters, and determine the obtained quantized network parameters as the point cloud representation network parameters.
- the encoder uses a depth map sequence and a spatiotemporal coding vector sequence to perform network overfitting training on the initial network, iteratively updates the network parameters of the initial network during the training process, and performs network pruning (pruning) on the initial network when preset update conditions are met, quantizes the pruned network parameters, and weight encodes the quantized network parameters to determine the point cloud representation network parameters.
- pruning and quantizing the network parameters can further compress the network parameters that need to be encoded and improve the encoding efficiency.
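- A minimal numpy sketch of this step, assuming magnitude-based pruning and uniform scalar quantization; the pruning ratio and bit depth are illustrative assumptions, not values from the embodiment.

```python
import numpy as np

def prune_and_quantize(weights, prune_ratio=0.3, num_bits=8):
    """Zero the smallest-magnitude weights, then uniformly quantize.

    Returns integer levels plus the (scale, offset) needed to dequantize
    at the decoder side.
    """
    w = weights.copy()
    # Magnitude pruning: weights below the prune_ratio quantile are set to 0.
    threshold = np.quantile(np.abs(w), prune_ratio)
    w[np.abs(w) < threshold] = 0.0

    # Uniform scalar quantization to num_bits levels.
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / (2 ** num_bits - 1)
    if scale == 0:
        scale = 1.0          # all-equal weights: avoid division by zero
    levels = np.round((w - w_min) / scale).astype(np.int32)
    return levels, scale, w_min

def dequantize(levels, scale, w_min):
    return levels.astype(np.float32) * scale + w_min
```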
- Figure 5 is an optional flowchart of a decoding method provided in an embodiment of the present application, which will be explained in conjunction with the steps shown in Figure 5.
- the decoder parses the encoding information corresponding to the point cloud sequence from the bitstream, decodes the encoding information corresponding to the point cloud sequence, and determines the point cloud representation network parameters.
- the point cloud representation network parameters are determined by the encoder updating the network parameters of the initial network based on the time information and spatial information corresponding to the point cloud sequence, and the depth map sequence corresponding to the point cloud sequence. The process of determining the point cloud representation network parameters is as described in the encoding process of the encoder end, and will not be repeated here.
- the decoder may perform lossless Huffman decoding on the encoded information corresponding to the point cloud sequence to determine the point cloud representation network parameters.
- the decoder receives the time information corresponding to the point cloud sequence, the spatial information corresponding to the point cloud sequence, and the bitstream sent by the encoder. In this way, the decoder parses and decodes the bitstream to determine the point cloud representation network parameters, and receives the time information corresponding to the point cloud sequence and the spatial information corresponding to the point cloud sequence.
- S202 Determine a point cloud representation network based on point cloud representation network parameters.
- the decoder includes an initial network, and the network architecture of the initial network corresponds to the network architecture of the initial network in the encoder.
- the decoder determines the initial network, and updates the network parameters of the initial network using the point cloud representation network parameters obtained by decoding to determine the point cloud representation network.
- the initial network includes: a multi-layer perceptron and a multi-layer cascaded convolutional layer, the multi-layer perceptron is used to express the initial depth map signal; the multi-layer cascaded convolutional layer is used to perform convolution recovery based on the initial depth map signal to determine the reconstructed depth image.
- S203 Using the point cloud representation network, determine at least one reconstructed depth image according to the time information and spatial information corresponding to the point cloud sequence.
- the time information represents the acquisition time corresponding to the point cloud sequence
- the spatial information represents the spatial motion information of the acquisition device of the point cloud sequence.
- the time information and spatial information corresponding to the point cloud sequence include: the time information corresponding to each point cloud in the point cloud sequence; the spatial information corresponding to the point cloud sequence includes: the spatial information corresponding to each point cloud.
- the decoder uses the point cloud representation network to determine at least one reconstructed depth image based on the time information and spatial information corresponding to at least one point cloud in the point cloud sequence.
- depending on actual decoding needs, the reconstructed depth image corresponding to each point cloud in the entire point cloud sequence may be determined from the time information and spatial information corresponding to every point cloud, or the reconstructed depth images corresponding to only a part of the point clouds may be determined from the time information and spatial information corresponding to that part of the point clouds, as the at least one reconstructed depth image.
- the point cloud representation network is determined according to the point cloud representation network parameters transmitted by the encoder, and the point cloud representation network parameters are a representation of the depth maps in the form of neural network parameters. Therefore, the decoder can use the point cloud representation network to reconstruct, from at least one spatiotemporal coding vector, the at least one reconstructed depth image corresponding to that at least one spatiotemporal coding vector.
- S204 Determine at least one reconstructed point cloud corresponding to the point cloud sequence based on the at least one reconstructed depth image.
- for each reconstructed depth image in the at least one reconstructed depth image, the decoder can recover the reconstructed point cloud corresponding to that reconstructed depth image, thereby determining the at least one reconstructed point cloud.
- coordinate inverse mapping is performed on at least one reconstructed depth image to determine at least one reconstructed point cloud.
- the decoder determines the point cloud representation network according to the point cloud representation network parameters transmitted by the encoder, uses the time information and spatial information corresponding to the point cloud representation network and at least one point cloud to reconstruct at least one depth image, and then restores at least one reconstructed point cloud based on at least one depth image, realizing the decoding and reconstruction of the depth image represented by the implicit neural network parameters based on spatiotemporal position coding using a neural network, so that the advantages of the neural network can be more fully utilized, the complexity of the operation can be greatly reduced, the decoding efficiency can be improved, and the reconstruction quality of the decoding can be improved.
- the high-frequency components of the point cloud representation network are obtained according to the time information and spatial information corresponding to at least one point cloud, which can improve the representation effect of the neural network, further improve the reconstruction quality, and thus improve the decoding performance.
- S203 may be implemented by executing the process of S2031 - S2032 as follows:
- the decoder performs spatiotemporal position encoding on at least one time information and at least one spatial information corresponding to at least one point cloud in the point cloud sequence, and determines at least one spatiotemporal encoding vector.
- the decoder includes a space-time coding module, which can be used to perform space-time position coding on at least one time information and at least one spatial information corresponding to at least one point cloud to determine at least one space-time coding vector.
- the decoder performs position encoding on the time information corresponding to each point cloud to determine the time encoding vector corresponding to each point cloud; performs position encoding on the spatial information corresponding to each point cloud to determine the space encoding vector corresponding to each point cloud; merges the time encoding vector with the space encoding vector to determine the space-time encoding vector corresponding to each point cloud, thereby determining at least one space-time encoding vector.
- the spatial information includes translation information and rotation information; the spatial information corresponding to each point cloud is position-encoded, and the decoder encodes the translation information to determine a translation encoding vector; encodes the rotation information to determine a rotation encoding vector; and the translation encoding vector and the rotation encoding vector are determined as a spatial encoding vector.
- the decoder performs vector splicing on the time encoding vector, the translation encoding vector, and the rotation encoding vector corresponding to each point cloud in at least one point cloud to determine at least one spatiotemporal encoding vector.
- S2032 Utilize the point cloud representation network to perform restoration and reconstruction according to at least one spatiotemporal coding vector, and determine at least one reconstructed depth image.
- the time information and spatial information corresponding to the depth map include: time information t, translation information (x, y, z) and rotation information (α, β, γ).
- the encoder inputs the time information t, translation information (x, y, z) and rotation information (α, β, γ) into the spatiotemporal coding module for position coding respectively to determine the spatiotemporal coding vector corresponding to the depth map.
- the encoder inputs the spatiotemporal coding vector into the initial network, and expresses the initial depth map signal according to the spatiotemporal coding vector through the multi-layer perceptron of the initial network, such as an image signal with a resolution lower than the original depth map, and then through the multi-layer cascaded convolution layers in the initial network, performs step-by-step convolution recovery based on the initial depth map signal to determine the reconstructed depth map corresponding to the depth map.
- the decoder performs coordinate inverse mapping based on the reconstructed depth map to determine the reconstructed point cloud.
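- A minimal sketch of such a coordinate inverse mapping, reversing the spherical projection assumed in the earlier encoder-side sketch: each non-empty depth-map cell is converted back to a 3D point from its row (elevation), column (azimuth), and stored range. The field-of-view values mirror the earlier assumption and are not taken from the embodiment.

```python
import numpy as np

def depth_map_to_point_cloud(depth_map, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Inverse of the spherical projection: recover (N, 3) points from an (h, w) depth map."""
    h, w = depth_map.shape
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    fov = fov_up - fov_down

    rows, cols = np.nonzero(depth_map)       # only cells that hold a range value
    rng = depth_map[rows, cols]

    # Undo the pixel-coordinate normalization of the forward mapping.
    azimuth = (1.0 - cols / w) * 2.0 * np.pi - np.pi
    elevation = (1.0 - rows / h) * fov + fov_down

    x = rng * np.cos(elevation) * np.cos(azimuth)
    y = rng * np.cos(elevation) * np.sin(azimuth)
    z = rng * np.sin(elevation)
    return np.stack([x, y, z], axis=1)
```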
- the method of the embodiment of the present application can support geometric lossy compression of point clouds and has excellent performance in data compression.
- BD-Rate (Table 1) is used to quantitatively evaluate lossy compression.
- the embodiment of the present application provides an average performance gain of 60.39% and 47.78% relative to G-PCC.
- This data shows that the compression efficiency of the embodiment of the present application significantly exceeds the current point cloud codec G-PCC, greatly improving the encoding and decoding performance.
- the decoding speed of the embodiment of the present application is also higher than that of G-PCC: only the corresponding time information, such as a timestamp, needs to be input to decode and reconstruct the point cloud at that time, supporting real-time decoding processing.
- the point clouds in the point cloud sequence are point clouds containing noise, and the at least one reconstructed point cloud is at least one denoised point cloud.
- the point cloud sequence input to the encoder may be a noisy radar point cloud sequence.
- the decoder can output a denoised reconstructed point cloud without using an additional denoising method.
- the radar point cloud sequence with different degrees of noise added is projected into a depth map sequence and then input into the initial network of the encoder for overfitting training.
- the trained network parameters are encoded and decoded, and the reconstructed point cloud output by the decoder based on the decoded network parameters has a denoising effect.
- a better performance evaluation index can be obtained.
- Gaussian noise and random noise with different degrees of distortion are used, and CD (Chamfer distance) is used as an evaluation index.
- the experimental results of denoising are shown in Table 2. Compared with the reconstructed point clouds of G-PCC under the same degree of distortion and different filter combinations, the CD value of the embodiment of the present application is better, indicating a better denoising effect.
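- For reference, a minimal sketch of the Chamfer distance (CD) metric between an original and a reconstructed point cloud, using a KD-tree for nearest-neighbour search; whether squared or unsquared distances are used by the embodiment is not specified, so the unsquared symmetric form is shown.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pc_a, pc_b):
    """Symmetric Chamfer distance between two (N, 3) / (M, 3) point clouds."""
    d_ab, _ = cKDTree(pc_b).query(pc_a)   # nearest neighbour in B for every point of A
    d_ba, _ = cKDTree(pc_a).query(pc_b)   # nearest neighbour in A for every point of B
    return d_ab.mean() + d_ba.mean()
```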
- the original lidar point cloud sequence can be geometrically lossy compressed and decoded using the encoding and decoding method of the embodiments of the present application, and the obtained reconstructed point cloud can be used in downstream visual tasks such as target detection and semantic segmentation to obtain better performance.
- a pre-trained semantic segmentation network is used to perform semantic segmentation on at least one reconstructed point cloud, and a semantic segmentation result corresponding to the at least one reconstructed point cloud is determined.
- the radar point cloud sequence is projected into a depth map sequence and then input into the initial network of the encoder for overfitting training.
- the trained network parameters are encoded and decoded.
- semantic segmentation is performed on the reconstructed point cloud output by the decoder based on the decoded network parameters, and the semantic segmentation result corresponding to the reconstructed point cloud can be obtained.
- a pre-trained target detection network can also be used to perform target detection on at least one reconstructed point cloud to determine the target detection result corresponding to the at least one reconstructed point cloud.
- the reconstructed point cloud obtained by the encoding and decoding method of the embodiment of the present application can also be applied to other visual task processing, which is specifically selected according to the actual situation and is not limited by the embodiment of the present application.
- the reconstructed point cloud output after the compression process shown in FIG7 is sent to the pre-trained semantic segmentation network.
- the reconstructed point cloud obtained by the encoding and decoding method of the embodiment of the present application can achieve better performance evaluation indicators.
- the mean per-class intersection over union (mIoU) and overall accuracy (OA) of the embodiment of the present application (NeRI) and G-PCC are compared under the same geometric distortion. It can be seen that under the same peak signal-to-noise ratio (PSNR), the mIoU and OA values of the embodiment of the present application are higher.
- This data shows that the encoding and decoding reconstruction quality of the embodiment of the present application is higher, and it has obvious advantages in actual visual task processing compared with the current G-PCC encoding and decoding.
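- For completeness, a minimal sketch of how per-class intersection over union (mIoU) and overall accuracy (OA) are typically computed from predicted and ground-truth labels; the exact evaluation protocol of the embodiment may differ.

```python
import numpy as np

def miou_and_oa(pred_labels, gt_labels, num_classes):
    """Compute mean per-class IoU and overall accuracy from integer label arrays."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (gt_labels, pred_labels), 1)    # confusion matrix: rows = ground truth

    tp = np.diag(conf).astype(np.float64)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    iou = np.where(union > 0, tp / np.maximum(union, 1), np.nan)

    miou = np.nanmean(iou)                          # ignore classes absent from both
    oa = tp.sum() / conf.sum()
    return miou, oa
```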
- the embodiment of the present application provides a decoder 1, as shown in FIG11 , comprising:
- the decoding part 11 is configured to decode the encoded information corresponding to the point cloud sequence in the code stream to determine the point cloud representation network parameters; the point cloud representation network parameters are determined by the encoder updating the network parameters of the initial network based on the time information and spatial information corresponding to the point cloud sequence and the depth map sequence corresponding to the point cloud sequence;
- a network parameter determination part 12 is configured to determine a point cloud representation network based on the point cloud representation network parameters
- the image reconstruction part 13 is configured to use the point cloud representation network to determine at least one reconstructed depth image according to the time information and spatial information corresponding to the point cloud sequence;
- the point cloud reconstruction part 14 is configured to determine at least one reconstructed point cloud corresponding to the point cloud sequence based on the at least one reconstructed depth image.
- the time information corresponding to the point cloud sequence includes: the time information corresponding to each point cloud in the point cloud sequence; the spatial information corresponding to the point cloud sequence includes: the spatial information corresponding to each point cloud; the image reconstruction part 13 is also configured to perform spatiotemporal position encoding on at least one time information and at least one spatial information corresponding to at least one point cloud in the point cloud sequence, and determine at least one spatiotemporal encoding vector; use the point cloud representation network to perform restoration and reconstruction according to the at least one spatiotemporal encoding vector, and determine the at least one reconstructed depth image.
- the image reconstruction part 13 is further configured to, for each point cloud in the at least one point cloud, position encode the time information corresponding to the each point cloud to determine the time encoding vector corresponding to the each point cloud; position encode the space information corresponding to the each point cloud to determine the space encoding vector corresponding to the each point cloud; merge the time encoding vector with the space encoding vector to determine the space-time encoding vector corresponding to the each point cloud, thereby determining the at least one space-time encoding vector.
- the spatial information includes translation information and rotation information; the network parameter determination part 12 is also configured to encode the translation information to determine the translation coding vector; encode the rotation information to determine the rotation coding vector; and determine the translation coding vector and the rotation coding vector as the spatial coding vector.
- the network parameter determination part 12 is further configured to determine an initial network, and use the point cloud representation network parameters to update the network parameters of the initial network to determine the point cloud representation network.
- the initial network includes: a multi-layer perceptron and a multi-layer cascaded convolutional layer, the multi-layer perceptron is used to express the initial depth map signal; the multi-layer cascaded convolutional layer is used to perform convolution recovery based on the initial depth map signal to determine the reconstructed depth image.
- the point cloud reconstruction part 14 is further configured to perform coordinate inverse mapping on the at least one reconstructed depth image to determine the at least one reconstructed point cloud.
- the point clouds in the point cloud sequence are point clouds containing noise, and the at least one reconstructed point cloud is at least one denoised point cloud.
- the decoder 1 also includes a semantic segmentation part; the semantic segmentation part is configured to use a pre-trained semantic segmentation network to perform semantic segmentation on the at least one reconstructed point cloud, and determine a semantic segmentation result corresponding to the at least one reconstructed point cloud.
- the decoding part 11 is further configured to receive the time information corresponding to the point cloud sequence, the spatial information corresponding to the point cloud sequence and the code stream sent by the encoder.
- the embodiment of the present application provides an encoder 2, as shown in FIG12, including:
- a mapping part 21 is configured to perform coordinate transformation mapping on the point cloud sequence to determine a depth map sequence corresponding to the point cloud sequence;
- the spatiotemporal coding part 22 is configured to perform spatiotemporal position coding on the time information and spatial information corresponding to the point cloud sequence, and determine the spatiotemporal coding vector sequence corresponding to the depth map sequence;
- the time information represents the acquisition time corresponding to the point cloud sequence;
- the spatial information represents the spatial motion information of the acquisition device of the point cloud sequence;
- a parameter updating part 23 is configured to update the network parameters of the initial network using the depth map sequence and the spatiotemporal coding vector sequence to determine the point cloud representation network parameters;
- the encoding part 24 is configured to encode the point cloud representation network parameters and determine the encoding information corresponding to the point cloud sequence.
- the time information corresponding to the point cloud sequence includes: the time information corresponding to each point cloud in the point cloud sequence; the spatial information corresponding to the point cloud sequence includes: the spatial information corresponding to each point cloud; the spatiotemporal encoding part 22 is also configured to, for each point cloud in the point cloud sequence, position-encode the time information corresponding to each point cloud to determine the time encoding vector corresponding to each depth map; position-encode the spatial information corresponding to each point cloud to determine the space encoding vector corresponding to each depth map; and merge the time encoding vector with the space encoding vector to determine the space-time encoding vector corresponding to each depth map, thereby determining the space-time encoding vector sequence.
- the spatial information includes translation information and rotation information; the space-time coding part 22 is also configured to encode the translation information to determine the translation coding vector; encode the rotation information to determine the rotation coding vector; and determine the translation coding vector and the rotation coding vector as the spatial coding vector.
- the initial network includes: a multi-layer perceptron and a multi-layer cascaded convolutional layer, the multi-layer perceptron is used to express the initial depth map signal; the multi-layer cascaded convolutional layer is used to perform convolution recovery based on the initial depth map signal to determine the reconstructed depth image.
- the parameter updating part 23 is also configured to reconstruct each depth map in the depth map sequence through the initial network according to the spatiotemporal coding vector corresponding to each depth map in the spatiotemporal coding vector sequence, and determine the reconstructed depth image corresponding to each depth map; based on the reconstructed depth image and each depth map, iteratively update the network parameters of the initial network until the preset update conditions are met, and then determine the point cloud representation network parameters based on the currently updated network parameters.
- the parameter updating part 23 is further configured to perform pruning and/or network parameter quantization on the initial network based on the currently updated network parameters, so as to determine the point cloud representation network parameters.
- the encoder 2 also includes a sending part, which is configured to generate a code stream based on the network parameter encoding information corresponding to the point cloud sequence; and send the time information corresponding to the point cloud sequence, the spatial information corresponding to the point cloud sequence and the code stream to the decoder.
- the embodiment of the present application further provides a decoder
- FIG13 is an optional structural diagram of the decoder 3 provided in the embodiment of the present application.
- the decoder 3 includes: a first memory 32 and a first processor 33.
- the first memory 32 and the first processor 33 are connected through a first communication bus 34;
- the first memory 32 is used to store executable instructions;
- the first processor 33 is used to execute the executable instructions stored in the first memory 32, and implement the decoding method provided in the embodiment of the present application.
- the embodiment of the present application further provides an encoder
- FIG14 is an optional structural diagram of the encoder 4 provided in the embodiment of the present application.
- the encoder 4 includes: a second memory 42 and a second processor 43.
- the second memory 42 and the second processor 43 are connected via a second communication bus 44;
- the second memory 42 is used to store executable instructions;
- the second processor 43 is used to execute the executable instructions stored in the second memory 42, and implement the encoding method provided in the embodiment of the present application.
- An embodiment of the present application provides a computer-readable storage medium in which executable instructions are stored.
- When the executable instructions are executed by a first processor, the first processor will be caused to execute any one of the decoding methods provided in the embodiments of the present application; or, when the executable instructions are executed by a second processor, the second processor will be caused to execute any one of the encoding methods provided in the embodiments of the present application.
- the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface storage, optical disk, or CD-ROM; or it may be various devices including one or any combination of the above memories.
- executable instructions may be in the form of a program, software, software module, script or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine or other unit suitable for use in a computing environment.
- executable instructions may, but need not, correspond to a file in a file system, and may be stored as part of a file storing other programs or data, such as in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files storing one or more modules, subroutines, or code portions).
- executable instructions may be deployed to be executed on one computing device, or on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
- the embodiments of the present application may be provided as methods, systems, or computer program products. Therefore, the present application may adopt the form of hardware embodiments, software embodiments, or embodiments in combination with software and hardware. Moreover, the present application may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) that contain computer-usable program code.
- These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured product including an instruction device that implements the functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.
- These computer program instructions may also be loaded onto a computer or other programmable data processing device so that a series of operational steps are executed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.
- the embodiment of the present application provides a coding and decoding method, a decoder, an encoder and a computer-readable storage medium.
- the decoder determines the point cloud representation network according to the point cloud representation network parameters transmitted by the encoder, uses the point cloud representation network together with the time information and spatial information corresponding to at least one point cloud to reconstruct at least one depth image, and then restores at least one reconstructed point cloud from the at least one depth image. In this way, decoding and reconstruction of depth images represented by implicit neural network parameters based on spatiotemporal position encoding is realized, so that the advantages of the neural network are exploited more fully, the computational complexity is greatly reduced, the decoding efficiency is improved, and the reconstruction quality of the decoded output is improved.
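- As a minimal illustration of the decoder-side reconstruction described above, the sketch below back-projects a reconstructed depth (range) image into a 3D point cloud over a regular LiDAR scan grid. The helper names, the scan resolution, and the vertical field of view are assumptions made for illustration only and are not prescribed by the present application.

```python
import numpy as np

def range_image_to_points(depth, v_fov_deg=(-24.8, 2.0)):
    """Back-project a reconstructed depth (range) image onto 3D points.

    depth: (H, W) array of ranges in metres; each row corresponds to one
    elevation ring of the spinning sensor, each column to one azimuth step.
    """
    H, W = depth.shape
    elev = np.radians(np.linspace(v_fov_deg[1], v_fov_deg[0], H))[:, None]     # per-row elevation
    azim = np.radians(np.linspace(180.0, -180.0, W, endpoint=False))[None, :]  # per-column azimuth
    x = depth * np.cos(elev) * np.cos(azim)
    y = depth * np.cos(elev) * np.sin(azim)
    z = depth * np.sin(elev)
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[depth.reshape(-1) > 0]          # drop pixels with no laser return

# Usage (hypothetical names): query the decoded implicit network with the
# spatiotemporal code of one frame, then recover that frame's point cloud.
# depth = query_network(decoded_params, frame_time, frame_pose)   # (H, W) depth image
# cloud = range_image_to_points(depth)
```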
- In addition, the high-frequency components of the point cloud representation network are obtained according to the time information and spatial information corresponding to the at least one point cloud, which can improve the representation capability of the neural network, further improve the reconstruction quality, and thereby improve the decoding performance.
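- The role of the time and spatial information in supplying high-frequency components can be illustrated with the sinusoidal position encoding commonly used with implicit neural representations. This is an assumed, generic formulation; the number of frequency bands and the exact basis are not fixed by the present application.

```python
import numpy as np

def positional_encoding(x, num_freqs=8):
    """Map x to [sin(2^k * pi * x), cos(2^k * pi * x)] for k = 0 .. num_freqs - 1."""
    x = np.atleast_1d(np.asarray(x, dtype=np.float64))
    freqs = 2.0 ** np.arange(num_freqs) * np.pi        # geometrically spaced frequencies
    angles = x[..., None] * freqs                       # (..., num_freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1).reshape(*x.shape[:-1], -1)

# e.g. encode the normalised acquisition time of frame t out of T frames:
# t_code = positional_encoding(t / T)                   # 2 * num_freqs features
```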
- the encoder converts the point cloud into a depth map, which gives the data a more efficient and compact spatial arrangement; through spatiotemporal position encoding, the depth image is represented by an implicit neural network, which further improves the representation efficiency of the point cloud sequence, thereby reducing the computational complexity and improving the coding efficiency.
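- The point cloud to depth map conversion mentioned above can be pictured as the usual spherical (range-image) projection of a spinning LiDAR scan. The grid resolution, field of view, and function name below are illustrative assumptions, not values taken from the present application.

```python
import numpy as np

def points_to_range_image(points, H=64, W=2048, v_fov_deg=(-24.8, 2.0)):
    """Project an (N, 3) point cloud onto an (H, W) depth map (0 = no return)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azim = np.arctan2(y, x)                                   # azimuth in [-pi, pi]
    elev = np.arcsin(z / np.maximum(r, 1e-9))                 # elevation angle
    v_lo, v_hi = np.radians(v_fov_deg[0]), np.radians(v_fov_deg[1])
    col = (0.5 * (1.0 - azim / np.pi) * W).astype(np.int64) % W
    row = np.round((v_hi - elev) / (v_hi - v_lo) * (H - 1)).astype(np.int64)
    keep = (row >= 0) & (row < H)                             # discard points outside the FOV
    depth = np.zeros((H, W), dtype=np.float32)
    depth[row[keep], col[keep]] = r[keep]                     # last write wins on pixel collisions
    return depth
```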
- the encoder of the embodiments of the present application not only performs position encoding on the time information, but also combines it with the spatial posture information of the sensor (such as translation and rotation) by splicing the encoding vectors together, and the spliced vector is used as the input of the implicit neural network, thereby obtaining a better characterization effect, improving the encoding quality, and further improving the encoding performance.
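- One way to picture the splicing of encoding vectors mentioned above is sketched below. The decomposition of the sensor posture into a translation and roll/pitch/yaw angles, the helper names, and the feature sizes are assumptions for illustration; the present application does not fix them.

```python
import numpy as np

def fourier_encode(v, num_freqs=8):
    """Sin/cos features at geometrically spaced frequencies (same idea as the earlier sketch)."""
    v = np.atleast_1d(np.asarray(v, dtype=np.float64))
    ang = v[..., None] * (2.0 ** np.arange(num_freqs) * np.pi)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1).ravel()

def build_network_input(t_norm, translation, rotation_rpy, num_freqs=8):
    """Concatenate ("splice") the temporal code with codes of the sensor translation and rotation."""
    return np.concatenate([
        fourier_encode(t_norm, num_freqs),        # temporal code for the frame's acquisition time
        fourier_encode(translation, num_freqs),   # spatial code: sensor translation (x, y, z)
        fourier_encode(rotation_rpy, num_freqs),  # spatial code: sensor rotation (roll, pitch, yaw)
    ])
```

- A small network that maps such a spliced spatiotemporal vector to the depth values of one frame could then be fitted by the encoder and its parameters transmitted as the point cloud representation network parameters; this follow-up step is likewise only a hedged reading of the summary above.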
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
Embodiments of the present application disclose an encoding and decoding method, a decoder, an encoder and a computer-readable storage medium, which can improve the encoding and decoding efficiency and the image reconstruction quality, and thereby improve the encoding and decoding performance. The method comprises: decoding encoded information corresponding to a point cloud sequence in a bitstream and determining a point cloud representation network parameter, the point cloud representation network parameter being determined by the encoder performing a network parameter update of an initial network on the basis of time information and spatial information corresponding to the point cloud sequence and a depth map sequence corresponding to the point cloud sequence; determining a point cloud representation network on the basis of the point cloud representation network parameter; using the point cloud representation network and, on the basis of the time information and spatial information corresponding to the point cloud sequence, determining at least one reconstructed depth image, the time information representing an acquisition time corresponding to the point cloud sequence, and the spatial information representing spatial motion information of a collection device of the point cloud sequence; and, on the basis of the at least one reconstructed depth image, determining at least one reconstructed point cloud corresponding to the point cloud sequence.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2023/107558 WO2025015465A1 (fr) | 2023-07-14 | 2023-07-14 | Procédé de codage et de décodage, décodeur, codeur et support de stockage lisible par ordinateur |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2023/107558 WO2025015465A1 (fr) | 2023-07-14 | 2023-07-14 | Procédé de codage et de décodage, décodeur, codeur et support de stockage lisible par ordinateur |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025015465A1 true WO2025015465A1 (fr) | 2025-01-23 |
Family
ID=94280886
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2023/107558 Pending WO2025015465A1 (fr) | 2023-07-14 | 2023-07-14 | Procédé de codage et de décodage, décodeur, codeur et support de stockage lisible par ordinateur |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025015465A1 (fr) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109964237A (zh) * | 2016-09-15 | 2019-07-02 | 谷歌有限责任公司 | 图像深度预测神经网络 |
| EP3614673A1 (fr) * | 2018-08-23 | 2020-02-26 | InterDigital VC Holdings, Inc. | Procédé et appareil de codage/décodage d'un nuage de points représentant un objet 3d |
| US20200349722A1 (en) * | 2016-12-02 | 2020-11-05 | Google Llc | Determining structure and motion in images using neural networks |
| DE102021201168A1 (de) * | 2021-02-09 | 2022-08-11 | Robert Bosch Gesellschaft mit beschränkter Haftung | Ermitteln von Tiefenkarten aus Radar-Daten und/oder Lidar-Daten |
| WO2023020710A1 (fr) * | 2021-08-20 | 2023-02-23 | Telefonaktiebolaget Lm Ericsson (Publ) | Prédiction de profondeur améliorée pour une compression d'image de profondeur |
| US20230213643A1 (en) * | 2022-01-05 | 2023-07-06 | Waymo Llc | Camera-radar sensor fusion using local attention mechanism |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN114915795B (zh) | 基于二维规则化平面投影的点云编解码方法及装置 | |
| US20230055026A1 (en) | Point cloud data encoding method, point cloud data decoding method, point cloud data processing method, apparatuses, electronic device, computer program product, and computer-readable storage medium | |
| CN119011851B (zh) | 基于变分自编码器改进熵模型的视频压缩方法及系统 | |
| WO2023130333A1 (fr) | Procédé de codage et de décodage, codeur, décodeur, et support de stockage | |
| CN116016953A (zh) | 一种基于深度熵编码的动态点云属性压缩方法 | |
| CN114915792B (zh) | 基于二维规则化平面投影的点云编解码方法及装置 | |
| EP4236322B1 (fr) | Procédé et dispositif de codage/décodage de nuage de points basés sur une projection plane régularisée bidimensionnelle | |
| Wiemann et al. | Compressing ROS sensor and geometry messages with draco | |
| WO2025015465A1 (fr) | Procédé de codage et de décodage, décodeur, codeur et support de stockage lisible par ordinateur | |
| WO2022226850A1 (fr) | Procédé d'amélioration de qualité de nuage de points, procédés de codage et de décodage, appareils, et support de stockage | |
| JP7708511B2 (ja) | 点群コーディングのための方法、装置及び媒体 | |
| CN116325732A (zh) | 点云的解码、编码方法、解码器、编码器和编解码系统 | |
| WO2023116897A1 (fr) | Procédé, appareil et support de codage en nuage de points | |
| KR20250003303A (ko) | 암시적 신경망 표현 기반의 다중시점 비디오 코딩방법 및 장치 | |
| CN114283214B (zh) | 一种光场图像编码和解码方法、系统、装置、设备及介质 | |
| WO2023123284A1 (fr) | Procédé de décodage, procédé de codage, décodeur, codeur et support de stockage | |
| CN119741385B (zh) | 一种面向语义分割的激光雷达点云压缩系统 | |
| Meng et al. | PCGCD: Joint Point Cloud Geometry Compression and Denoising | |
| JP7735574B2 (ja) | 点群コーディングのための方法、装置、及び媒体 | |
| US20250337942A1 (en) | Method, device, and computer program product for compressing point cloud data | |
| CN118691691A (zh) | 点云数据的编码方法、解码方法、终端设备及存储介质 | |
| CN118842789B (zh) | 数据传输方法及相关设备 | |
| WO2024149258A1 (fr) | Procédé, appareil et support de codage de nuage de points | |
| Sridhara et al. | Region-Adaptive Learned Hierarchical Encoding for 3D Gaussian Splatting Data | |
| HK40067591A (en) | Point cloud data encoding method, decoding method, point cloud data processing method and apparatus |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23945364; Country of ref document: EP; Kind code of ref document: A1 |