WO2025096197A1 - Entropy bottleneck layer for learning-based mesh compression
- Publication number: WO2025096197A1 (application PCT/US2024/051558)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- mesh
- rate
- obtaining
- entropy
- vertices
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/001—Model-based coding, e.g. wire frame
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/002—Image coding using neural networks
Definitions
- 3D point cloud data are essentially discrete samples on the surfaces of objects or scenes. To fully represent the real world with point samples, in practice it requires a huge number of points. For instance, a typical VR immersive scene contains millions of points, while large-scale point clouds may contain hundreds of millions of points. Therefore, the processing of such large-scale point clouds is computationally expensive, especially for consumer devices, e.g., smartphones, tablets, and automotive navigation systems, that have limited computational power. Additionally, the discrete samples comprising the 3D point cloud data still contain incomplete information about the underlying surfaces of objects and scenes. Hence, efforts have recently been made to also explore the mesh representation for 3D scene/surface representation.
- mesh can be considered as a 3D point cloud accompanied with the connectivity information between the points.
- the mesh representation bridges the gap between point clouds and the underlying continuous surfaces through local 2D polygonal patches (called face) approximation of the underlying surface.
- mesh can simplify the point cloud by reducing the number of points. For example, a face in a mesh can be appropriate to represent a flat area while point clouds may require a dense block of points to represent the same flat area.
- the first step for any processing or inference on the mesh data is to have efficient storage methodologies.
- the down-sampled mesh is then fed to the subsequent machine task for further consumption.
- further reduction in storage space can be achieved by converting the raw mesh data (original or downsampled) into a fixed length codeword or a feature map living on a very low-resolution mesh.
- This codeword or the feature map can be converted to a bitstream through existing entropy coding techniques.
- the codeword or feature map is also useful by itself as it represents global or local surface information, respectively, of the underlying scene/object and can be paired with subsequent downstream (machine vision) tasks.
- a typical learning-based compression system is depicted in FIG. 2.
- An input signal, either an image, video, or point cloud (PC), is first sent to a feature extraction block (210) that is composed of learnable neural network layers.
- the generated feature map F is then entropy coded via an ENCF block (220).
- in the ENCF block, the feature map F is first rounded or quantized to integer values before being entropy coded into a bitstream BS.
- the bitstream is entropy decoded by a DECF block (230).
- the decoded feature map F is sent to a feature decoder (240) to reconstruct the input image, video, or point cloud PC.
- Such an end-to-end compression system is referred to as a deep feature based codec.
- an entropy bottleneck layer was introduced as shown in FIG. 3 to estimate the entropy of the feature map to be coded.
- the entropy bottleneck layer (320, EB block) replaces the ENCF and DECF in the compression system during the training stage. Of course, during the inference stage, the ENCF and DECF are brought back for actual coding purposes.
- the introduced EB block takes the feature map generated by the feature extractor (310) as input. It first outputs a distorted feature map F, i.e., the input with a certain amount of noise added.
- the EB block additionally outputs an estimated rate (entropy) for the feature map.
- the entropy bottleneck layer is differentiable by design. In the end, the training of the compression system is supervised jointly by the distortion between the input signal PC and the reconstructed signal output by the feature decoder (330), and by the estimated rate required to code the feature map.
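To make that training flow concrete, the following is a minimal PyTorch sketch of the FIG. 3 setup; the module names, the MSE distortion, and the rate-distortion trade-off weight lam are illustrative assumptions, not details specified by this document.

```python
# Hypothetical training step for the FIG. 3 pipeline (names are ours):
# feature_extractor (310), entropy_bottleneck (320), feature_decoder (330).
import torch

def training_step(pc, feature_extractor, entropy_bottleneck, feature_decoder,
                  optimizer, lam=0.01):
    optimizer.zero_grad()
    f = feature_extractor(pc)                    # feature map F
    f_tilde, rate = entropy_bottleneck(f)        # distorted F and estimated rate
    pc_hat = feature_decoder(f_tilde)            # reconstructed signal
    distortion = torch.mean((pc - pc_hat) ** 2)  # e.g., an MSE distortion
    loss = distortion + lam * rate               # joint rate-distortion loss
    loss.backward()                              # possible because EB is differentiable
    optimizer.step()
    return loss.item()
```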
- the entropy bottleneck layer is designed as shown in FIG. 4.
- Certain uniform noise is injected (410) into the input feature map F.
- the noise level is controlled by an input parameter, e.g., the variance σ.
- a feature map F distorted by the noise is outputted from the EB block.
- the feature map F is sent to a DR (distribution to rate) block (420).
- the DR block additionally takes the distribution parameter set P and the variance σ as input, and outputs the estimated rate required to code the feature map F.
- the distribution parameter set P models the probability distribution of the input feature F. It will be iteratively updated to minimize the rate during training.
- the two blocks (410, 420) are not structured using neural network layers, but as deterministic procedures.
- the distribution parameter set P contains parameters to be learned during training, and is updated via backward propagation during the training stage.
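A minimal sketch of those two deterministic blocks, assuming uniform noise of width w (controlled by the variance σ) and, purely for illustration, a Gaussian density whose mean and scale play the role of the learnable parameter set P:

```python
import torch

def gaussian_cdf(x, mean, scale):
    return 0.5 * (1 + torch.erf((x - mean) / (scale * 2 ** 0.5)))

def entropy_bottleneck(f, mean, scale, w=1.0):
    # (410) inject uniform noise in [-w/2, w/2] to obtain the distorted map
    f_tilde = f + (torch.rand_like(f) - 0.5) * w
    # (420) DR block: probability mass of a bin of width w centered at each value
    q = gaussian_cdf(f_tilde + w / 2, mean, scale) - gaussian_cdf(f_tilde - w / 2, mean, scale)
    rate = -torch.log2(q.clamp_min(1e-9)).sum()  # estimated bits to code F
    return f_tilde, rate
```

Here mean and scale would be registered as learnable tensors so that backpropagation updates them to minimize the rate, matching the behavior described above.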
- FIG. 5 illustrates a typical diagram of a learning-based mesh compression system that is composed of two branches.
- the main branch, for the feature coding function, is the same as in the typical learning-based compression system of FIG. 2.
- the encoder is to extract and aggregate features from the input mesh using a feature extraction block (510).
- the extracted feature map F is sent to an entropy coder ENCF (530).
- the feature map F decoded by an entropy decoder DECF (550) is sent to a feature decoder (560) to reconstruct the mesh.
- the input mesh is typically downsampled to a base mesh M with a lower resolution for the benefit of compression.
- the feature map F is associated with the base mesh M.
- the second branch of the mesh compression in FIG. 5 is to code the base mesh M. It is composed of an ENCM block (520) for encoding, and a DECM block (540) for decoding.
- the encoding and decoding of the base mesh involve coding a list of faces. In the triangle mesh case, each face is represented by a list of three connected points. Each point is described by a 3D coordinate.
- M = (X, T), where X is the list of points, and T is the list of faces. Each point in a face is represented by its index in X.
- To code a base mesh one needs to code the point list X, and the face list T.
- in FIG. 6, the diagram of coding the base mesh (or an original mesh in another embodiment) is illustrated.
- the pipeline is composed of two sub-branches.
- a first sub-branch is the coding of the face list T. It is simply done by an arithmetic encoder AET block (630) on the encoder side, and an arithmetic decoder ADT block (650) on the decoder side.
- the coded faces are part of the bitstream BS_M.
- the second sub-branch is the coding of the point list X.
- Each point position in the list is quantized, by Qx block (610), before being arithmetically coded, by AEx block (620).
- the quantized point list is represented by X̂.
- the decoder first arithmetically decodes the bitstream with the ADx block (640); the result is then inverse quantized by the Qx^-1 block (660).
- the coded points are part of the bitstream BS_M.
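As a toy illustration of this sub-branch (the arithmetic coding itself is omitted), the base mesh M = (X, T) and the Qx / Qx^-1 steps might look as follows; all names and values are hypothetical:

```python
import numpy as np

X = np.array([[0.12, 0.50, 1.03],      # point list X: one 3D coordinate per vertex
              [0.98, 0.40, 1.10],
              [0.55, 1.20, 0.97]])
T = np.array([[0, 1, 2]])              # face list T: vertex indices of each triangle

qp = 0.05                              # quantization step size
X_hat = np.round(X / qp).astype(np.int64)  # Qx (610): integers fed to the arithmetic coder
X_rec = X_hat * qp                     # Qx^-1 (660): inverse quantization at the decoder
```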
- a first entropy bottleneck layer EBF (730) functions the same as the entropy bottleneck layer EB in FIG. 3. It outputs the reconstructed feature map and an estimated rate R_F for coding the feature map.
- the second mesh entropy bottleneck layer EBM (720) is proposed to take care of the coding of base mesh.
- the mesh entropy bottleneck layer takes a mesh as input, and outputs a reconstructed mesh and an estimated rate R_M for the mesh.
- a loss function is defined by a weighted sum of the total rate (R_F + R_M) and a distortion measured between the input mesh and the decoded mesh.
- the training of the learning-based mesh codec is then supervised by the joint rate-distortion loss, as sketched below.
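Under the same assumptions as the earlier training sketch, the FIG. 7 loss with its two bottleneck layers could be written as:

```python
import torch

def mesh_rd_loss(input_mesh, base_mesh, feat, eb_f, eb_m, decoder, lam=0.01):
    # EBF (730) and EBM (720): distorted outputs plus estimated rates R_F and R_M
    f_tilde, r_f = eb_f(feat)
    m_tilde, r_m = eb_m(base_mesh)
    mesh_hat = decoder(f_tilde, m_tilde)          # decoded mesh
    d = torch.mean((input_mesh - mesh_hat) ** 2)  # distortion (MSE here, as an example)
    return d + lam * (r_f + r_m)                  # weighted sum of rates and distortion
```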
- the base mesh has a different data structure than a feature map, so the existing entropy bottleneck layer cannot be applied to it directly.
- FIG. 8 depicts the diagram of a proposed mesh entropy bottleneck layer, according to an embodiment. Note that the coding of the face list T is typically lossless, and it consumes a fixed size of bitrate given the same input base mesh. Hence there is no need to count the bitrate required to code the face list. In the proposed diagram, we need only care about the coding of the point list.
- the input mesh is serialized (820).
- the serialization organizes the mesh into M_s, a 1D sequence of points representing a coding order of the points.
- the serialization is implemented by a process known as edgebreaker.
- This edgebreaker serialization process works as a state machine. In each state, it moves from one triangle to an adjacent one. Having passed through to an adjacent triangle, the next one to visit is either the left or the right neighbor, since the remaining neighbor is the one traversed in the current state. During this traversal, all visited triangles and bounding vertices are marked sequentially. Assuming that T is coded losslessly, the decoder can recover the exact same order.
- the organized sequence M_s also maintains a “prediction table” indicating which points are used as references to code the next point. It is worth noting that the “prediction table” can be virtual and generated on the fly as the edgebreaker runs over the face list T, as in the simplified sketch below.
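The following is a simplified stand-in for that traversal, not an actual edgebreaker: it walks the face list in order and, whenever a face introduces exactly one unvisited vertex, records three reference indices for that vertex, building the prediction table on the fly. The choice of x1 here is arbitrary; a real edgebreaker would take the vertex opposite the gate edge in the previously visited triangle.

```python
def build_prediction_table(T):
    """T: list of triangles, each a tuple of three vertex indices."""
    visited = set(T[0])          # seed with the first triangle's vertices
    table = {}                   # new vertex -> (x0, x1, x2) reference indices
    for face in T[1:]:
        new = [v for v in face if v not in visited]
        if len(new) != 1:        # the simplified walk skips other cases
            continue
        x_n = new[0]
        x0, x2 = [v for v in face if v != x_n]   # shared (gate) edge
        x1 = next(iter(visited - {x0, x2}))      # arbitrary visited vertex
        table[x_n] = (x0, x1, x2)
        visited.add(x_n)
    return table
```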
- a predictor X_P is computed by the prediction block (830) as per the following parallelogram equation: X_P = X_0 + X_2 − X_1.
- X_0, X_1 and X_2 are reference points, and X_P is the predicted position, from which we compute (840) a prediction error E_N = X_N − X_P, where X_N is the current point. Once predicted, the traversal moves from triangle X_0X_1X_2 to triangle X_0X_2X_N. The error is then sent to a DR block (850). Based on the estimated probability distribution parameter set P and the variance σ used to generate the noise, a rate R_N to encode E_N is estimated by the DR block (850). With the rates of all points estimated and accumulated, a total rate R required for the point list is outputted.
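In code, the parallelogram predictor and the resulting error are one line each (the coordinates below are made up):

```python
import numpy as np

x0 = np.array([0.0, 0.0, 0.0])
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([1.0, 1.0, 0.0])
x_n = np.array([0.1, 1.0, 0.0])    # current point X_N to be coded

x_p = x0 + x2 - x1                 # predictor X_P completes the parallelogram
e_n = x_n - x_p                    # prediction error E_N = X_N - X_P (840)
```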
- this proposed mesh entropy bottleneck layer is referred to as embodiment EN (early split of the branches with noise adder).
- the role of the DR block is to estimate the rate/entropy based on the distribution parameters.
- the error E_N = [e_1, e_2, e_3] consists of the errors to be coded along the x-, y-, and z-coordinates.
- the rate R_N can be estimated as the total entropy of the errors, R_N = −Σ_{i=1}^{3} log2 q(e_i), where q(e_i) is the estimated probability mass of having an error centered at e_i. It can be computed as the following integral: q(e_i) = ∫_{e_i−w/2}^{e_i+w/2} D(x) dx, where w is the width of the bin around e_i (set by the added noise, or by the quantization step size in the quantization-based embodiments).
- D(·) denotes the probability density function, which is parameterized by the parameter set P (to be learned during the training process via backpropagation).
- the probability distribution D(·) can be a Gaussian distribution.
- the parameter set consists of two numbers: a mean and a variance.
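Under that Gaussian example, a sketch of the DR block (850) follows directly from the integral above; the bin width w (from the added noise, or the quantization step size QP in later embodiments) is passed in as an assumption:

```python
import math

def normal_cdf(x, mean, scale):
    return 0.5 * (1.0 + math.erf((x - mean) / (scale * math.sqrt(2.0))))

def dr_rate(e_n, mean, scale, w):
    """Estimated bits R_N for one prediction error E_N = [e1, e2, e3]."""
    rate = 0.0
    for e in e_n:                       # errors along the x-, y-, z-coordinates
        q = normal_cdf(e + w / 2, mean, scale) - normal_cdf(e - w / 2, mean, scale)
        rate += -math.log2(max(q, 1e-12))
    return rate
```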
- a first embodiment is shown in FIG. 10. It is referred to as embodiment EQ (early split of the branches with quantization). Compared to embodiment EN as shown in FIG. 8, the noise adder (810) is replaced by a quantization block (1010) that performs both forward quantization and inverse quantization. Because of this change, the DR block takes as input the quantization step size QP instead of the noise variance σ when estimating the rate. Just like the noise variance σ, QP is provided as another type of hyperparameter indicating the level of compression; a larger quantization step size (or noise variance σ) leads to larger compression and distortion. Naturally, the rate depends on the quantization step size (or noise variance σ).
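This document elsewhere notes that quantization is applied during the forward computation of training and bypassed during backward propagation. A common way to realize this behavior (our illustrative choice, not necessarily the one used here) is the straight-through estimator:

```python
import torch

def ste_quantize(x, qp):
    x_hat = torch.round(x / qp) * qp    # forward: quantize, then inverse quantize
    # detach() keeps the rounding out of the autograd graph, so the backward
    # pass treats this block as the identity
    return x + (x_hat - x).detach()
```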
- a second embodiment is shown in FIG. 11. It is referred to as embodiment LN (late split of the branches with noise adder). Compared to embodiment EN as shown in FIG. 8, the starting position of the rate estimation branch is moved from before to after the noise adder (1110). The rest of the processing diagram remains basically the same, except that the distribution-to-rate estimation DR block does not take the noise variance σ as input. However, the derivation of the DR block still follows the rationale described before with respect to FIG. 8.
- a third embodiment is shown in FIG. 12. It is referred to as embodiment LQ (late split of the branches with quantization). Its difference compared to embodiment LN is that the noise adder (1110) is changed to a quantization block (1210). This embodiment presents good performance with a simpler design.
- the parameter set P in FIG. 13 is an output of the EDP block (1310).
- the EDP block takes the error E_N as input and outputs the estimated parameter set P.
- the EDP block consists of neural network layers similar to the feature extraction (FE) block; it digests the prediction error E_N and generates the parameter set P.
- the second embodiment is provided in FIG. 14, which is more advanced than the one in FIG. 13.
- the EDP (1410) takes E_N as input and outputs two sets of parameters, P and P″, where P″ is the hyperparameter set describing P. With P″, the parameter set P can be reproduced. The hyperparameter set P″ is additionally fed to another rate estimation block DR′ (1420) to estimate the rate of P″ (denoted as R_P).
- P′ is the distribution parameter set describing P″, and it is to be learned during training via backpropagation.
- the output rate R is the total of all the rates R_N plus the rate R_P.
- the EDP block consists of three steps as shown in FIG. 15.
- the first step applies neural network layers, such as a feature extractor (1510), to E_N.
- the feature extractor output is fed to a quantizer Q (1520), leading to the hyperparameter set P″.
- another group of neural network layers, such as a feature decoder (1530), takes P″ as input and outputs the parameter set P.
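A compact sketch of this three-step EDP block; the layer types and sizes are assumptions, since the text only specifies "neural network layers":

```python
import torch
import torch.nn as nn

class EDP(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        # step 1 (1510): feature extractor applied to the prediction errors E_N
        self.feature_extractor = nn.Sequential(
            nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        # step 3 (1530): feature decoder mapping P'' to the parameter set P
        self.feature_decoder = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2))  # e.g., mean and scale

    def forward(self, e_n):
        h = self.feature_extractor(e_n)
        # step 2 (1520): quantizer Q with straight-through rounding
        p_hyper = h + (torch.round(h) - h).detach()   # hyperparameter set P''
        p = self.feature_decoder(p_hyper)             # distribution parameter set P
        return p, p_hyper
```

In the FIG. 14 variant, p_hyper would additionally be fed to the DR′ block to obtain R_P.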
- each of the methods comprises one or more steps or actions for achieving the described method. Unless a specific order of steps or actions is required for proper operation of the method, the order and/or use of specific steps and/or actions may be modified or combined. Additionally, terms such as “first”, “second”, etc. may be used in various embodiments to modify an element, component, step, operation, etc., such as, for example, a “first decoding” and a “second decoding”. Use of such terms does not imply an ordering to the modified operations unless specifically required. So, in this example, the first decoding need not be performed before the second decoding, and may occur, for example, before, during, or in an overlapping time period with the second decoding.
- the implementations and aspects described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed may also be implemented in other forms (for example, an apparatus or program).
- An apparatus may be implemented in, for example, appropriate hardware, software, and firmware.
- the methods may be implemented in, for example, an apparatus, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, for example, computers, cell phones, portable/personal digital assistants (“PDAs”), and other devices that facilitate communication of information between end-users.
- PDAs portable/personal digital assistants
- references to “one embodiment” or “an embodiment” or “one implementation” or “an implementation”, as well as other variations thereof, mean that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment.
- the appearances of the phrase “in one embodiment” or “in an embodiment” or “in one implementation” or “in an implementation”, as well any other variations, appearing in various places throughout this application are not necessarily all referring to the same embodiment.
- this application may refer to “determining” various pieces of information. Determining the information may include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory.
- this application may refer to “accessing” various pieces of information. Accessing the information may include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, moving the information, copying the information, calculating the information, determining the information, predicting the information, or estimating the information.
- Additionally, this application may refer to “receiving” various pieces of information. Receiving is, as with “accessing”, intended to be a broad term. Receiving the information may include one or more of, for example, accessing the information, or retrieving the information (for example, from memory).
- receiving is typically involved, in one way or another, during operations, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
- such phrasing (for example, “at least one of A, B, and C”) is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C).
- This may be extended, as is clear to one of ordinary skill in this and related arts, for as many items as are listed.
- implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted.
- the information may include, for example, instructions for performing a method, or data produced by one of the described implementations.
- a signal may be formatted to carry the bitstream of a described embodiment.
- Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
- the formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
- the information that the signal carries may be, for example, analog or digital information.
- the signal may be transmitted over a variety of different wired or wireless links, as is known.
- the signal may be stored on a processor-readable medium.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
As a type of information, connectivity in a mesh presents challenges not only in designing efficient AI architectures, but also in training a learning-based compression system for meshes. In one implementation, we present an entropy bottleneck layer as an enabler that allows the training of an end-to-end learning-based compression system, as has been done for images/video/point clouds. Specifically, we propose ways to model the distortions introduced by compression. We also propose ways to model the distribution of the mesh points and to estimate the rate required to code the mesh.
Description
ENTROPY BOTTLENECK LAYER FOR LEARNING-BASED MESH COMPRESSION
TECHNICAL FIELD
[1] The present embodiments generally relate to a method and an apparatus for 3D mesh compression.
BACKGROUND
[2] 3D point cloud is a universal data format across several business domains, e.g., from autonomous driving, robotics, augmented reality/virtual reality (AR/VR), civil engineering, computer graphics, to the animation/movie industry. 3D LiDAR (Light Detection and Ranging) sensors have been widely deployed in self-driving cars. Affordable LiDAR sensors have been released, such as the Velodyne Velabit, the Apple iPad Pro 2020, and the Intel RealSense LiDAR camera L515. With great advances in sensing technologies, 3D point cloud data has become more practical than ever and is expected to be an ultimate enabler in the applications mentioned.
[3] Point cloud data is also believed to consume a large portion of network traffic, e.g., among connected cars over a 5G network, and in immersive communications (VR/AR). Point cloud understanding and communication would essentially lead to efficient representation formats. In particular, raw point cloud data need to be properly organized and processed for the purposes of world modeling and sensing. Furthermore, point clouds may represent a sequential scan of the same scene, which contains multiple moving objects. They are called dynamic point clouds, as compared to static point clouds captured from a static scene or static objects. Dynamic point clouds are typically organized into frames, with different frames being captured at different times.
[4] Mesh is another data format that is composed of both points (also known as vertices) and faces. Compared to point clouds, a mesh is assumed to be a complete model of 3D surfaces, while the points in point clouds are only a set of discretized samples of the surface.
[5] For Computer Graphics (CG), data is typically available in mesh format as the data is generated by the content providers themselves. For data acquired by sensors, including LiDAR, the raw data is typically in point cloud format. For those point cloud data, mesh can be created via a meshing procedure.
SUMMARY
[6] According to an embodiment, a method for training a learning-based mesh compression system with an entropy bottleneck layer is presented, comprising: obtaining a mesh; adjusting said mesh to obtain an adjusted mesh; performing a serialization on said mesh or the adjusted mesh to obtain an ordered list of vertices; for a vertex of said ordered list of vertices: predicting said vertex to obtain a predicted position, obtaining a prediction error between a position of said vertex and said predicted position, and obtaining a rate of encoding said prediction error based on a probability distribution parameter of said prediction error; obtaining a loss value based on said rate; and updating parameters for neural networks used to encode and decode said mesh, based on said loss value.
[7] According to another embodiment, an apparatus for training a learning-based mesh compression system with an entropy bottleneck layer is presented, comprising one or more processors and at least one memory coupled to said one or more processors, wherein said one or more processors are configured to obtain a mesh; adjust said mesh to obtain an adjusted mesh; perform a serialization on said mesh or the adjusted mesh to obtain an ordered list of vertices; for a vertex of said ordered list of vertices: predict said vertex to obtain a predicted position, obtain a prediction error between a position of said vertex and said predicted position, and obtain a rate of encoding said prediction error based on a probability distribution parameter of said prediction error; obtain a loss value based on said rate; and update parameters for neural networks used to encode and decode said mesh, based on said loss value.
[8] In one embodiment, said loss value is further based on a distortion between said mesh and a reconstructed version of said adjusted mesh.
[9] In one embodiment, said obtaining a rate is performed for each vertex of a plurality of vertices in said ordered list of vertices.
[10] In one embodiment, said predicting said vertex is based on previous vertices in said ordered list of vertices.
[11] In one embodiment, said predicting said vertex is based on a prediction table for said ordered list of vertices, wherein said prediction table indicates which vertices are used as references for said vertex.
[12] In one embodiment, said serialization is based on an edgebreaker.
[13] In one embodiment, said adjusting said mesh is performed by adding noise to said mesh.
[14] In one embodiment, a uniform noise with a variance is added as said noise, and wherein said rate of said prediction error is obtained further based on said variance.
[15] In one embodiment, said adjusting said mesh is performed by quantization of said mesh.
[16] In one embodiment, a quantization step size is used to quantize said mesh, and wherein said rate of said prediction error is obtained further based on said quantization step size.
[17] In one embodiment, quantization is applied during forward computation of training and bypassed during backward propagation of training.
[18] In one embodiment, the method further comprises: decomposing an input mesh into a base mesh and a feature map, wherein said base mesh corresponds to said mesh. The method can further comprise extracting said feature map and said base mesh from said input mesh; generating another version of said base mesh from said entropy bottleneck layer; and decoding said another version of said mesh to form a decoded mesh.
[19] In one embodiment, said loss value is further based on a distortion between said input mesh and said decoded mesh.
[20] In one embodiment, the method comprises: obtaining a probability distribution parameter set based on said prediction errors, wherein said probability distribution parameter set is used to obtain said rate. The method can further comprise obtaining a hyperparameter set describing said probability distribution parameter set; and obtaining another rate based on said hyperparameter set.
[21] In one embodiment, said obtaining a probability distribution parameter set comprises: extracting features from said prediction errors; quantizing output from said extracting features to form said hyperparameter set; and applying a feature decoder to said hyperparameter set to obtain said parameter set.
[22] One or more embodiments also provide a computer program comprising instructions which when executed by one or more processors cause the one or more processors to perform the encoding method or decoding method according to any of the embodiments described herein. One or more of the present embodiments also provide a computer readable storage medium having
stored thereon instructions for encoding or decoding point cloud data according to the methods described herein.
[23] One or more embodiments also provide a computer readable storage medium having stored thereon point cloud data generated according to the methods described above. One or more embodiments also provide a method and apparatus for transmitting or receiving the point cloud data generated according to the methods described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[24] FIG. 1 illustrates a block diagram of a system within which aspects of the present embodiments may be implemented.
[25] FIG. 2 illustrates a diagram of a learning-based compression system.
[26] FIG. 3 illustrates a diagram to train a learning-based compression system.
[27] FIG. 4 illustrates a diagram of an entropy bottleneck layer.
[28] FIG. 5 illustrates a diagram of a learning-based mesh compression system.
[29] FIG. 6 illustrates a diagram of the coding of base mesh.
[30] FIG. 7 illustrates a diagram to train a learning-based mesh compression system with a proposed mesh entropy bottleneck layer, according to an embodiment.
[31] FIG. 8 illustrates a diagram of an entropy bottleneck layer for a mesh compression system (EN), according to an embodiment.
[32] FIG. 9 presents an illustration of parallelogram-based predictor generation, according to an embodiment.
[33] FIG. 10 illustrates a diagram of an entropy bottleneck layer for a mesh compression system (EQ), according to an embodiment.
[34] FIG. 11 illustrates a diagram of an entropy bottleneck layer for a mesh compression system (LN), according to an embodiment.
[35] FIG. 12 illustrates a diagram of an entropy bottleneck layer for a mesh compression system (LQ), according to an embodiment.
[36] FIG. 13 illustrates a diagram of an entropy bottleneck layer for a mesh compression system
with distribution parameter estimation, according to an embodiment.
[37] FIG. 14 illustrates a diagram of an entropy bottleneck layer for a mesh compression system with distribution parameter estimation, according to another embodiment.
[38] FIG. 15 illustrates a diagram of the design of EDP, according to an embodiment.
DETAILED DESCRIPTION
[39] FIG. 1 illustrates a block diagram of an example of a system in which various aspects and embodiments can be implemented. System 100 may be embodied as a device including the various components described below and is configured to perform one or more of the aspects described in this application. Examples of such devices include, but are not limited to, various electronic devices such as personal computers, laptop computers, smartphones, tablet computers, digital multimedia set top boxes, digital television receivers, personal video recording systems, connected home appliances, and servers. Elements of system 100, singly or in combination, may be embodied in a single integrated circuit, multiple ICs, and/or discrete components. For example, in at least one embodiment, the processing and encoder/decoder elements of system 100 are distributed across multiple ICs and/or discrete components. In various embodiments, the system 100 is communicatively coupled to other systems, or to other electronic devices, via, for example, a communications bus or through dedicated input and/or output ports. In various embodiments, the system 100 is configured to implement one or more of the aspects described in this application.
[40] The system 100 includes at least one processor 110 configured to execute instructions loaded therein for implementing, for example, the various aspects described in this application. Processor 110 may include embedded memory, input output interface, and various other circuitries as known in the art. The system 100 includes at least one memory 120 (e.g., a volatile memory device, and/or a non-volatile memory device). System 100 includes a storage device 140, which may include non-volatile memory and/or volatile memory, including, but not limited to, EEPROM, ROM, PROM, RAM, DRAM, SRAM, flash, magnetic disk drive, and/or optical disk drive. The storage device 140 may include an internal storage device, an attached storage device, and/or a network accessible storage device, as non-limiting examples.
[41] System 100 includes an encoder/decoder module 130 configured, for example, to process data to provide an encoded video or decoded video, and the encoder/decoder module 130 may
include its own processor and memory. The encoder/decoder module 130 represents module(s) that may be included in a device to perform the encoding and/or decoding functions. As is known, a device may include one or both of the encoding and decoding modules. Additionally, encoder/decoder module 130 may be implemented as a separate element of system 100 or may be incorporated within processor 110 as a combination of hardware and software as known to those skilled in the art.
[42] Program code to be loaded onto processor 110 or encoder/decoder 130 to perform the various aspects described in this application may be stored in storage device 140 and subsequently loaded onto memory 120 for execution by processor 110. In accordance with various embodiments, one or more of processor 110, memory 120, storage device 140, and encoder/decoder module 130 may store one or more of various items during the performance of the processes described in this application. Such stored items may include, but are not limited to, the input video, the decoded video or portions of the decoded video, the bitstream, matrices, variables, and intermediate or final results from the processing of equations, formulas, operations, and operational logic.
[43] In several embodiments, memory inside of the processor 110 and/or the encoder/decoder module 130 is used to store instructions and to provide working memory for processing that is needed during encoding or decoding. In other embodiments, however, a memory external to the processing device (for example, the processing device may be either the processor 110 or the encoder/decoder module 130) is used for one or more of these functions. The external memory may be the memory 120 and/or the storage device 140, for example, a dynamic volatile memory and/or a non-volatile flash memory. In several embodiments, an external non-volatile flash memory is used to store the operating system of a television. In at least one embodiment, a fast external dynamic volatile memory such as a RAM is used as working memory for video coding and decoding operations, such as for MPEG-2, JPEG Pleno, MPEG-I, V-DMC, HEVC, or VVC.
[44] The input to the elements of system 100 may be provided through various input devices as indicated in block 105. Such input devices include, but are not limited to, (i) an RF portion that receives an RF signal transmitted, for example, over the air by a broadcaster, (ii) a Composite input terminal, (iii) a USB input terminal, and/or (iv) an HDMI input terminal.
[45] In various embodiments, the input devices of block 105 have associated respective input processing elements as known in the art. For example, the RF portion may be associated with
elements suitable for (i) selecting a desired frequency (also referred to as selecting a signal, or band-limiting a signal to a band of frequencies), (ii) down converting the selected signal, (iii) bandlimiting again to a narrower band of frequencies to select (for example) a signal frequency band which may be referred to as a channel in certain embodiments, (iv) demodulating the down converted and band-limited signal, (v) performing error correction, and (vi) demultiplexing to select the desired stream of data packets. The RF portion of various embodiments includes one or more elements to perform these functions, for example, frequency selectors, signal selectors, bandlimiters, channel selectors, filters, downconverters, demodulators, error correctors, and demultiplexers. The RF portion may include a tuner that performs various of these functions, including, for example, down converting the received signal to a lower frequency (for example, an intermediate frequency or a near-baseband frequency) or to baseband. In one set-top box embodiment, the RF portion and its associated input processing element receives an RF signal transmitted over a wired (for example, cable) medium, and performs frequency selection by filtering, down converting, and filtering again to a desired frequency band. Various embodiments rearrange the order of the above-described (and other) elements, remove some of these elements, and/or add other elements performing similar or different functions. Adding elements may include inserting elements in between existing elements, for example, inserting amplifiers and an analog-to-digital converter. In various embodiments, the RF portion includes an antenna.
[46] Additionally, the USB and/or HDMI terminals may include respective interface processors for connecting system 100 to other electronic devices across USB and/or HDMI connections. It is to be understood that various aspects of input processing, for example, Reed-Solomon error correction, may be implemented, for example, within a separate input processing IC or within processor 110 as necessary. Similarly, aspects of USB or HDMI interface processing may be implemented within separate interface ICs or within processor 110 as necessary. The demodulated, error corrected, and demultiplexed stream is provided to various processing elements, including, for example, processor 110, and encoder/decoder 130 operating in combination with the memory and storage elements to process the datastream as necessary for presentation on an output device.
[47] Various elements of system 100 may be provided within an integrated housing. Within the integrated housing, the various elements may be interconnected and transmit data therebetween using suitable connection arrangement 115, for example, an internal bus as known in the art,
including the I2C bus, wiring, and printed circuit boards.
[48] The system 100 includes communication interface 150 that enables communication with other devices via communication channel 190. The communication interface 150 may include, but is not limited to, a transceiver configured to transmit and to receive data over communication channel 190. The communication interface 150 may include, but is not limited to, a modem or network card and the communication channel 190 may be implemented, for example, within a wired and/or a wireless medium.
[49] Data is streamed to the system 100, in various embodiments, using a Wi-Fi network such as IEEE 802.11. The Wi-Fi signal of these embodiments is received over the communications channel 190 and the communications interface 150 which are adapted for Wi-Fi communications. The communications channel 190 of these embodiments is typically connected to an access point or router that provides access to outside networks including the Internet for allowing streaming applications and other over-the-top communications. Other embodiments provide streamed data to the system 100 using a set-top box that delivers the data over the HDMI connection of the input block 105. Still other embodiments provide streamed data to the system 100 using the RF connection of the input block 105.
[50] The system 100 may provide an output signal to various output devices, including a display 165, speakers 175, and other peripheral devices 185. The other peripheral devices 185 include, in various examples of embodiments, one or more of a stand-alone DVR, a disk player, a stereo system, a lighting system, and other devices that provide a function based on the output of the system 100. In various embodiments, control signals are communicated between the system 100 and the display 165, speakers 175, or other peripheral devices 185 using signaling such as AV.Link, CEC, or other communications protocols that enable device-to-device control with or without user intervention. The output devices may be communicatively coupled to system 100 via dedicated connections through respective interfaces 160, 170, and 180. Alternatively, the output devices may be connected to system 100 using the communications channel 190 via the communications interface 150. The display 165 and speakers 175 may be integrated in a single unit with the other components of system 100 in an electronic device, for example, a television. In various embodiments, the display interface 160 includes a display driver, for example, a timing controller (T-Con) chip.
[51] The display 165 and speaker 175 may alternatively be separate from one or more of the other components, for example, if the RF portion of input 105 is part of a separate set-top box. In various embodiments in which the display 165 and speakers 175 are external components, the output signal may be provided via dedicated output connections, including, for example, HDMI ports, USB ports, or COMP outputs.
[52] Point Cloud / Mesh Use Cases
[53] The automotive industry and autonomous car are domains in which point clouds may be used. Autonomous cars should be able to “probe” their environment to make good driving decisions based on the reality of their immediate surroundings. Typical sensors like LiDARs produce (dynamic) point clouds that are used by the perception engine. These point clouds are not intended to be viewed by human eyes and they are typically sparse, not necessarily colored, and dynamic with a high frequency of capture. They may have other attributes like the reflectance ratio provided by the LiDAR as this attribute is indicative of the material of the sensed object and may help in making a decision.
[54] Virtual Reality (VR) and immersive worlds are foreseen by many as the future of 2D video. The basic idea is to immerse the viewers in an environment all around them as opposed to the standard TV where they can only look at the virtual world in the front. There are several gradations in the immersivity depending on the freedom of the viewer in the environment. Point cloud as well as mesh are good format candidates to distribute VR contents. They may be static or dynamic and are typically of moderate size, no more than millions of points at a time.
[55] Point clouds and meshes may also be used for various purposes such as cultural heritage and civil engineering, in which objects like statues or buildings are scanned in 3D in order to share the spatial configuration of the object without moving or visiting it. This is also a way to preserve knowledge of the object in case it is destroyed, for instance, a temple by an earthquake. Such point clouds and meshes are typically static, colored, and huge.
[56] World modeling and sensing via point clouds/meshes could be an essential technology to allow machines to gain knowledge about the 3D world around them, which is crucial for the applications discussed herein.
[57] 3D point cloud data are essentially discrete samples on the surfaces of objects or scenes.
Fully representing the real world with point samples requires, in practice, a huge number of points. For instance, a typical VR immersive scene contains millions of points, while larger-scale point clouds may contain hundreds of millions of points. Therefore, the processing of such large-scale point clouds is computationally expensive, especially for consumer devices, e.g., smartphones, tablets, and automotive navigation systems, that have limited computational power. Additionally, the discrete samples comprising the 3D point cloud data still contain incomplete information about the underlying surfaces of objects and scenes. Hence, efforts have recently been made to also explore mesh representations for 3D scene/surface representation.
[58] A mesh can be considered a 3D point cloud accompanied by connectivity information between the points. The mesh representation thus bridges the gap between point clouds and the underlying continuous surfaces by approximating the underlying surface with local 2D polygonal patches (called faces). Viewed another way, a mesh can simplify a point cloud by reducing the number of points. For example, a single face in a mesh may be adequate to represent a flat area, while a point cloud may require a dense block of points to represent the same flat area.
[59] The first step for any processing or inference on mesh data is efficient storage. To store and process the mesh at affordable computational cost, one solution is to first down-sample it, so that the down-sampled mesh summarizes the geometry of the input mesh while having far fewer (but bigger) faces. The down-sampled mesh is then fed to the subsequent machine task for further consumption. However, further reduction in storage space can be achieved by converting the raw mesh data (original or down-sampled) into a fixed-length codeword or a feature map living on a very low-resolution mesh. This codeword or feature map can be converted to a bitstream through existing entropy coding techniques. Moreover, the codeword or feature map is also useful by itself, as it represents global or local surface information, respectively, of the underlying scene/object and can be paired with subsequent downstream (machine vision) tasks.
[60] Learning-based approaches have been used for coding several types of data, for example, images, video, and point clouds. A typical learning-based compression system is depicted in FIG. 2. An input signal, either an image, a video, or a point cloud PC, is first sent to a feature extraction block (210) that is composed of learnable neural network layers. The generated feature map F is then entropy coded by an ENCF block (220). In the ENCF block, the feature map F is first rounded or quantized to integer values before being entropy coded into a bitstream BS. On the decoder side, the bitstream is entropy decoded by a DECF block (230). The decoded feature map F̂ is sent to a feature decoder (240) to reconstruct the input image, video, or point cloud, P̂C. Such an end-to-end compression system is referred to as a deep feature based codec.
[61] To enable the training of a neural network-based system, all blocks must be differentiable. Differentiability is required to update the network parameters during a procedure known as backward propagation. However, the rounding or quantization operation performed during entropy coding is not differentiable.
[62] To address the differentiability issue with entropy coding, an entropy bottleneck layer was introduced, as shown in FIG. 3, to estimate the entropy of the feature map to be coded. The entropy bottleneck layer (320, EB block) replaces ENCF and DECF in the compression system during the training stage. During the inference stage, ENCF and DECF are brought back for actual coding. The EB block takes the feature map generated by the feature extractor (310) as input. It first outputs a distorted feature map F̃, to which a certain amount of noise has been added. The EB block additionally outputs an estimated rate (entropy) for the feature map. The entropy bottleneck layer is differentiable by design. In the end, the training of the compression system is supervised by jointly counting the distortion between the input signal PC and the reconstructed signal P̂C output by the feature decoder (330), as well as the estimated rate required to code the feature map.
[63] In one example, the entropy bottleneck layer is designed as shown in FIG. 4. Uniform noise is injected (410) into the input feature map F. The noise level is controlled by an input parameter, e.g., a variance σ. A feature map F̃ distorted by the noise is output from the EB block. In parallel, the feature map F is sent to a DR (distribution-to-rate) block (420). Aside from F, the DR block additionally takes the distribution parameter set P and the variance σ as input, and outputs the estimated rate required to code the feature map F. Note that the distribution parameter set P models the probability distribution of the input feature map F; it is iteratively updated to minimize the rate during training. Note also that the two blocks (410, 420) are not structured as neural network layers, but as deterministic procedures. However, the distribution parameter set P contains parameters to be learned during training and is updated via backward propagation during the training stage.
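As an illustration of this design, the following is a minimal PyTorch sketch of an EB block, assuming a factorized Gaussian density whose learned mean and scale play the role of the parameter set P. The class name, the clamp constant, and treating σ as the width of the uniform noise interval are our assumptions for illustration, not taken from the source.

```python
# Minimal sketch of an entropy bottleneck (EB) block: noise injection
# (410) plus distribution-to-rate estimation (420), under a learned
# factorized Gaussian density.
import torch
import torch.nn as nn

class EntropyBottleneck(nn.Module):
    def __init__(self, channels: int, sigma: float = 1.0):
        super().__init__()
        self.sigma = sigma                                    # noise level
        self.mean = nn.Parameter(torch.zeros(channels))       # P: learned mean
        self.log_scale = nn.Parameter(torch.zeros(channels))  # P: learned scale

    def forward(self, f: torch.Tensor):
        # (410): inject uniform noise over an interval of width sigma
        f_tilde = f + (torch.rand_like(f) - 0.5) * self.sigma
        # (420) DR block: probability mass of that interval under the
        # learned density, accumulated into an estimated rate in bits
        d = torch.distributions.Normal(self.mean, self.log_scale.exp())
        q = d.cdf(f + self.sigma / 2) - d.cdf(f - self.sigma / 2)
        rate = -torch.log2(q.clamp_min(1e-9)).sum()
        return f_tilde, rate
```

During training, the returned rate would be added to the distortion term of the loss, so that both the network weights and the parameter set P are updated via backpropagation.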
[64] In the following, we propose entropy bottleneck layers to enable the training of learning-based mesh compression systems.
[65] A Typical Learning-based Mesh Compression System
[66] FIG. 5 illustrates a typical diagram of a learning-based mesh compression system that is composed of two branches. The main branch, which performs the feature coding function, is the same as the typical learning-based compression system in FIG. 2. In the main branch, the encoder extracts and aggregates features from the input mesh using a feature extraction block (510). The extracted feature map F is sent to an entropy coder ENCF (530). The feature map F̂ decoded by an entropy decoder DECF (550) is sent to a feature decoder (560) to reconstruct the mesh. Through the feature extraction/aggregation, the input mesh is typically downsampled to a base mesh M with a lower resolution for the benefit of compression. The feature map F is associated with the base mesh M.
[67] The second branch of the mesh compression system in FIG. 5 codes the base mesh M. It is composed of an ENCM block (520) for encoding, and a DECM block (540) for decoding. The encoding and decoding of the base mesh involve coding a list of faces. In the triangle mesh case, each face is represented by a list of three connected points, and each point is described by a 3D coordinate. In one example, we let M = (X, T), where X is the list of points, and T is the list of faces. Each point in a face is represented by its index in X. To code a base mesh, one needs to code the point list X and the face list T.
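For concreteness, a small instance of this representation is sketched below; the array shapes (N×3 floats for X, M×3 integer indices for T) are assumptions for illustration.

```python
# A concrete instance of the base-mesh representation M = (X, T):
# two triangles sharing an edge.
import numpy as np

X = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0]])   # point list X: one 3D coordinate per vertex
T = np.array([[0, 1, 2],
              [1, 3, 2]])         # face list T: each face indexes 3 points in X
```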
[68] FIG. 6 illustrates the diagram of coding the base mesh (or, in another embodiment, an original mesh). The pipeline is composed of two sub-branches. The first sub-branch is the coding of the face list T. It is simply done by an arithmetic encoder AET block (630) on the encoder side, and an arithmetic decoder ADT block (650) on the decoder side. The coded faces are part of the bitstream BSM.
[69] The second sub-branch is the coding of the point list X. Each point position in the list is quantized by the Qx block (610) before being arithmetically coded by the AEx block (620). The quantized point list is represented by X̂. The decoder first arithmetically decodes the bitstream with the ADx block (640); the result is then inverse quantized by the Q⁻¹x block (660). The coded points are part of the bitstream BSM.
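A hedged sketch of this sub-branch follows, with the arithmetic coder (AEx/ADx) stubbed out since it simply operates on the integer output of Qx; the function names and the step-size parameter qp are illustrative assumptions.

```python
# Sketch of the point-list sub-branch: Qx and Q-1x as uniform scalar
# quantization of each coordinate.
import numpy as np

def q_x(x: np.ndarray, qp: float) -> np.ndarray:
    # Qx (610): quantize each coordinate; output goes to AEx (620)
    return np.round(x / qp).astype(np.int64)

def q_inv_x(x_hat: np.ndarray, qp: float) -> np.ndarray:
    # Q-1x (660): inverse quantization after arithmetic decoding by ADx (640)
    return x_hat.astype(np.float64) * qp
```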
[70] Entropy Bottleneck for Mesh Coding
[71] As shown in FIG. 3, an entropy bottleneck layer is required to train the learning-based compression system of FIG. 2. In one embodiment, in order to train the learning-based mesh compression system of FIG. 5, we propose the diagram shown in FIG. 7.
[72] Corresponding to the two coding branches in FIG. 5, two entropy bottleneck layers are inserted between the feature extractor (710) and the feature decoder (740). A first entropy bottleneck layer EBF (730) functions the same as the entropy bottleneck layer EB in FIG. 3. It outputs the reconstructed feature map F̃ and an estimated rate RF for coding the feature map. The second, mesh entropy bottleneck layer EBM (720) is proposed to take care of the coding of the base mesh. The mesh entropy bottleneck layer takes a mesh as input, and outputs a reconstructed mesh M̃ and an estimated rate RM for the mesh. With the estimated rates RF and RM available, a loss function is defined as a weighted sum of the total rate (RF + RM) and a distortion measured between the input mesh and the decoded mesh. The training of the learning-based mesh codec is then supervised by this joint rate-distortion loss. Because the base mesh has a different data structure from a feature map, the existing entropy bottleneck layer cannot be applied to it directly.
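A minimal sketch of such a joint rate-distortion objective is given below; the rate weight (here called lambda_rd) and the use of a precomputed scalar distortion value are assumptions for illustration, not taken from the source.

```python
# Sketch of the joint rate-distortion loss: distortion between input and
# decoded mesh plus a weighted total estimated rate (RF + RM).
def rd_loss(distortion: float, rate_f: float, rate_m: float,
            lambda_rd: float = 0.01) -> float:
    return distortion + lambda_rd * (rate_f + rate_m)
```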
[73] FIG. 8 depicts the diagram of a proposed mesh entropy bottleneck layer, according to an embodiment. Note that the coding of the face list T is typically lossless, and it consumes a fixed bitrate given the same input base mesh. Hence there is no need to count the bitrate required to code the face list. In the proposed diagram, we only need to account for the coding of the point list.
[74] First, a noise adder block (810) is used to add uniform noise to the point list of the input mesh M = (X, T). It outputs a mesh M̃ = (X̃, T) distorted by the noise, where X̃ is the point list with adjusted positions.
[75] Second, the input mesh is serialized (820). The serialization organizes the mesh into Ms, a 1D sequence of points representing a coding order of the points. The serialization is implemented by a process known as edgebreaker. The edgebreaker serialization process works as a state machine. In each state, it moves from one triangle to an adjacent one. Having passed through to the adjacent triangle, the next triangle can only be to the left or to the right, since the remaining one is the triangle passed through in the current state. During this traversal, all visited triangles and bounding vertices are marked sequentially. Assuming that T is coded losslessly, the decoder can recover the exact same order. The organized sequence Ms also maintains a "prediction table" indicating which points are used as references to code the next point. It is worth noting that the "prediction table" can be virtual and generated on the fly as the edgebreaker runs over the face list T.
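A deliberately simplified stand-in for this serialization is sketched below; it is not a full edgebreaker. As an assumption for illustration, it walks the face list in its given order and records the three most recently coded vertices as the references in the prediction table, whereas a real edgebreaker follows triangle adjacency and emits traversal symbols. For the first vertices, fewer than three references are available, and a real codec would fall back to simpler predictors.

```python
# Simplified serialization sketch (not a full edgebreaker): produce a
# coding order over vertices plus a per-vertex "prediction table".
def serialize(faces):
    order, table, seen = [], {}, set()
    for tri in faces:              # assumes faces arrive adjacency-ordered
        for v in tri:
            if v not in seen:
                seen.add(v)
                table[v] = list(order[-3:])   # reference vertices (assumption)
                order.append(v)
    return order, table
```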
[76] For each point in the organized sequence Ms (with references available), the rate required to code it is estimated as follows. Three reference points are accessed as per the prediction table. In one embodiment, a predictor XP is computed by the prediction block (830) according to the following parallelogram equation:

XP = X0 + (X2 − X1)   (1)

This is further illustrated in FIG. 9. X0, X1, and X2 are the reference points, and XP is the predicted position, from which we compute (840) an error EN = XN − XP, where XN is the current point. Once predicted, the current triangle moves from X0X1X2 to triangle X0X2XN. The error is then sent to a DR block (850). Based on the estimated probability distribution parameter set P and the variance σ used to generate the noise, a rate RN to encode EN is estimated by the DR block (850). With the rates of all points estimated and accumulated, the total rate R required for the point list is output.
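A minimal sketch of the prediction and error computation of Eq. (1) follows, with NumPy arrays standing in for 3D point positions (an assumption for illustration).

```python
# Sketch of the prediction block (830) and error computation (840):
# parallelogram prediction from the three reference points, then the
# residual to be sent to the DR block (850).
import numpy as np

def parallelogram_residual(x_n: np.ndarray, x0: np.ndarray,
                           x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    x_p = x0 + (x2 - x1)   # Eq. (1): predicted position XP
    return x_n - x_p       # EN = XN - XP
```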
[77] In this document, this proposed mesh entropy bottleneck layer is referenced as embodiment EN (early split of the branches with noise adder).
[78] DR block
[79] The role of the DR block is to estimate the rate/entropy based on the distribution parameters. Suppose the error EN = [e1, e2, e3] consists of the errors to be coded along the x-, y-, and z-coordinates. The rate RN can then be estimated as the total entropy of the errors,

RN = − ∑i=1..3 log2 q(ei)   (2)

where q(ei) is the estimated probability mass of having an error centered at ei. It can be computed as the following integral:

q(ei) = ∫[ei − σ/2, ei + σ/2] D(x) dx   (3)

Here D(·) denotes the probability density function, which is parameterized by the parameter set P (to be learned during the training process via backpropagation). In one embodiment, the probability distribution D(·) can be a Gaussian distribution; in this case, the parameter set consists of two numbers: a mean and a variance.
[80] Note from FIG. 4 that when the noise variance σ increases, the quality of the output feature map F̃ degrades. However, the interval of integration in Eq. (3), i.e., [ei − σ/2, ei + σ/2], also gets larger. Thus, q(ei) becomes larger, leading to a smaller rate RN. Therefore, the entropy bottleneck layer can successfully model the tradeoff between rate and distortion.
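Under the Gaussian assumption mentioned above, Eqs. (2) and (3) admit a direct sketch: the integral in Eq. (3) becomes a difference of Gaussian CDFs. The clamp constant below is our assumption, added to avoid taking the logarithm of zero.

```python
# Sketch of the DR block under a Gaussian density: Eq. (3) as a CDF
# difference, Eq. (2) as a sum of negative log2 probability masses.
import torch

def dr_rate(e_n: torch.Tensor, mean: torch.Tensor,
            scale: torch.Tensor, sigma: float) -> torch.Tensor:
    # e_n: prediction error [e1, e2, e3]; (mean, scale): parameter set P
    d = torch.distributions.Normal(mean, scale)
    q = d.cdf(e_n + sigma / 2) - d.cdf(e_n - sigma / 2)   # Eq. (3)
    return -torch.log2(q.clamp_min(1e-9)).sum()           # Eq. (2)
```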
[81] More Embodiments of Proposed Entropy Bottleneck Layer
[82] In the following, we describe several variations of the mesh entropy bottleneck layer proposed above.
[83] A first embodiment is shown in FIG. 10. It is referenced as embodiment EQ (early split of the branches with quantization). Compared to embodiment EN as shown in FIG. 8, the noise adder (810) is replaced by a quantization block (1010) that performs both forward quantization and inverse quantization. Because of this change, the DR block takes as input the quantization step size QP instead of the noise variance σ when estimating the rate. Like the noise variance σ, QP is provided as another type of hyperparameter indicating the level of compression: a larger quantization step size (or noise variance σ) leads to greater compression and distortion. Clearly, the rate depends on the quantization step size (or noise variance σ).
[84] A second embodiment is shown in FIG. 11. It is referenced as embodiment LN (late split of the branches with noise adder). Compared to embodiment EN as shown in FIG. 8, the starting point of the rate estimation branch is moved from before the noise adder (1110) to after it. The rest of the processing diagram remains essentially the same, except that the distribution-to-rate estimation DR block no longer takes the noise variance σ as input. The derivation of the DR block, however, still follows the rationale described above with respect to FIG. 8.
[85] A third embodiment is shown in FIG. 12. It is referenced as embodiment LQ (late split of the branches with quantization). Its difference compared to embodiment LN is that the noise adder (1110) is changed to a quantization block (1210). This embodiment offers good performance with a simpler design.
[86] With embodiments EQ and LQ, a quantization block is introduced that is non-differentiable and imposes challenges for backward propagation during training. In the figures, we use a dashed line bypassing the quantization block Q. This means that during the forward computation of training, quantization is applied; during the backward propagation of training, the quantization step is bypassed.
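This dashed-line bypass can be read as a straight-through estimator. A minimal sketch follows; the function name and step size qp are our assumptions for illustration.

```python
# Straight-through quantization: quantize in the forward pass, but let
# gradients flow through unchanged in the backward pass.
import torch

def quantize_ste(x: torch.Tensor, qp: float) -> torch.Tensor:
    x_hat = torch.round(x / qp) * qp    # forward: quantized value
    return x + (x_hat - x).detach()     # backward: identity gradient w.r.t. x
```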
[87] Entropy Bottleneck Layer with Distribution Parameter Estimation Block
[88] In the following, we describe additional embodiments having an additional block to estimate distribution parameters (EDP) inside the proposed entropy bottleneck layer.
[89] The first embodiment is illustrated in FIG. 13. Different from FIG. 12, the parameter set P in FIG. 13 is an output of the EDP block (1310). Particularly, the EDP block takes the error EN as input and outputs the estimated parameter set P. In one embodiment, the EDP block consists of neural network layers similar to the feature extraction (FE) block, which digests the prediction error EN and generates the parameter set P.
[90] The second embodiment is provided in FIG. 14, which is a more advanced embodiment compared to FIG. 13. In this embodiment, the EDP block (1410) takes EN as input and outputs two sets of parameters, P and P″, where P″ is the hyperparameter set describing P. With P″, the parameter set P can be reproduced. The hyperparameter set P″ is additionally fed to another rate estimation block DR′ (1420) to estimate the rate of P″ (denoted as RP). The distribution parameter set used by DR′ to describe P″ is learned during training via backpropagation. In this case, the output rate R is the total of all the rates RN plus RP.
[91] It should be noted that while a quantization block is used to distort the input mesh in FIG. 13 and FIG. 14, a noise adder can be used in place of the quantization block.
[92] In one embodiment, the EDP block consists of three steps, as shown in FIG. 15. The first step applies neural network layers, such as a feature extractor (1510), to EN. Next, the feature extractor output is fed to a quantizer Q (1520), yielding the hyperparameter set P″. In the last step, another group of neural network layers, such as a feature decoder (1530), takes P″ as input and outputs the parameter set P.
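A hedged sketch of this three-step EDP block is given below, reusing quantize_ste from the earlier sketch. The layer widths, the use of two linear layers per stage, and the Gaussian (mean, log-scale) parameterization of P are assumptions for illustration, not taken from the source.

```python
# Sketch of the EDP block of FIG. 15: feature extractor (1510),
# quantizer (1520), feature decoder (1530).
import torch
import torch.nn as nn

class EDP(nn.Module):
    def __init__(self, dim: int = 3, hidden: int = 32):
        super().__init__()
        # (1510) feature extractor applied to the prediction error EN
        self.fe = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, hidden))
        # (1530) feature decoder mapping P'' back to the parameter set P
        self.fd = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                nn.Linear(hidden, 2 * dim))

    def forward(self, e_n: torch.Tensor):
        p_hyper = quantize_ste(self.fe(e_n), qp=1.0)          # (1520): P''
        mean, log_scale = self.fd(p_hyper).chunk(2, dim=-1)   # parameter set P
        return (mean, log_scale), p_hyper
```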
[93] Various numeric values are used in the present application. The specific values are for example purposes and the aspects described are not limited to these specific values.
[94] Various methods are described herein, and each of the methods comprises one or more steps or actions for achieving the described method. Unless a specific order of steps or actions is required for proper operation of the method, the order and/or use of specific steps and/or actions may be modified or combined. Additionally, terms such as “first”, “second”, etc. may be used in various embodiments to modify an element, component, step, operation, etc., such as, for example,
a “first decoding” and a “second decoding”. Use of such terms does not imply an ordering to the modified operations unless specifically required. So, in this example, the first decoding need not be performed before the second decoding, and may occur, for example, before, during, or in an overlapping time period with the second decoding.
[95] The implementations and aspects described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed may also be implemented in other forms (for example, an apparatus or program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, an apparatus, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, for example, computers, cell phones, portable/personal digital assistants (“PDAs”), and other devices that facilitate communication of information between end-users.
[96] Reference to “one embodiment” or “an embodiment” or “one implementation” or “an implementation”, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment” or “in one implementation” or “in an implementation”, as well any other variations, appearing in various places throughout this application are not necessarily all referring to the same embodiment.
[97] Additionally, this application may refer to “determining” various pieces of information. Determining the information may include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory.
[98] Further, this application may refer to “accessing” various pieces of information. Accessing the information may include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, moving the information, copying the information, calculating the information, determining the information, predicting the information, or estimating the information.
[99] Additionally, this application may refer to “receiving” various pieces of information. Receiving is, as with “accessing”, intended to be a broad term. Receiving the information may include one or more of, for example, accessing the information, or retrieving the information (for example, from memory). Further, “receiving” is typically involved, in one way or another, during operations, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
[100] It is to be appreciated that the use of any of the following “/”, “and/or”, and “at least one of’, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of “A, B, and/or C” and “at least one of A, B, and C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as is clear to one of ordinary skill in this and related arts, for as many items as are listed.
[101] As will be evident to one of ordinary skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry the bitstream of a described embodiment. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired or wireless links, as is known. The signal may be stored on a processor-readable medium.
Claims
1. A method for training a learning-based mesh compression system with an entropy bottleneck layer, comprising: obtaining a mesh; adjusting said mesh to obtain an adjusted mesh; performing a serialization on said mesh or the adjusted mesh to obtain an ordered list of vertices; for a vertex of said ordered list of vertices: predicting said vertex to obtain a predicted position, obtaining a prediction error between a position of said vertex and said predicted position, and obtaining a rate of encoding said prediction error based on a probability distribution parameter of said prediction error; obtaining a loss value based on said rate; and updating parameters for neural networks used to encode and decode said mesh, based on said loss value.
2. The method of claim 1, wherein said loss value is further based on a distortion between said mesh and a reconstructed version of said adjusted mesh.
3. The method of claim 1 or 2, wherein said obtaining a rate is performed for each vertex of a plurality of vertices in said ordered list of vertices.
4. The method of any one of claims 1-3, wherein said predicting said vertex is based on previous vertices in said ordered list of vertices.
5. The method of any one of claims 1-4, wherein said predicting said vertex is based on a prediction table for said ordered list of vertices, wherein said prediction table indicates which vertices are used as references for said vertex.
6. The method of any one of claims 1-5, wherein said serialization is based on an edgebreaker.
7. The method of any one of claims 1-6, wherein said adjusting said mesh is performed by adding noise to said mesh.
8. The method of claim 7, wherein a uniform noise with a variance is added as said noise, and wherein said rate of said prediction error is obtained further based on said variance.
9. The method of any one of claims 1-6, wherein said adjusting said mesh is performed by quantization of said mesh.
10. The method of claim 9, wherein a quantization step size is used to quantize said mesh, and wherein said rate of said prediction error is obtained further based on said quantization step size.
11. The method of claim 9 or 10, wherein quantization is applied during forward computation of training, and wherein quantization is bypassed during backward propagation of training.
12. The method of any one of claims 1-11, further comprising: decomposing an input mesh into a base mesh and a feature map, wherein said base mesh corresponds to said mesh.
13. The method of claim 12, further comprising: extracting said feature map and said base mesh from said input mesh; generating another version of said base mesh from said entropy bottleneck layer; and decoding said another version of said mesh to form a decoded mesh.
14. The method of claim 13, wherein said loss value is further based on a distortion between said input mesh and said decoded mesh.
15. The method of any one of claims 2-14, further comprising: obtaining a probability distribution parameter set based on said prediction errors, wherein said probability distribution parameter set is used to obtain said rate.
16. The method of claim 15, further comprising: obtaining a hyperparameter set describing said probability distribution parameter set; and obtaining another rate based on said hyperparameter set.
17. The method of claim 15 or 16, wherein said obtaining a probability distribution parameter set comprises: extracting features from said prediction errors; quantizing output from said extracting features to form said hyperparameter set; and applying a feature decoder to said hyperparameter set to obtain said probability distribution parameter set.
18. An apparatus, comprising one or more processors and at least one memory coupled to said one or more processors, wherein said one or more processors are configured to perform the method of any of claims 1-17.
19. A non-transitory computer readable medium comprising instructions which, when the instructions are executed by a computer, cause the computer to perform the method of any of claims 1-17.
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363595559P | 2023-11-02 | 2023-11-02 | |
| US63/595,559 | 2023-11-02 | | |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2025096197A1 | 2025-05-08 |