WO2025226697A1 - Method, apparatus, and medium for visual data processing - Google Patents
Method, apparatus, and medium for visual data processing
- Publication number
- WO2025226697A1 (PCT/US2025/025788)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- codestream
- size
- indication
- visual data
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
Definitions
- Embodiments of the present disclosure relate generally to visual data processing techniques, and more particularly, to neural network-based visual data coding.
- BACKGROUND [0002] The past decade has witnessed the rapid development of deep learning in a variety of areas, especially in computer vision and image processing. Neural networks were originally invented through interdisciplinary research between neuroscience and mathematics, and they have shown strong capabilities in the context of non-linear transform and classification. Neural network-based image/video compression technology has made significant progress during the past half decade.
- Embodiments of the present disclosure provide a solution for visual data processing.
- a method for visual data processing is proposed.
- the method comprises: performing a conversion between visual data and a codestream of the visual data with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding.
- NN neural network
- the first indication and the second indication are comprised in different portions of the codestream, or the first indication is independent from the size of the first portion of the codestream, or the first indication is coded with a fixed length coding.
- the proposed method can advantageously decouple the size of the first portion of the codestream from a value of the first indication. Thereby, a potential oscillation caused by a coupling relation between the size of the first portion of the codestream and the value of the first indication can be avoided, and thus the coding efficiency can be improved.
- an apparatus for visual data processing is proposed.
- the apparatus comprises a processor and a non-transitory memory with instructions thereon.
- the instructions upon execution by the processor, cause the processor to perform a method in accordance with the first aspect of the present disclosure.
- a non-transitory computer-readable storage medium is proposed.
- the non-transitory computer-readable storage medium stores instructions that cause a processor to perform a method in accordance with the first aspect of the present disclosure.
- another non-transitory computer-readable recording medium is proposed.
- the non-transitory computer-readable recording medium stores a codestream of visual data which is generated by a method performed by an apparatus for visual data processing.
- the method comprises: performing a conversion from the visual data to the codestream with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding.
- NN neural network
- the method comprises: performing a conversion from the visual data to the codestream with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream; and storing the codestream in a non- transitory computer-readable recording medium, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding.
- NN neural network
- Fig. 1A illustrates a block diagram of an example visual data coding system in accordance with some embodiments of the present disclosure
- Fig.1B is a schematic diagram illustrating an example transform coding scheme
- FIG. 2 illustrates example latent representations of an image
- FIG. 3 is a schematic diagram illustrating an example autoencoder implementing a hyperprior model
- Fig. 4 is a schematic diagram illustrating an example combined model configured to jointly optimize a context model along with a hyperprior and the autoencoder
- Fig.5 illustrates an example encoding process
- Fig.6 illustrates an example decoding process
- Fig. 7 illustrates an example decoding process according to some embodiments of the present disclosure
- Fig.8 illustrates an example learning-based image codec architecture
- Fig.9 illustrates an example synthesis transform for learning based image coding
- FIG. 10 illustrates an example leaky Rectified Linear Unit (ReLU) activation function
- FIG.11 illustrates an example ReLU activation function
- Fig.12 illustrates a flowchart of a method for visual data processing in accordance with some embodiments of the present disclosure
- Fig. 13 illustrates a block diagram of a computing device in which various embodiments of the present disclosure can be implemented.
- the same or similar reference numerals usually refer to the same or similar elements.
- Fig. 1A is a block diagram that illustrates an example visual data coding system 100 that may utilize the techniques of this disclosure.
- the visual data coding system 100 may include a source device 110 and a destination device 120.
- the source device 110 can be also referred to as a visual data encoding device, and the destination device 120 can be also referred to as a visual data decoding device.
- the source device 110 can be configured to generate encoded visual data and the destination device 120 can be configured to decode the encoded visual data generated by the source device 110.
- the source device 110 may include a visual data source 112, a visual data encoder 114, and an input/output (I/O) interface 116.
- the visual data source 112 may include a source such as a visual data capture device.
- the visual data capture device examples include, but are not limited to, an interface to receive visual data from a visual data provider, a computer graphics system for generating visual data, and/or a combination thereof.
- the visual data may comprise one or more pictures of a video or one or more images.
- the visual data encoder 114 encodes the visual data from the visual data source 112 to generate a bitstream.
- the bitstream may include a sequence of bits that form a coded representation of the visual data.
- the bitstream may include coded pictures and associated visual data.
- the coded picture is a coded representation of a picture.
- the associated visual data may include sequence parameter sets, picture parameter sets, and other syntax structures.
- the I/O interface 116 may include a modulator/demodulator and/or a transmitter.
- the encoded visual data may be transmitted directly to destination device 120 via the I/O interface 116 through the network 130A.
- the encoded visual data may also be stored onto a storage medium/server 130B for access by destination device 120.
- the destination device 120 may include an I/O interface 126, a visual data decoder 124, and a display device 122.
- the I/O interface 126 may include a receiver and/or a modem.
- the I/O interface 126 may acquire encoded visual data from the source device 110 or the storage medium/server 130B.
- the visual data decoder 124 may decode the encoded visual data.
- the display device 122 may display the decoded visual data to a user.
- the display device 122 may be integrated with the destination device 120, or may be external to the destination device 120 which is configured to interface with an external display device.
- the visual data encoder 114 and the visual data decoder 124 may operate according to a visual data coding standard, such as video coding standard or still picture coding standard and other current and/or further standards.
- a visual data coding standard such as video coding standard or still picture coding standard and other current and/or further standards.
- the term visual data processing encompasses visual data coding or compression, visual data decoding or decompression, and visual data transcoding in which visual data are converted from one compressed format into another compressed format or to a different compressed bitrate.
- NN neural network
- the present disclosure is related to neural network (NN)-based image and video coding. Specifically, it is related to a method of signaling the tool header of skip mode in support of regional accessibility, wherein regional accessibility refers to the capability of correctly decoding only a regional part of an image (also referred to as a picture) or a video.
- this disclosure is related to a neural network-based image and video compression method comprising modification of components of an image using convolution layers.
- the weights of the convolution layer are included in the bitstream.
- Image/video compression (also referred to as image/video coding) usually refers to the computing technology that compresses image/video into binary code to facilitate storage and transmission.
- the binary codes may or may not support losslessly reconstructing the original image/video, termed lossless compression and lossy compression, respectively.
- Neural network-based video compression comes in two flavors: neural network-based coding tools and end-to-end neural network-based video compression.
- the former is embedded into existing classical video codecs as coding tools and only serves as part of the framework, while the latter is a separate framework developed based on neural networks without depending on classical video codecs.
- a series of classical video coding standards have been developed to accommodate the increasing visual content.
- the international standardization organization ISO/IEC has two expert groups, namely the Joint Photographic Experts Group (JPEG) and the Moving Picture Experts Group (MPEG), and ITU-T also has its own Video Coding Experts Group (VCEG), which is responsible for the standardization of image/video coding technology.
- JPEG Joint Photographic Experts Group
- MPEG Moving Picture Experts Group
- VCEG Video Coding Experts Group
- VVC Versatile Video Coding
- Neural networks, also known as artificial neural networks (ANNs), are computational models used in machine learning technology which are usually composed of multiple processing layers, and each layer is composed of multiple simple but non-linear basic computational units.
- ANN artificial neural networks
- neural networks for image compression methods can be classified into two categories, i.e., pixel probability modeling and auto-encoder. The former belongs to the predictive coding strategy, while the latter is a transform-based solution. Sometimes, these two methods are combined together in the literature.
- following the pixel probability modeling strategy, the probability of an image x can be factorized in an autoregressive manner as p(x) = p(x_1) · p(x_2 | x_1) · ... · p(x_i | x_1, ..., x_{i−1}) · ..., where x_i is the i-th sample (pixel) in a predefined scan order.
- the condition may also take the sample values of other color components into consideration.
- for example, when coding RGB samples, the R sample is dependent on previously coded pixels (including R/G/B samples),
- the current G sample may be coded according to previously coded pixels and the current R sample,
- and when coding the current B sample, the previously coded pixels and the current R and G samples may also be taken into consideration.
- Neural networks were originally introduced for computer vision tasks and have been proven to be effective in regression and classification problems. Therefore, it has been proposed to use neural networks to estimate the probability p(x_i) of a pixel given its context x_1, x_2, ..., x_{i−1}. Most of the methods directly model the probability distribution in the pixel domain.
- Fig.1B illustrates a typical transform coding scheme.
- the original image x is transformed by the analysis network g_a to achieve the latent representation y.
- the latent representation y is quantized and compressed into bits.
- the number of bits R is used to measure the coding rate.
- the quantized latent representation ŷ is then inversely transformed by a synthesis network g_s to obtain the reconstructed image x̂.
- the distortion is calculated in a perceptual space by transforming x and x̂ with a perceptual function g_p.
- a practical image coding scheme needs to support variable rate, scalability, encoding/decoding speed, interoperability.
- the prototype auto-encoder for image compression is in Fig. 1B, which can be regarded as a transform coding strategy.
- since ŷ is discrete-valued, it can be losslessly compressed using entropy coding techniques such as arithmetic coding and transmitted as a sequence of bits.
- entropy coding techniques such as arithmetic coding
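- A minimal sketch of the Fig. 1B transform coding pipeline, with placeholder layer shapes and channel counts that are assumptions for illustration only (not the specific networks of this disclosure), is given below.

```python
import torch
import torch.nn as nn


class TransformCodingSketch(nn.Module):
    """Sketch of the Fig. 1B pipeline: analysis transform g_a, quantization Q,
    and synthesis transform g_s. Layer sizes are illustrative placeholders."""

    def __init__(self, channels=128):
        super().__init__()
        # Analysis transform g_a: image x -> latent representation y.
        self.g_a = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2),
        )
        # Synthesis transform g_s: quantized latent y_hat -> reconstruction x_hat.
        self.g_s = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):
        y = self.g_a(x)          # latent representation y
        y_hat = torch.round(y)   # quantization Q (rounding)
        # In a real codec, y_hat would be entropy coded into bits here (rate R).
        x_hat = self.g_s(y_hat)  # reconstructed image x_hat
        return x_hat, y_hat
```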
- as visualized in Fig. 2, there are significant spatial dependencies among the elements of ŷ. Notably, their scales (middle right image) appear to be coupled spatially.
- An additional set of random variables ẑ can be introduced to capture the spatial dependencies and to further reduce the redundancies.
- the image compression network is depicted in Fig.3.
- the left-hand side of the model is the encoder g_a and decoder g_s (explained in section 2.3.2).
- the right-hand side is the additional hyper encoder h_a and hyper decoder h_s networks that are used to obtain ẑ.
- the encoder subjects the input image x to g_a, yielding the responses y with spatially varying standard deviations.
- the responses y are fed into h_a, summarizing the distribution of standard deviations in z.
- z is then quantized (ẑ), compressed, and transmitted as side information.
- the encoder uses the quantized vector ẑ to estimate σ, the spatial distribution of standard deviations, and uses it to compress and transmit the quantized image representation ŷ.
- the decoder first recovers ẑ from the compressed signal. It then uses h_s to obtain σ, which provides it with the correct probability estimates to successfully recover ŷ as well.
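- A minimal sketch of the hyperprior side branch of Fig. 3 is shown below; the layer shapes, the exponential parameterization of σ, and the Gaussian bit estimate standing in for the arithmetic coder are assumptions for illustration, not the exact networks of this disclosure.

```python
import torch
import torch.nn as nn


class HyperpriorSketch(nn.Module):
    """Sketch of the hyperprior branch: h_a summarizes the latent y into z, and
    h_s maps the quantized z_hat back to the spatial distribution of standard
    deviations sigma that is used to entropy-code y_hat."""

    def __init__(self, channels=128):
        super().__init__()
        self.h_a = nn.Sequential(  # hyper encoder
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1),
        )
        self.h_s = nn.Sequential(  # hyper decoder
            nn.ConvTranspose2d(channels, channels, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, channels, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, y):
        z = self.h_a(y)                     # hyper latent z (side information)
        z_hat = torch.round(z)              # quantized hyper latent z_hat
        sigma = torch.exp(self.h_s(z_hat))  # per-sample standard deviations
        y_hat = torch.round(y)
        # Rate estimate for y_hat under a zero-mean Gaussian with scale sigma;
        # this stands in for the arithmetic coder in the sketch.
        gauss = torch.distributions.Normal(0.0, sigma)
        p = gauss.cdf(y_hat + 0.5) - gauss.cdf(y_hat - 0.5)
        bits = -torch.log2(p.clamp_min(1e-9)).sum()
        return y_hat, z_hat, bits
```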
- Fig. 2 Left: an image from the Kodak dataset. Middle left: visualization of a latent representation y of that image. Middle right: standard deviations σ of the latent. Right: latents y after the hyper prior (hyper encoder and decoder) network is introduced.
- Fig. 3 illustrates Network architecture of an autoencoder implementing the hyperprior model.
- the left side shows an image autoencoder network, the right side corresponds to the hyperprior subnetwork.
- the analysis and synthesis transforms are denoted as g_a and g_s.
- Q represents quantization
- AE, AD represent arithmetic encoder and arithmetic decoder, respectively.
- the hyperprior model consists of two subnetworks, the hyper encoder (denoted with h_a) and the hyper decoder (denoted with h_s).
- the hyper prior model generates a quantized hyper latent (ẑ) which comprises information about the probability distribution of the samples of the quantized latent ŷ.
- ẑ is included in the bitstream and transmitted to the receiver (decoder) along with ŷ.
- Context Model an autoregressive model that predicts quantized latents from their causal context.
- auto-regressive means that the output of a process is later used as input to it.
- the context model subnetwork generates one sample of a latent, which is later used as input to obtain the next sample.
- Fig. 4 is a schematic diagram illustrating an example combined model configured to jointly optimize a context model along with a hyperprior and the autoencoder. The following table illustrates meaning of different symbols.
- a joint architecture can be utilized where both hyper prior model subnetwork (hyper encoder and hyper decoder) and a context model subnetwork are utilized.
- the hyper prior and the context model are combined to learn a probabilistic model over the quantized latents ŷ, which is then used for entropy coding.
- the outputs of the context subnetwork and the hyper decoder subnetwork are combined by the subnetwork called Entropy Parameters, which generates the mean μ and scale (or variance) σ parameters for a Gaussian probability model.
- the Gaussian probability model is then used to encode the samples of the quantized latents into the bitstream with the help of the arithmetic encoder (AE) module.
- AE arithmetic encoder
- Fig. 4 illustrates that the combined model jointly optimizes an autoregressive component that estimates the probability distributions of latents from their causal context (Context Model) along with a hyperprior and the underlying autoencoder.
- Real-valued latent representations are quantized (Q) to create quantized latents (ŷ) and quantized hyper-latents (ẑ), which are compressed into a bitstream using an arithmetic encoder (AE) and decompressed by an arithmetic decoder (AD).
- the highlighted region corresponds to the components that are executed by the receiver (i.e., the decoder).
- the latent samples are modeled with a Gaussian distribution or Gaussian mixture models (but are not limited to these).
- the context model and hyper prior are jointly used to estimate the probability distribution of the latent samples. Since a Gaussian distribution can be defined by a mean and a variance (aka sigma or scale), the joint model is used to estimate the mean and the variance (denoted as μ and σ).
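- The sketch below illustrates how the Entropy Parameters subnetwork of Fig. 4 might combine the context model output with the hyper decoder output to produce μ and σ; the masked convolution used as the context model and all channel counts are assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn


class MaskedConv2d(nn.Conv2d):
    """Causal (masked) convolution: each sample only sees previously decoded
    samples in raster scan order, as required by the context model."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        mask = torch.ones_like(self.weight)
        _, _, kh, kw = self.weight.shape
        mask[:, :, kh // 2, kw // 2:] = 0  # current sample and samples to its right
        mask[:, :, kh // 2 + 1:, :] = 0    # samples in the rows below
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask
        return super().forward(x)


channels = 128
context_model = MaskedConv2d(channels, 2 * channels, 5, padding=2)
entropy_parameters = nn.Sequential(
    nn.Conv2d(4 * channels, 2 * channels, 1), nn.ReLU(),
    nn.Conv2d(2 * channels, 2 * channels, 1),
)

y_hat = torch.randn(1, channels, 16, 16).round()   # quantized latent (example data)
hyper_out = torch.randn(1, 2 * channels, 16, 16)   # hyper decoder output (example data)
ctx_out = context_model(y_hat)                     # causal context features
mu, sigma = entropy_parameters(torch.cat([ctx_out, hyper_out], dim=1)).chunk(2, dim=1)
sigma = torch.exp(sigma)                           # ensure a positive scale parameter
```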
- G-VAE Gained variational autoencoders
- neural network-based image/video compression methodologies need to train multiple models to adapt to different rates.
- A gained variational autoencoder is a variational autoencoder with a pair of gain units, which is designed to achieve continuously variable rate adaptation using a single model. It comprises a pair of gain units, which are typically inserted at the output of the encoder and the input of the decoder.
- the output of the encoder is defined as the latent representation y of size c × h × w, where c, h, w represent the number of channels, the height and the width of the latent representation.
- a pair of gain units includes a gain matrix M, consisting of n gain vectors, and an inverse gain matrix, where n is the number of gain vectors.
- the motivation of gain matrix is similar to the quantization table in JPEG by controlling the quantization loss based on the characteristics of different channels. To apply the gain matrix to the latent representation, each channel is multiplied with the corresponding value in a gain vector.
- the inverse gain matrix used at the decoder side can be denoted as M′, which consists of n inverse gain vectors.
- to achieve an arbitrary rate, interpolation is used between vectors. Given two pairs of gain vectors (m_s, m′_s) and (m_t, m′_t), an interpolated gain vector pair can be obtained via interpolation between them.
- l is an interpolation coefficient, which controls the corresponding bit rate of the generated gain vector pair. Since l is a real number, an arbitrary bit rate between the given two gain vector pairs can be achieved.
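- Since the interpolation equations themselves are not reproduced above, the sketch below uses the exponential interpolation commonly associated with gained variational autoencoders; the exact form may differ.

```python
import torch


def interpolate_gain(m_s, m_inv_s, m_t, m_inv_t, l):
    """Continuous-rate interpolation between two gain vector pairs
    (m_s, m_inv_s) and (m_t, m_inv_t) with coefficient l in [0, 1].
    The exponential form is an assumption based on the typical
    gained-VAE formulation."""
    m_v = m_s.pow(l) * m_t.pow(1.0 - l)               # interpolated gain vector
    m_inv_v = m_inv_s.pow(l) * m_inv_t.pow(1.0 - l)   # interpolated inverse gain vector
    return m_v, m_inv_v


def apply_gain(y, gain_vector):
    """Apply a gain vector to a latent y of shape [batch, channels, h, w]:
    every channel is multiplied by the corresponding gain value
    (cf. the quantization-table analogy above)."""
    return y * gain_vector.view(1, -1, 1, 1)
```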
- 2.3.6 The encoding process using the joint auto-regressive hyper prior model: Fig. 4 corresponds to the state-of-the-art compression method. In this section and the next, the encoding and decoding processes will be described separately.
- the Fig. 5 depicts the encoding process.
- the input image is first processed with an encoder subnetwork.
- the encoder transforms the input image into a transformed representation called the latent, denoted by y.
- y is then input to a quantizer block, denoted by Q, to obtain the quantized latent (ŷ).
- ŷ is then converted to a bitstream (bits1) using an arithmetic encoding module (denoted AE).
- AE arithmetic encoding module
- the arithmetic encoding block converts each sample of ŷ into a bitstream (bits1) one by one, in a sequential order.
- the modules hyper encoder, context, hyper decoder, and entropy parameters subnetworks are used to estimate the probability distributions of the samples of the quantized latent ŷ.
- the latent y is input to the hyper encoder, which outputs the hyper latent (denoted by z).
- the hyper latent is then quantized (ẑ) and a second bitstream (bits2) is generated using the arithmetic encoding (AE) module.
- AE arithmetic encoding
- the factorized entropy module generates the probability distribution, that is used to encode the quantized hyper latent into bitstream.
- the quantized hyper latent includes information about the probability distribution of the quantized latent (ŷ).
- the Entropy Parameters subnetwork generates the probability distribution estimations that are used to encode the quantized latent ŷ.
- the information that is generated by the Entropy Parameters typically includes a mean μ and a scale (or variance) σ parameter, which are together used to obtain a Gaussian probability distribution.
- the mean and the variance need to be determined.
- the entropy parameters module are used to estimate the mean and the variance values.
- the subnetwork hyper decoder generates part of the information that is used by the entropy parameters subnetwork, the other part of the information is generated by the autoregressive module called context module.
- the context module generates information about the probability distribution of a sample of the quantized latent, using the samples that are already encoded by the arithmetic encoding (AE) module.
- the quantized latent ŷ is typically a matrix composed of many samples. The samples can be indicated using indices, such as ŷ[i,j,k] or ŷ[i,j] depending on the dimensions of the matrix ŷ.
- the samples ŷ[i,j] are encoded by the AE one by one, typically using a raster scan order.
- In a raster scan order the rows of a matrix are processed from top to bottom, wherein the samples in a row are processed from left to right.
- In such a scenario (wherein the raster scan order is used by the AE to encode the samples into the bitstream), the context module generates the information pertaining to a sample ŷ[i,j] using the samples encoded before, in raster scan order.
- the information generated by the context module and the hyper decoder is combined by the entropy parameters module to generate the probability distributions that are used to encode the quantized latent ŷ into the bitstream (bits1).
- bits1 bitstream
- Fig. 5: The analysis transform that converts the input image into the latent representation is also called an encoder (or auto-encoder). 2.3.7 The decoding process using the joint auto-regressive hyper prior model
- Fig. 6 depicts the decoding process separately.
- the decoder first receives the first bitstream (bits1) and the second bitstream (bits2) that are generated by a corresponding encoder.
- the bits2 is first decoded by the arithmetic decoding (AD) module by utilizing the probability distributions generated by the factorized entropy subnetwork.
- AD arithmetic decoding
- the factorized entropy module typically generates the probability distributions using a predetermined template, for example using predetermined mean and variance values in the case of gaussian distribution.
- the output of the arithmetic decoding process of the bits2 is ẑ, which is the quantized hyper latent.
- the AD process reverses the AE process that was applied in the encoder.
- the processes of AE and AD are lossless, meaning that the quantized hyper latent ẑ that was generated by the encoder can be reconstructed at the decoder without any change. After ẑ is obtained, it is processed by the hyper decoder, whose output is fed to the entropy parameters module.
- the three subnetworks, context, hyper decoder and entropy parameters, that are employed in the decoder are identical to the ones in the encoder. Therefore the exact same probability distributions can be obtained in the decoder (as in the encoder), which is essential for reconstructing the quantized latent ŷ without any loss. As a result the identical version of the quantized latent ŷ that was obtained in the encoder can be obtained in the decoder.
- the arithmetic decoding module decodes the samples of the quantized latent one by one from the bitstream bits1.
- autoregressive model (the context model) is inherently serial, and therefore cannot be sped up using techniques such as parallelization.
- the fully reconstructed quantized latent ŷ is input to the synthesis transform (denoted as decoder in Fig. 6) module to obtain the reconstructed image.
- all of the elements in Fig. 6 are collectively called the decoder.
- the synthesis transform that converts the quantized latent into reconstructed image is also called a decoder (or auto-decoder).
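- The inherently serial decoding described above can be sketched as follows; decode_sample and mu_from_context are hypothetical placeholders standing in for the arithmetic decoder and the context plus entropy-parameters subnetworks, respectively.

```python
import torch


def autoregressive_decode(decode_sample, mu_from_context, channels, height, width):
    """Sketch of the serial decoding loop: each sample of the quantized latent
    is reconstructed one by one in raster scan order, because the context model
    needs the previously decoded samples before the next one can be decoded."""
    y_hat = torch.zeros(1, channels, height, width)
    for i in range(height):        # rows, top to bottom
        for j in range(width):     # samples in a row, left to right
            mu = mu_from_context(y_hat, i, j)   # uses only already-decoded samples
            residual = decode_sample(i, j)      # entropy-decoded residual sample
            y_hat[:, :, i, j] = residual + mu   # reconstruct the current sample
    return y_hat
```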
- neural image compression serves as the foundation of intra compression in neural network-based video compression, thus the development of neural network-based video compression technology came later than that of neural network-based image compression, but it needs far more effort to solve the challenges due to its complexity.
- Since 2017, a few researchers have been working on neural network-based video compression schemes.
- video compression needs efficient methods to remove inter- picture redundancy.
- Inter-picture prediction is then a crucial step in these works.
- Motion estimation and compensation is widely adopted but is not implemented by trained neural networks until recently.
- Studies on neural network-based video compression can be divided into two categories according to the targeted scenarios: random access and low latency.
- An uncompressed grayscale digital image is represented with 8 bits per pixel (bpp), while its compressed representation requires substantially fewer bits.
- a color image is typically represented in multiple channels to record the color information. For example, in the RGB color space an image can be denoted with three separate channels storing the Red, Green and Blue information. Similar to the 8-bit grayscale image, an uncompressed 8-bit RGB image has 24 bpp.
- Digital images/videos can be represented in different color spaces.
- the neural network-based video compression schemes are mostly developed in RGB color space while the traditional codecs typically use YUV color space to represent the video sequences.
- YUV color space an image is decomposed into three channels, namely Y, Cb and Cr, where Y is the luminance component and Cb/Cr are the chroma components.
- Cb and Cr are typically down sampled to achieve pre-compression since human vision system is less sensitive to chroma components.
- a color video sequence is composed of multiple color images, called frames, to record scenes at different timestamps.
- the lossless methods can achieve a compression ratio of about 1.5 to 3 for natural images, which is clearly below practical requirements. Therefore, lossy compression is developed to achieve a further compression ratio, but at the cost of incurred distortion.
- the distortion can be measured by calculating the average squared difference between the original image and the reconstructed image, i.e., mean-squared-error (MSE).
- MSE mean-squared-error
- MSE can be calculated with the following equation.
- MSE = (1 / (m × n)) · Σ_{i=1}^{m} Σ_{j=1}^{n} (x(i, j) − x̂(i, j))²    (4), where the image resolution is m × n, x is the original image, and x̂ is the reconstructed image.
- PSNR peak signal-to-noise ratio
- where max(x) is the maximal value in x, e.g., 255 for 8-bit grayscale images, and the PSNR is computed as PSNR = 10 · log10(max(x)² / MSE).
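- A minimal implementation of the MSE and PSNR metrics defined above might look as follows.

```python
import numpy as np


def mse(x, x_hat):
    """Mean squared error between the original and the reconstructed image."""
    return np.mean((x.astype(np.float64) - x_hat.astype(np.float64)) ** 2)


def psnr(x, x_hat, max_value=255.0):
    """Peak signal-to-noise ratio in dB; max_value is 255 for 8-bit images."""
    return 10.0 * np.log10(max_value ** 2 / mse(x, x_hat))
```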
- there are other quality evaluation metrics such as structural similarity (SSIM) and multi-scale SSIM (MS-SSIM).
- SSIM structural similarity
- MS- SSIM multi-scale SSIM
- BD-rate Bjontegaard delta- rate
- the luma component of the image is processed by the subnetworks “Synthesis”, “Prediction fusion”, “Mask Conv”, “Hyper Decoder”, “Hyper scale decoder” etc.
- the chroma components are processed by the subnetworks: “Synthesis UV”, “Prediction fusion UV”, “Mask Conv UV”, “Hyper Decoder UV”, “Hyper scale decoder UV” etc.
- a benefit of the above separate processing is that the computational complexity of the processing of an image is reduced by application of separate processing. Typically in neural network based image and video decoding, the computational complexity is proportional to the square of the number of feature maps.
- the factorized entropy model is used to decode the quantized latents for luma and chroma in Fig. 7. 2.
- the probability parameters (e.g. variance) generated by the second network are used to generate a quantized residual latent by performing the arithmetic decoding process.
- the quantized residual latent is inversely gained with the inverse gain unit (iGain) as shown in orange color in Fig. 7.
- the outputs of the inverse gain units are the gained quantized residual latents for the luma and chroma components, respectively. 4.
- For the luma component the following steps are performed in a loop until all elements of the quantized latent ŷ are obtained: a.
- a first subnetwork is used to estimate a mean value parameter of the quantized latent (ŷ), using the already obtained samples of ŷ. b.
- the quantized residual latent and the mean value are used to obtain the next element of ŷ. 5.
- a synthesis transform can be applied to obtain the reconstructed image. 6.
- step 4 and 5 are the same but with a separate set of networks.
- the decoded luma component is used as additional information to obtain the chroma com- ponent.
- the Inter Channel Correlation Information filter sub-network ICCI
- ICCI Inter Channel Correlation Information filter sub-network
- the luma is fed into the ICCI sub-network as additional information to assist the chroma component decoding.
- Adaptive color transform (ACT) is performed after the luma and chroma components are reconstructed.
- the module named ICCI is a neural-network based postprocessing module.
- the example embodiments of the present disclosure are not limited to the ICCI subnetwork; any other neural network based postprocessing module might also be used.
- An exemplary implementation of some example embodiments of the present disclosure is depicted in Fig. 7 (the decoding process).
- the framework comprises two branches for the luma and chroma components, respectively. In each branch, the first subnetwork comprises the context, prediction and optionally the hyper decoder modules.
- the second network comprises the hyper scale decoder module.
- the quantized hyper latents are obtained for the luma and chroma branches, respectively.
- the arithmetic decoding process generates the quantized residual latents, which are further fed into the iGain units to obtain the gained quantized residual latents for the luma and chroma components.
- a recursive prediction operation is performed to obtain the luma and chroma latents.
- the following steps describe how to obtain the samples of the luma latent ŷ[:, m, n], and the chroma component is processed in the same way but with different networks. 1.
- An autoregressive context module is used to generate the first input of a prediction module using the samples ŷ[:, m, n], where the (m, n) pairs are the indices of the samples of the latent that are already obtained.
- the second input of the prediction module is obtained by using a hyper decoder and a quantized hyper latent ẑ. 3.
- using the first input and the second input, the prediction module generates the mean value μ[:, m, n]. 4.
- Whether to and/or how to apply at least one method disclosed in the document may be signaled from the encoder to the decoder, e.g. in the bitstream. Alternatively, whether to and/or how to apply at least one method disclosed in the document may be determined by the decoder based on coding information, such as dimensions, color format, etc.
- the modules named MS1, MS2 or MS3+O might be included in the processing flow. The said modules might perform an operation on their input by multiplying the input with a scalar or by adding an additive component to the input to obtain the output. The scalar or the additive component that is used by the said modules might be indicated in a bitstream.
- the module named RD or the module named AD in Fig.7 might be an entropy decoding module. It might be a range decoder or an arithmetic decoder or the like.
- the example embodiments of the present disclosure described herein are not limited to the specific combination of the units exemplified in Fig.7. Some of the modules might be missing and some of the modules might be displaced in processing order. Also additional modules might be included. For example: 1.
- the ICCI module might be removed. In that case the output of the Synthesis module and the Synthesis UV module might be combined by means of another module, that might be based on neural networks. 2.
- One or more of the modules named MS1, MS2 or MS3+O might be removed.
- the core of the proposed solution is not affected by the removing of one or more of the said scaling and adding modules.
- In Fig. 7, other operations that are performed during the processing of the luma and chroma components are also indicated using the star symbol. These processes are denoted as MS1, MS2, MS3+O.
- These processes might be, but are not limited to, adaptive quantization, latent sample scaling, and latent sample offsetting operations.
- an adaptive quantization process might correspond to scaling of a sample with multiplier before the prediction process, wherein the multiplier is predefined or whose value is indicated in the bitstream.
- the latent scaling process might correspond to the process where a sample is scaled with a multiplier after the prediction process, wherein the value of the multiplier is either predefined or indicated in the bitstream.
- the offsetting operation might correspond to adding an additive element to the sample, again wherein the value of the additive element might be indicated in the bitstream or inferred or predetermined.
- Another operation might be tiling operation, wherein samples are first tiled (grouped) into overlapping or non-overlapping regions, wherein each region is processed independently. For example the samples corresponding to the luma component might be divided into tiles with a tile height of 20 samples, whereas the chroma components might be divided into tiles with a tile height of 10 samples for processing.
- Another operation might be application of wavefront parallel processing.
- a number of samples might be processed in parallel, and the amount of samples that can be processed in parallel might be indicated by a control parameter.
- the said control parameter might be indicated in the bitstream, be inferred, or can be predetermined.
- the number of samples that can be processed in parallel might be different, hence different indicators can be signalled in the bitstream to control the operation of luma and chroma processing separately.
- Color separation and conditional coding: In one example the primary and secondary color components of an image are coded separately, using networks with a similar architecture but a different number of channels, as shown in Fig. 8. All boxes with the same names are sub-networks with a similar architecture; only the input-output tensor size and the number of channels are different.
- the vertical arrows (with arrowhead pointing downwards) indicate data flow related to secondary color components coding. Vertical arrows show data exchange between primary and secondary components pipelines.
- the input signal to be encoded is denoted as x, and the latent space tensor in the bottleneck of the variational auto-encoder is y.
- Subscript “Y” indicates primary component
- subscript “UV” is used for the concatenated secondary components, which are the chroma components.
- Fig.8 illustrates learning-based image codec architecture. First the input image that has RGB color format is converted to primary (Y) and secondary components(UV).
- the primary component x_Y is coded independently from the secondary components x_UV, and the coded picture size is equal to the input/decoded picture size.
- the secondary components are coded conditionally, using information from the primary component as auxiliary information for encoding x_UV, and using the primary latent tensor as auxiliary information from the primary component for decoding the x_UV reconstruction.
- the codec structures for the primary component and the secondary components are almost identical except for the number of channels, the size of the channels, and the separate entropy models for transforming the latent tensor to a bitstream; therefore the primary and secondary latent tensors will generate two different bitstreams based on two different entropy models.
- Prior to encoding x_UV, x_UV goes through a module which adjusts the sample locations by down-sampling (marked as “s↓” in Fig. 1); this essentially means that the coded picture size for the secondary components is different from the coded picture size for the primary component.
- the size of the auxiliary input tensor in conditional coding is adjusted so that the encoder receives primary and secondary component tensors with the same picture size.
- after decoding, the secondary components are rescaled to the original picture size with a neural-network based upsampling filter module (“NN-color filter s↑” in Fig. 1), which outputs the secondary components up-sampled by a factor of s.
- a neural-network based upsampling filter module (“NN-color filter s↑” in Fig. 1)
- FIG. 9 illustrates synthesis transform example for learning based image coding.
- the example synthesis transform above includes a sequence of 4 convolutions with up-sampling with stride of 2.
- the synthesis transform sub-Net is depicted on Fig.9.
- the sizes of the tensors in different parts of the synthesis transform before the cropping layer are shown in the diagram in Fig. 9.
- the scale factor might be 2 for example, wherein the secondary component is downsampled by a factor of 2.
- the output of this cropping layer must be equal to H, W (the output size); if the size of the input of this cropping layer is greater than H or W in the horizontal or vertical dimension respectively, cropping needs to be performed in that dimension.
- the second cropping layer counting from left to right has a depth of 1.
- the operation of cropping layers are controlled by the output size H,W. In one example if H and W are both equal to 16, then the cropping layers do not perform any cropping.
- bitwise shifting can be represented using a function bitshift(x, n), where n is an integer number. If n is greater than 0, it corresponds to the right-shift operator (>>), which moves the bits of the input to the right; if n is smaller than 0, it corresponds to the left-shift operator (<<), which moves the bits to the left.
- the output of the bitshift operation is an integer value.
- the floor() function might be added to the definition.
- Floor( x ) is equal to the largest integer less than or equal to x.
- x >> y Arithmetic right shift of a two's complement integer representation of x by y binary digits. This function is defined only for non-negative integer values of y. Bits shifted into the most significant bits (MSBs) as a result of the right shift have a value equal to the MSB of x prior to the shift operation. x << y Arithmetic left shift of a two's complement integer representation of x by y binary digits. This function is defined only for non-negative integer values of y. Bits shifted into the least significant bits (LSBs) as a result of the left shift have a value equal to 0.
- MSBs most significant bits
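- A minimal sketch of the bitwise shifting behaviour described above is given below; the function name bitshift is an assumption, since the original symbol is not legible in the text.

```python
def bitshift(x, n):
    """If n > 0, perform an arithmetic right shift (>>) by n binary digits;
    if n < 0, perform a left shift (<<) by -n binary digits.
    The result is always an integer."""
    if n >= 0:
        return x >> n
    return x << (-n)


# For non-negative n, the right shift matches the Floor() definition above:
# bitshift(x, n) == Floor(x / 2 ** n)
assert bitshift(13, 2) == 13 // 4 == 3
assert bitshift(13, -2) == 13 * 4 == 52
```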
- the convolution is a fairly simple operation at heart: you start with a kernel, which is simply a small matrix of weights. This kernel “slides” over the input data, performing an elementwise multiplication with the part of the input it is currently on, and then summing up the results into a single output pixel.
- the convolution operation might comprise a “bias”, which is added to the output of the elementwise multiplication operation.
- the convolution operation might be described by the following mathematical formula.
- An output out1 can be obtained as out1[x, y] = K1 + Σ_k Σ_{i=0}^{N−1} Σ_{j=0}^{P−1} w1[k, i, j] · in_k[x + i, y + j], wherein w1 are the multiplication factors, K1 is called a bias (an additive term), in_k is the k-th input, N is the kernel size in one direction and P is the kernel size in another direction.
- the convolution layer might consist of convolution operations wherein more than one output might be generated. An equivalent depiction of the convolution operation with multiple outputs is out[c, x, y] = K[c] + Σ_k Σ_i Σ_j w[c, k, i, j] · in[k, x + i, y + j]. In the above equation “c” indicates the channel number; it is equivalent to the output number, out[1, x, y] is one output and out[2, x, y] is a second output. The k is the input number, in[1, x, y] is one input and in[2, x, y] is a second input. The w1, or w, describe the weights of the convolution operation. 2.11 Leaky ReLU activation function: The leaky ReLU activation function is depicted in Fig. 10. According to the function, if the input is a positive value, the output is equal to the input.
- if the input (y) is a negative value, the output is equal to a*y.
- the value a is typically (but not limited to) a value that is smaller than 1 and greater than 0. Since the multiplier a is smaller than 1, it can be implemented either as a multiplication with a non-integer number, or with a division operation. The multiplier a might be called the negative slope of the leaky ReLU function. 2.12 ReLU activation function: The ReLU activation function is depicted in Fig. 11. According to the function, if the input is a positive value, the output is equal to the input. If the input (y) is a negative value, the output is equal to 0.
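- A direct, unoptimized implementation of the convolution formula and the two activation functions described above might look as follows; the negative slope default of 0.01 is only an illustrative value.

```python
import numpy as np


def conv2d(inputs, w, bias):
    """Direct implementation of the multi-channel convolution formula above:
    out[c, x, y] = K[c] + sum_k sum_i sum_j w[c, k, i, j] * in[k, x+i, y+j]
    (no padding, no stride)."""
    c_out, c_in, n, p = w.shape
    _, height, width = inputs.shape
    out = np.zeros((c_out, height - n + 1, width - p + 1))
    for c in range(c_out):
        for x in range(out.shape[1]):
            for y in range(out.shape[2]):
                patch = inputs[:, x:x + n, y:y + p]     # kernel-sized input window
                out[c, x, y] = bias[c] + np.sum(w[c] * patch)
    return out


def leaky_relu(y, a=0.01):
    """Leaky ReLU: identity for positive inputs, a*y for negative inputs
    (a is the negative slope, 0 < a < 1)."""
    return np.where(y > 0, y, a * y)


def relu(y):
    """ReLU: identity for positive inputs, 0 for negative inputs."""
    return np.maximum(y, 0)
```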
- Model header: independent_beta_uv is a flag (false/true); independent_beta_uv equal to true indicates that the beta displacement parameters for the primary and secondary components are different. If independent_beta_uv is equal to false, then the beta displacement parameter for the primary and secondary components is the same.
- beta_displacement_log_plus_2048[comp] minus 2048 is a parameter indicating a displacement between the rate control parameter beta selected by the encoder for component comp and the reference rate control parameter beta associated with the index of the used model (model_id). The displacement is in logarithmic scale.
- betaDisplacementLog[comp] = beta_displacement_log_plus_2048[comp] − 2^11
- the reference rate control parameter beta (β) is the parameter used in the model training to control the ratio between the bitrate and distortion.
- the model here means the model associated with the index of the used model (model_id).
- betaDisplacementLog[1] = betaDisplacementLog[0].
- model_id is the syntax element coded in the Picture header.
- the learnable model includes the reference forward gain vector for each component comp in logarithmic scale, m_log[model_id][comp][C], comprising 12-bit signed values.
- Each model_id is associated with the reference rate control parameter β, which was used in the model training to control the ratio between the bitrate and distortion.
- the forward gain tensor m is used at the encoder side in the Gain Unit.
- the forward gain tensor in logarithmic scale m_log is used in the Sigma scale.
- the inverse gain tensor m′ is used at the decoder side in the Inverse Gain Unit. All three gain tensors, forward m, inverse m′ and logarithmic-domain m_log, have size [C, h4, w4] equal to the size of the residual tensor for the corresponding component. The component index comp is equal to 0 for the primary color component and 1 for the secondary component.
- the input of the gain tensor derivation process is: a 12-bit signed variable betaDisplacementLog[comp]; a tensor with 3-bit elements of size [C, h4, w4]; and the index model_id which indicates the pretrained model to be used.
- the output of the gain tensor derivation process is: the forward gain tensor in logarithmic scale m_log[C, h4, w4]; the forward gain tensor m[C, h4, w4]; and the inverse gain tensor m′[C, h4, w4]. Sizes of all tensors [C, h4, w4] for the primary and secondary components are defined in Table 1.
- the inputs of this process are: the inverse gain tensor m′; and the reconstructed residual tensor of size [C, h4, w4], which is the output of the decoder-side SKIP process.
- the output of this process is: the residual tensor scaled by the inverse gain tensor, of size [C, h4, w4].
- On the decoder side an inverse gain unit is placed after the entropy decoder and takes the reconstructed residual as an input.
- the inputs of this process are: the forward gain tensor in logarithmic scale m_log; and the standard deviation logarithm tensor Iσ of size [C, h4, w4], which is an output of the Hyper Scale Decoder.
- the output of this process is: the standard deviation logarithm tensor scaled by the forward gain tensor, I′σ, of size [C, h4, w4], which goes to the adaptive sigma scale.
- the Sigma scale modifies the standard deviation logarithm tensor Iσ[C, h4, w4] as follows: 2.14.6 Sigma quantization
- the input of this process is: I″σ[C, h4, w4], which is an output of the Adaptive Sigma Scale.
- the output of this process is: sigma_Idx[C, h4, w4], which is further used in the Entropy Decoder.
- Thread size signalling In an existing design, the thread size (substream size) signalling is as follows.
- ThreadMeanSizeQ = floor(CodeStreamSizeQ / NumberOfThreadsQ). se(v): signed integer 0-th order Exp-Golomb-coded syntax element with the left bit first.
- the parsing process for this descriptor is specified in Annex D with the order k equal to 0. 3.
- a codestream (a data unit) is included in the bitstream according to the following template (ordered set of elements): • Codestream marker (e.g. marker_id); • Codestream size indication (e.g.
- the codestream size indication indicates the total size of the following data, including the data unit size indication and data units.
- the data unit size indication comprises offsets (size information) pertaining to the chunks of the data units.
- the data units might comprise multiple chunks for parallel processing (e.g. multi-threading) and data unit size indication might comprise one or more size (or offset) indications that can help finding the starting point of each data chunk.
- the value of the data unit size indication is a function of codestream size indication.
- ThreadMeanSizeQ = floor(CodeStreamSizeQ / NumberOfThreadsQ); • thread_size_delta_q[i] is the signed difference between ThreadMeanSizeQ and the size in bytes of the quality map tensor sub-stream i.
- the data unit size indication is a difference between the actual data unit size, and a mean value (ThreadMeanSizeQ), wherein the mean value is a function of the codestream size indication.
- the data unit size indication is included in the bitstream using variable length coding (indicated by se(v) in the syntax table).
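- A sketch of how a decoder might consume the existing thread size signalling described above is shown below; the way each thread size is reconstructed from ThreadMeanSizeQ and the signed deltas is an assumption based on the syntax shown, and the coupling noted in the closing comment is the issue the solutions below address.

```python
import math


def decode_thread_sizes(code_stream_size_q, thread_size_delta_q):
    """Reconstruct the per-thread (substream) sizes from the codestream size
    indication and the signed se(v)-coded deltas of the existing design."""
    number_of_threads_q = len(thread_size_delta_q)
    thread_mean_size_q = math.floor(code_stream_size_q / number_of_threads_q)
    # Each thread size is the mean size plus its signed delta (assumption).
    return [thread_mean_size_q + delta for delta in thread_size_delta_q]


# Because ThreadMeanSizeQ is derived from CodeStreamSizeQ, changing a delta can
# change the coded length of the codestream, which changes CodeStreamSizeQ and
# therefore ThreadMeanSizeQ again: this is the coupling that can oscillate.
```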
- Data unit o A data unit might be a substream. o A data unit might correspond to a thread, e.g. a processing thread.
- Codestream o A codestream might be a residual substream, a quality map substream, a picture header, a tool header, a header, or alike.
- a codestream might start with a marker (a marker ID), and a codestream size indi- cation.
- a codestream might comprise one or more data units.
- Data unit size o The data unit size might be an offset. o The data unit size might be signalled using variable length coding, e.g. Exp-Golomb code. o The data unit size might be a thread offset, a thread size, or a delta thread size.
- o Data unit size might be signalled using fixed length coding.
- o Data unit size might pertain to a substream, or a tile stream, or a bitstream partition, or a region stream.
- Different codestreams o
- the data unit size indication might be signalled in a header (e.g. a picture header or a tool header or alike).
- o Data unit size indication might correspond to a data unit that is in a different codestream (e.g. the size indication might be inside a header codestream, whereas the data unit might be inside a second codestream).
- o The data unit might be inside a residual codestream, or a quality map codestream, or a hyper information codestream.
- the value of the data unit size might depend on first codestream size information.
- the data unit size might be included in a second codestream. Therefore, the codestream size information does not depend on the value of the data unit size information.
- Solution 2 The indication about the data unit size does not depend on codestream size information.
- the data unit size information might depend on an additive value, wherein the additive value is included in the bitstream, and is different from the codestream size information.
- the additive value information might be included in a header.
- the additive value information might be included in a different codestream than where the data unit size information might be included.
- the additive value might be added to the data unit size information to obtain the size of a data unit.
- the additive value might be a mean value.
- Solution 3 The indication about the data unit size is coded with fixed length coding.
- Decoder/encoder solution: Performing a conversion using a neural network between a bitstream and a reconstructed image or video based on an indicator in the bitstream, wherein an indication about the data unit size and an indication about the codestream size are included in (or obtained from) different codestreams of the bitstream.
- Decoder/encoder solution: Performing a conversion using a neural network between a bitstream and a reconstructed image or video based on an indicator in the bitstream, wherein an indication about the data unit size which does not depend on codestream size information is included in (or obtained from) the bitstream. 3. Decoder/encoder solution: Performing a conversion using a neural network between a bitstream and a reconstructed image or video based on an indicator in the bitstream, wherein an indication about the data unit size that is coded with fixed length coding is included in (or obtained from) the bitstream. 6. Example implementation of the proposed solutions: Below are some example implementations for the solutions summarized above in Sections 4 and 5.
- the data unit size information (thread_size_delta_z, thread_size_delta and thread_size_delta_q) are removed from the z_stream, q_stream and residual_stream codestreams. Instead they are included in the picture header.
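- An illustrative sketch of the relocated signalling is given below; the reader helper, the fixed 16-bit field width and the use of a mean size are hypothetical choices for illustration, not the normative syntax.

```python
def parse_thread_sizes_from_picture_header(reader, num_threads, thread_mean_size):
    """Parse data unit (thread) size information from the picture header, so that
    the z_stream, q_stream and residual_stream codestream sizes no longer depend
    on it. reader.read_fixed(16) is a hypothetical helper reading a fixed-length
    field (in the spirit of Solution 3)."""
    thread_sizes = []
    for _ in range(num_threads):
        delta = reader.read_fixed(16)              # fixed-length coded delta (assumption)
        thread_sizes.append(thread_mean_size + delta)
    return thread_sizes
```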
- the term “visual data” may refer to an image, a video, a picture in a video, or any other visual data suitable to be coded.
- Fig. 12 illustrates a flowchart of a method 1200 for visual data processing in accordance with some embodiments of the present disclosure.
- a conversion between the visual data and a codestream of the visual data is performed with a neural network (NN)-based model.
- NN neural network
- a codestream may comprise a sequence of bits.
- the codestream may further comprise associated codes which are used as markers.
- the codestream may also be referred to as a bitstream.
- the conversion may include encoding the visual data into the codestream. Additionally or alternatively, the conversion may include decoding the visual data from the codestream.
- the decoding model shown in Fig. 6 may be employed for decoding the visual data from the bitstream.
- an NN-based model may be a model based on neural network technologies.
- an NN-based model may specify sequence of neural network modules (also called architecture) and model parameters.
- the neural network module may comprise a set of neural network layers. Each neural network layer specifies a tensor operation which receives and outputs tensor, and each layer has trainable parameters.
- the codestream comprises a first indication indicating a size of a data unit in a first portion of the codestream.
- a data unit may be a substream of the codestream.
- the data unit may correspond to a thread, e.g., a processing thread.
- a thread may comprise a codestream segment.
- a portion of the codestream may be a substream of the codestream, such as, a residual substream, a quality map substream, a hyper tensor substream, and/or the like.
- a portion of the codestream may be a header, such as a picture header, or a tool header.
- a portion of the codestream may comprise one or more data units. It should be understood that the possible implementations of the data unit and the portion of the codestream described here are merely illustrative and therefore should not be construed as limiting the present disclosure in any way.
- a sample in a residual latent representation of the visual data may be referred to as a residual sample.
- an analysis transform may be employed to perform a transform from the visual data to a latent representation of the visual data, and the residual latent representation may be determined as a difference between the latent representation and a prediction of the latent representation.
- a hyper tensor carries information regarding a latent domain prediction for a latent representation of the visual data and/or an entropy parameter for residual of the latent representation.
- an analysis transform may generate latent representation of the visual data. This latent representation is further compressed, e.g. by using a hyper encoder, to obtain a hyper tensor z, which carries information about latent domain prediction and entropy parameters for residual.
- the hyper tensor z may be further quantized to obtain a quantized hyper tensor z-hat (i.e., ẑ).
- the quantized hyper tensor may also be referred to as a hyper tensor for short. Therefore, the above-mentioned hyper tensor substream (a.k.a., Z-stream) may comprise encoded data of the unquantized hyper tensor z, and/or encoded data of the quantized hyper tensor ẑ.
- the scope of the present disclosure is not limited in this respect.
- the first portion comprises a second indication indicating a size of the first portion of the codestream
- the first indication is comprised in a second portion of the codestream different from the first portion. That is, the first indication and the second indication are comprised in different portions of the codestream.
- the size of the first portion of the codestream is not dependent on a value of the first indication.
- the proposed method can advantageously decouple the size of the first portion of the codestream from a value of the first indication. Thereby, a potential oscillation caused by a coupling relation between the size of the first portion of the codestream and the value of the first indication can be avoided, and thus the coding efficiency can be improved.
- the first indication may correspond to the above-mentioned indication about the data unit size.
- the second indication may correspond to the above-mentioned indication about the codestream size.
- the second indication may be represented as substream_size.
- the second indication may indicate the size of the substream in bytes. It should be noted that the second indication may also be represented as any other suitable string, and thus the scope of the present disclosure is not limited in this respect.
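- as an illustration of how a portion could begin with such an indication, a small encoder-side sketch follows; the 4-byte ASCII-style marker, the 4-byte big-endian size field, and the helper name are assumptions made only for this example.

```python
# Minimal encoder-side sketch (illustrative, not normative): a portion of the
# codestream that starts with a marker followed by a substream_size field
# giving the number of payload bytes.
import struct

def write_substream(marker: bytes, payload: bytes) -> bytes:
    header = marker + struct.pack(">I", len(payload))  # substream_size in bytes
    return header + payload

encoded_hyper_tensor = b"\x00" * 128                 # placeholder payload
z_stream = write_substream(b"SOZ\x00", encoded_hyper_tensor)  # hypothetical SOZ-like marker
```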
- the first indication is independent from the size of the first portion of the codestream.
- the size of the data unit may be dependent on a first value independent from the size of the first portion of the codestream.
- the first value may be indicated in the codestream.
- the first value may be comprised in a header, such as a picture header, a tool header, or the like.
- the first value and the first indication may be comprised in different portions of the codestream, or the first value and the first indication may be comprised in the same portion of the codestream.
- the size of the data unit may be determined based on a sum of the first value and a value of the first indication. In one example embodiment, the sum of the first value and the value of the first indication is determined to be the size of the data unit.
- the first value may be a mean value of sizes of data units.
- the sum of the first value, the value of the first indication, and a further offset is determined to be the size of the data unit. It should be understood that the above illustrations are described merely for purpose of description. The scope of the present disclosure is not limited in this respect.
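- as an illustration of the derivation above, the following sketch computes the data unit size from a mean size and a signed delta; the names mean_size, size_delta and extra_offset are illustrative, not normative syntax elements.

```python
# Minimal sketch of the size derivation described above, assuming the first
# value is a mean data-unit size carried in a header and the first indication
# is a signed delta relative to that mean.
def derive_data_unit_size(mean_size: int, size_delta: int, extra_offset: int = 0) -> int:
    # size of the data unit = first value + value of the first indication (+ optional further offset)
    return mean_size + size_delta + extra_offset

# Example: a header signals a mean thread size of 812 bytes, the per-thread
# delta is -7, so the thread occupies 805 bytes.
assert derive_data_unit_size(812, -7) == 805
```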
- the proposed method can advantageously decouple the size of the first portion of the codestream from a value of the first indication. Thereby, a potential oscillation caused by a coupling relation between the size of the first portion of the codestream and the value of the first indication can be avoided, and thus the coding efficiency can be improved.
- the first indication is coded with a fixed length coding.
- the number of bits used to signal the first indication may be predetermined to be a fixed number, such as 2, 8, 16, or the like. In this case, it is possible to know in advance the number of bits required to be included in the codestream. As such, the proposed method can advantageously decouple the size of the first portion of the codestream from a value of the first indication. Thereby, a potential oscillation caused by a coupling relation between the size of the first portion of the codestream and the value of the first indication can be avoided, and thus the coding efficiency can be improved.
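- a minimal sketch of such fixed-length signalling is given below; the 16-bit field width and the two's-complement handling of negative values are assumptions made for illustration.

```python
# Fixed-length signalling sketch: the field always occupies a predetermined
# number of bits, so the portion carrying it never changes size with the value.
FIXED_BITS = 16

def encode_fixed_length(value: int, num_bits: int = FIXED_BITS) -> str:
    # Two's-complement mapping for negative deltas (an assumption for this example).
    return format(value & ((1 << num_bits) - 1), f"0{num_bits}b")

def decode_fixed_length(bits: str) -> int:
    raw = int(bits, 2)
    return raw - (1 << len(bits)) if raw >= 1 << (len(bits) - 1) else raw

assert decode_fixed_length(encode_fixed_length(-7)) == -7
assert len(encode_fixed_length(805)) == FIXED_BITS  # bit count known in advance
```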
- the first portion of the codestream may start with a marker and the second indication indicating a size of the first portion of the codestream.
- the second portion of the codestream may start with a marker and an indication indicating a size of the second portion.
- the marker for a portion of the codestream may be a marker identifier of the portion of the codestream.
- a marker of a residual substream for a primary component of the visual data may be SORp
- a marker of a residual substream for a secondary component of the visual data may be SORs
- a marker of a quality map substream of the visual data may be SOQ
- a marker of a hyper tensor substream of the visual data may be SOZ.
- a value of the first indication may be an offset.
- the value of the first indication may be a difference between the number of bytes of the data unit and a mean size of data units.
- a value of the first indication may be a thread offset.
- a value of the first indication may be a thread size.
- a value of the first indication may be a delta thread size.
- the first indication may be represented by thread_size_delta_z, thread_size_delta_r, thread_size_delta_q, or the like.
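- putting the markers and size indications together, a decoder-side parsing sketch could look as follows; only the marker names (SORp, SORs, SOQ, SOZ) and the field names thread_size_delta_* come from the description above, while the byte-level layout, marker encodings and field widths are assumptions.

```python
# Minimal parsing sketch (illustrative, not normative).
import io
import struct

MARKERS = {b"SORp", b"SORs", b"SOQ\x00", b"SOZ\x00"}  # hypothetical 4-byte markers

def parse_substream_header(reader):
    marker = reader.read(4)
    if marker not in MARKERS:
        raise ValueError(f"unknown marker {marker!r}")
    substream_size = struct.unpack(">I", reader.read(4))[0]  # size of this portion, in bytes
    return marker, substream_size

def parse_thread_size_deltas(reader, num_threads):
    # e.g. thread_size_delta_z / _r / _q signalled once per processing thread,
    # here assumed to be fixed-length 16-bit signed values.
    return list(struct.unpack(f">{num_threads}h", reader.read(2 * num_threads)))

# Usage example with a toy buffer.
data = io.BytesIO(b"SORp" + struct.pack(">I", 805) + b"\x00" * 805)
marker, size = parse_substream_header(data)
```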
- the size of the data unit may be signaled using a variable length coding. Alternatively, the size of the data unit may be signaled using a fixed length coding. In some embodiments, the size of the data unit may be associated with one of the following: a substream, a tile stream, a codestream partition, or a region stream.
- a tile stream may be a substream comprising coded data for a tile
- a region stream may be a substream comprising coded data for a region.
- the second portion of the codestream may be a header, such as a picture header, a tool header, or the like.
- the first portion of the codestream may comprise a residual substream, a quality map substream, a hyper information substream, and/or the like.
- a value of the first indication may be dependent on the size of the first portion of the codestream. In this case, the first indication may be comprised in a further portion of the codestream different from the first portion.
- a non-transitory computer-readable recording medium stores a codestream of visual data which is generated by a method performed by an apparatus for visual data processing.
- the method comprises: performing a conversion from the visual data to the codestream with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding.
- the method comprises: performing a conversion from the visual data to the codestream with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream; and storing the codestream in a non-transitory computer-readable recording medium, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding.
- Clause 1. A method for visual data processing, comprising: performing a conversion between visual data and a codestream of the visual data with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding.
- Clause 2. The method of clause 1, wherein the data unit comprises a substream of the codestream, or the data unit corresponds to a thread.
- Clause 3. The method of any of clauses 1-2, wherein at least one of the first portion or the second portion of the codestream comprises one of the following: a residual substream, a quality map substream, a picture header, or a tool header.
- Clause 4. The method of any of clauses 1-3, wherein the first portion of the codestream starts with a marker and the second indication, or the second portion of the codestream starts with a marker and an indication indicating a size of the second portion.
- Clause 5.
- Clause 6. The method of any of clauses 1-5, wherein a value of the first indication is an offset, a thread offset, a thread size, or a delta thread size.
- Clause 7. The method of any of clauses 1-6, wherein the size of the data unit is signaled using a variable length coding or a fixed length coding.
- Clause 8. The method of any of clauses 1-7, wherein the size of the data unit is associated with one of the following: a substream, a tile stream, a codestream partition, or a region stream.
- Clause 13. The method of clause 12, wherein the first value is comprised in a header.
- Clause 14. The method of any of clauses 12-13, wherein the first value and the first indication are comprised in different portions of the codestream.
- Clause 15. The method of any of clauses 12-14, wherein the size of the data unit is determined based on a sum of the first value and a value of the first indication.
- Clause 16. The method of any of clauses 12-15, wherein the first value is a mean value of sizes of data units.
- Clause 21. An apparatus for visual data processing comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to perform a method in accordance with any of clauses 1-20.
- Clause 22. A non-transitory computer-readable storage medium storing instructions that cause a processor to perform a method in accordance with any of clauses 1-20.
- Clause 23. A non-transitory computer-readable recording medium storing a codestream of visual data which is generated by a method performed by an apparatus for visual data processing, wherein the method comprises: performing a conversion from the visual data to the codestream with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding.
- Clause 24. A method for storing a codestream of visual data, comprising: performing a conversion from the visual data to the codestream with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream; and storing the codestream in a non-transitory computer-readable recording medium, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding.
- FIG. 13 illustrates a block diagram of a computing device 1300 in which various embodiments of the present disclosure can be implemented.
- the computing device 1300 may be implemented as or included in the source device 110 (or the visual data encoder 114) or the destination device 120 (or the visual data decoder 124).
- the computing device 1300 shown in Fig. 13 is merely for purpose of illustration, without suggesting any limitation to the functions and scopes of the embodiments of the present disclosure in any manner.
- the computing device 1300 may be in the form of a general-purpose computing device.
- the computing device 1300 may at least comprise one or more processors or processing units 1310, a memory 1320, a storage unit 1330, one or more communication units 1340, one or more input devices 1350, and one or more output devices 1360.
- In some embodiments, the computing device 1300 may be implemented as any user terminal or server terminal having the computing capability.
- the server terminal may be a server, a large-scale computing device or the like that is provided by a service provider.
- the user terminal may for example be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile phone, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system (PCS) device, personal navigation device, personal digital assistant (PDA), audio/video player, digital camera/video camera, positioning device, television receiver, radio broadcast receiver, E-book device, gaming device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof.
- the computing device 1300 can support any type of interface to a user (such as “wearable” circuitry and the like).
- the processing unit 1310 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 1320. In a multi-processor system, multiple processing units execute computer executable instructions in parallel so as to improve the parallel processing capability of the computing device 1300.
- the processing unit 1310 may also be referred to as a central processing unit (CPU), a microprocessor, a controller or a microcontroller.
- the computing device 1300 typically includes various computer storage media. Such media can be any media accessible by the computing device 1300, including, but not limited to, volatile and non-volatile media, or detachable and non-detachable media.
- the memory 1320 can be a volatile memory (for example, a register, cache, Random Access Memory (RAM)), a non-volatile memory (such as a Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), or a flash memory), or any combination thereof.
- the storage unit 1330 may be any detachable or non-detachable medium and may include a machine-readable medium such as a memory, a flash memory drive, a magnetic disk or any other media, which can be used for storing information and/or visual data and can be accessed in the computing device 1300.
- the computing device 1300 may further include additional detachable/non-detachable, volatile/non-volatile memory media.
- each drive may be connected to a bus (not shown) via one or more visual data medium interfaces.
- the communication unit 1340 communicates with a further computing device via the communication medium.
- the functions of the components in the computing device 1300 can be implemented by a single computing cluster or multiple computing machines that can communicate via communication connections. Therefore, the computing device 1300 can operate in a networked environment using a logical connection with one or more other servers, networked personal computers (PCs) or further general network nodes.
- the input device 1350 may be one or more of a variety of input devices, such as a mouse, keyboard, tracking ball, voice-input device, and the like.
- the output device 1360 may be one or more of a variety of output devices, such as a display, loudspeaker, printer, and the like.
- the computing device 1300 can further communicate with one or more external devices (not shown) such as the storage devices and display device, with one or more devices enabling the user to interact with the computing device 1300, or any devices (such as a network card, a modem and the like) enabling the computing device 1300 to communicate with one or more other computing devices, if required.
- instead of being integrated in a single device, some or all components of the computing device 1300 may also be arranged in a cloud computing architecture.
- the components may be provided remotely and work together to implement the functionalities described in the present disclosure.
- cloud computing provides computing, software, visual data access and storage service, which will not require end users to be aware of the physical locations or configurations of the systems or hardware providing these services.
- the cloud computing provides the services via a wide area network (such as the Internet) using suitable protocols.
- a cloud computing provider provides applications over the wide area network, which can be accessed through a web browser or any other computing components.
- the software or components of the cloud computing architecture and corresponding visual data may be stored on a server at a remote position.
- the computing resources in the cloud computing environment may be merged or distributed at locations in a remote visual data center.
- Cloud computing infrastructures may provide the services through a shared visual data center, though they behave as a single access point for the users. Therefore, the cloud computing architectures may be used to provide the components and functionalities described herein from a service provider at a remote location. Alternatively, they may be provided from a conventional server or installed directly or otherwise on a client device.
- the computing device 1300 may be used to implement visual data encoding/decoding in embodiments of the present disclosure.
- the memory 1320 may include one or more visual data coding modules 1325 having one or more program instructions. These modules are accessible and executable by the processing unit 1310 to perform the functionalities of the various embodiments described herein.
- the input device 1350 may receive visual data as an input 1370 to be encoded.
- the visual data may be processed, for example, by the visual data coding module 1325, to generate an encoded bitstream.
- the encoded bitstream may be provided via the output device 1360 as an output 1380.
- the input device 1350 may receive an encoded bitstream as the input 1370.
- the encoded bitstream may be processed, for example, by the visual data coding module 1325, to generate decoded visual data.
- the decoded visual data may be provided via the output device 1360 as the output 1380.
Abstract
Embodiments of the present disclosure provide a solution for visual data processing. A method for visual data processing is proposed. The method comprises: performing a conversion between visual data and a codestream of the visual data with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding.
Description
METHOD, APPARATUS, AND MEDIUM FOR VISUAL DATA PROCESSING FIELDS [0001] Embodiments of the present disclosure relates generally to visual data processing techniques, and more particularly, to neural network-based visual data coding. BACKGROUND [0002] The past decade has witnessed the rapid development of deep learning in a variety of areas, especially in computer vision and image processing. Neural network was invented originally with the interdisciplinary research of neuroscience and mathematics. It has shown strong capabilities in the context of non-linear transform and classification. Neural network- based image/video compression technology has gained significant progress during the past half decade. It is reported that the latest neural network-based image compression algorithm achieves comparable rate-distortion (R-D) performance with Versatile Video Coding (VVC). With the performance of neural image compression continually being improved, neural network-based video compression has become an actively developing research area. However, coding efficiency of neural network-based image/video coding is generally expected to be further improved. SUMMARY [0003] Embodiments of the present disclosure provide a solution for visual data processing. [0004] In a first aspect, a method for visual data processing is proposed. The method comprises: performing a conversion between visual data and a codestream of the visual data with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding. [0005] Based on the method in accordance with the first aspect of the present disclosure, the first indication and the second indication are comprised in different portions of the codestream, or the first indication is independent from the size of the first portion of the codestream, or the first indication is coded with a fixed length coding. Compared with the conversion solution, the proposed method can advantageously decouple the size of the first portion of the codestream from a value of the first indication. Thereby, a potential oscillation caused by a coupling relation between the size of the first portion of the codestream and the 1 F1251247PCT
value of the first indication can be avoided, and thus the coding efficiency can be improved. [0006] In a second aspect, an apparatus for visual data processing is proposed. The apparatus comprises a processor and a non-transitory memory with instructions thereon. The instructions upon execution by the processor, cause the processor to perform a method in accordance with the first aspect of the present disclosure. [0007] In a third aspect, a non-transitory computer-readable storage medium is proposed. The non-transitory computer-readable storage medium stores instructions that cause a processor to perform a method in accordance with the first aspect of the present disclosure. [0008] In a fourth aspect, another non-transitory computer-readable recording medium is proposed. The non-transitory computer-readable recording medium stores a codestream of visual data which is generated by a method performed by an apparatus for visual data processing. The method comprises: performing a conversion from the visual data to the codestream with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding. [0009] In a fifth aspect, a method for storing a codestream of visual data is proposed. The method comprises: performing a conversion from the visual data to the codestream with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream; and storing the codestream in a non- transitory computer-readable recording medium, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding. [0010] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. BRIEF DESCRIPTION OF THE DRAWINGS [0011] Through the following detailed description with reference to the accompanying 2 F1251247PCT
drawings, the above and other objectives, features, and advantages of example embodiments of the present disclosure will become more apparent. In the example embodiments of the present disclosure, the same reference numerals usually refer to the same components. [0012] Fig. 1A illustrates a block diagram that illustrates an example visual data coding system in accordance with some embodiments of the present disclosure; [0013] Fig.1B is a schematic diagram illustrating an example transform coding scheme; [0014] Fig. 2 illustrates example latent representations of an image; [0015] Fig. 3 is a schematic diagram illustrating an example autoencoder implementing a hyperprior model; [0016] Fig. 4 is a schematic diagram illustrating an example combined model configured to jointly optimize a context model along with a hyperprior and the autoencoder; [0017] Fig.5 illustrates an example encoding process; [0018] Fig.6 illustrates an example decoding process; [0019] Fig. 7 illustrates an example decoding process according to some embodiments of the present disclosure; [0020] Fig.8 illustrates an example learning-based image codec architecture; [0021] Fig.9 illustrates an example synthesis transform for learning based image coding; [0022] Fig. 10 illustrates an example leaky Rectified Linear Unit (ReLU) activation function; [0023] Fig.11 illustrates an example ReLU activation function; [0024] Fig.12 illustrates a flowchart of a method for visual data processing in accordance with some embodiments of the present disclosure; and [0025] Fig. 13 illustrates a block diagram of a computing device in which various embodiments of the present disclosure can be implemented. [0026] Throughout the drawings, the same or similar reference numerals usually refer to the same or similar elements. DETAILED DESCRIPTION [0027] Principle of the present disclosure will now be described with reference to some embodiments. It is to be understood that these embodiments are described only for the purpose of illustration and help those skilled in the art to understand and implement the present disclosure, without suggesting any limitation as to the scope of the disclosure. The disclosure described herein can be implemented in various manners other than the ones described below. [0028] In the following description and claims, unless defined otherwise, all technical and 3 F1251247PCT
scientific terms used herein have the same meaning as commonly understood by one of ordinary skills in the art to which this disclosure belongs. [0029] References in the present disclosure to “one embodiment,” “an embodiment,” “an example embodiment,” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an example embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. [0030] It shall be understood that although the terms “first” and “second” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the listed terms. [0031] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “has”, “having”, “includes” and/or “including”, when used herein, specify the presence of stated features, elements, and/or components etc., but do not preclude the presence or addition of one or more other features, elements, components and/ or combinations thereof. Example Environment [0032] Fig. 1A is a block diagram that illustrates an example visual data coding system 100 that may utilize the techniques of this disclosure. As shown, the visual data coding system 100 may include a source device 110 and a destination device 120. The source device 110 can be also referred to as a visual data encoding device, and the destination device 120 can be also referred to as a visual data decoding device. In operation, the source device 110 can be configured to generate encoded visual data and the destination device 120 can be configured to decode the encoded visual data generated by the source device 110. The source device 110 may include a visual data source 112, a visual data encoder 114, and an input/output (I/O) interface 116. 4 F1251247PCT
[0033] The visual data source 112 may include a source such as a visual data capture device. Examples of the visual data capture device include, but are not limited to, an interface to receive visual data from a visual data provider, a computer graphics system for generating visual data, and/or a combination thereof. [0034] The visual data may comprise one or more pictures of a video or one or more images. The visual data encoder 114 encodes the visual data from the visual data source 112 to generate a bitstream. The bitstream may include a sequence of bits that form a coded representation of the visual data. The bitstream may include coded pictures and associated visual data. The coded picture is a coded representation of a picture. The associated visual data may include sequence parameter sets, picture parameter sets, and other syntax structures. The I/O interface 116 may include a modulator/demodulator and/or a transmitter. The encoded visual data may be transmitted directly to destination device 120 via the I/O interface 116 through the network 130A. The encoded visual data may also be stored onto a storage medium/server 130B for access by destination device 120. [0035] The destination device 120 may include an I/O interface 126, a visual data decoder 124, and a display device 122. The I/O interface 126 may include a receiver and/or a modem. The I/O interface 126 may acquire encoded visual data from the source device 110 or the storage medium/server 130B. The visual data decoder 124 may decode the encoded visual data. The display device 122 may display the decoded visual data to a user. The display device 122 may be integrated with the destination device 120, or may be external to the destination device 120 which is configured to interface with an external display device. [0036] The visual data encoder 114 and the visual data decoder 124 may operate according to a visual data coding standard, such as video coding standard or still picture coding standard and other current and/or further standards. [0037] Some example embodiments of the present disclosure will be described in detailed hereinafter. It should be understood that section headings are used in the present document to facilitate ease of understanding and do not limit the embodiments disclosed in a section to only that section. Furthermore, while certain embodiments are described with reference to Versatile Video Coding or other specific visual data codecs, the disclosed techniques are applicable to other coding technologies also. Furthermore, while some embodiments describe coding steps in detail, it will be understood that corresponding steps decoding that undo the coding will be implemented by a decoder. Furthermore, the term visual data processing encompasses visual data coding or compression, visual data decoding or decompression and visual data transcoding in which visual data are represented from one 5 F1251247PCT
compressed format into another compressed format or at a different compressed bitrate. 1. Brief Summary The present disclosure is related to neural network (NN)-based image and video coding. Specifically, it is related to the method of signaling tool header of skip mode in support of regional accessibility, wherein regional accessibility refers to the capability of correctly decoding only a regional part of an image (also referred to as a picture) or a video. In addition, this disclosure is related to a neural network-based image and video compression method comprising modification of components of an image using convolution layers. The weights of the convolution layer is included in the bitstream. The ideas may be applied individually or in various combinations, for image and/or video coding methods and specifications. 2. Introduction The past decade has witnessed the rapid development of deep learning in a variety of areas, especially in computer vision and image processing. Inspired from the great success of deep learning technology to computer vision areas, many researchers have shifted their attention from conventional image/video compression techniques to neural image/video compression technologies. Neural network was invented originally with the interdisciplinary research of neuroscience and mathematics. It has shown strong capabilities in the context of non-linear transform and classification. Neural network-based image/video compression technology has gained significant progress during the past half decade. It is reported that the latest neural network- based image compression algorithm achieves comparable R-D performance with Versatile Video Coding (VVC), the latest video coding standard developed by Joint Video Experts Team (JVET) with experts from MPEG and VCEG. With the performance of neural image compression continually being improved, neural network-based video compression has become an actively developing research area. However, neural network-based video coding still remains in its infancy due to the inherent difficulty of the problem. 2.1 Image/video compression Image/video compression (also referred to as image/video coding) usually refers to the computing technology that compresses image/video into binary code to facilitate storage and transmission. The binary codes may or may not support losslessly reconstructing the original image/video, termed lossless compression and lossy compression. Most of the efforts are devoted to lossy compression since lossless reconstruction is not necessary in most scenarios. Usually the performance of image/video compression algorithms is evaluated from two aspects, i.e. compression ratio and reconstruction quality. Compression ratio is directly related to the number of binary codes, the less the better; Reconstruction quality is measured by comparing the 6 F1251247PCT
reconstructed image/video with the original image/video, the higher the better. Image/video compression techniques can be divided into two branches, the classical video coding methods and the neural-network-based video compression methods. Classical video coding schemes adopt transform-based solutions, in which researchers have exploited statistical dependency in the latent variables (e.g., DCT or wavelet coefficients) by carefully hand- engineering entropy codes modeling the dependencies in the quantized regime. Neural network- based video compression is in two flavors, neural network-based coding tools and end-to-end neural network-based video compression. The former is embedded into existing classical video codecs as coding tools and only serves as part of the framework, while the latter is a separate framework developed based on neural networks without depending on classical video codecs. In the last three decades, a series of classical video coding standards have been developed to accommodate the increasing visual content. The international standardization organizations ISO/IEC has two expert groups namely Joint Photographic Experts Group (JPEG) and Moving Picture Experts Group (MPEG), and ITU-T also has its own Video Coding Experts Group (VCEG) which is for standardization of image/video coding technology. The influential video coding standards published by these organizations include JPEG, JPEG 2000, H.262, H.264/AVC and H.265/HEVC. After H.265/HEVC, the Joint Video Experts Team (JVET) formed by MPEG and VCEG has been working on a new video coding standard Versatile Video Coding (VVC). The first version of VVC was released in July 2020. An average of 50% bitrate reduction is reported by VVC under the same visual quality compared with HEVC. Neural network-based image/video compression is not a new invention since there were a number of researchers working on neural network-based image coding. But the network architectures were relatively shallow, and the performance was not satisfactory. Benefit from the abundance of data and the support of powerful computing resources, neural network-based methods are better exploited in a variety of applications. At present, neural network-based image/video compression has shown promising improvements, confirmed its feasibility. Nevertheless, this technology is still far from mature and a lot of challenges need to be addressed. 2.2 Neural networks Neural networks, also known as artificial neural networks (ANN), are the computational models used in machine learning technology which are usually composed of multiple processing layers and each layer is composed of multiple simple but non-linear basic computational units. One benefit of such deep networks is believed to be the capacity for processing data with multiple levels of abstraction and converting data into different kinds of representations. Note that these representations are not manually designed; instead, the deep network including the processing 7 F1251247PCT
layers is learned from massive data using a general machine learning procedure. Deep learning eliminates the necessity of handcrafted representations, and thus is regarded useful especially for processing natively unstructured data, such as acoustic and visual signal, whilst processing such data has been a longstanding difficulty in the artificial intelligence field.
2.3 Neural networks for image compression
Existing neural networks for image compression methods can be classified in two categories, i.e., pixel probability modeling and auto-encoder. The former one belongs to the predictive coding strategy, while the latter one is the transform-based solution. Sometimes, these two methods are combined together in literature.
2.3.1 Pixel probability modeling
According to Shannon's information theory, the optimal method for lossless coding can reach the minimal coding rate $-\log_2 p(x)$, where $p(x)$ is the probability of symbol $x$. A number of lossless coding methods were developed in literature and among them arithmetic coding is believed to be among the optimal ones. Given a probability distribution $p(x)$, arithmetic coding ensures that the coding rate is as close as possible to its theoretical limit $-\log_2 p(x)$ without considering the rounding error. Therefore, the remaining problem is how to determine the probability, which is however very challenging for natural image/video due to the curse of dimensionality. Following the predictive coding strategy, one way to model $p(x)$ is to predict pixel probabilities one by one in a raster scan order based on previous observations, where $x$ is an image:
$p(x) = p(x_1)\,p(x_2 \mid x_1)\cdots p(x_i \mid x_1, \ldots, x_{i-1})\cdots p(x_{m \times n} \mid x_1, \ldots, x_{m \times n - 1})$ (1)
where $m$ and $n$ are the height and width of the image, respectively. The previous observation is also known as the context of the current pixel. When the image is large, it can be difficult to estimate the conditional probability, thereby a simplified method is to limit the range of its context:
$p(x) = p(x_1)\,p(x_2 \mid x_1)\cdots p(x_i \mid x_{i-k}, \ldots, x_{i-1})\cdots p(x_{m \times n} \mid x_{m \times n - k}, \ldots, x_{m \times n - 1})$ (2)
where $k$ is a pre-defined constant controlling the range of the context. It should be noted that the condition may also take the sample values of other color components into consideration. For example, when coding the RGB color component, R sample is dependent on previously coded pixels (including R/G/B samples), the current G sample may be coded according to previously coded pixels and the current R sample, while for coding the current B sample, the previously coded pixels and the current R and G samples may also be taken into consideration. Neural networks were originally introduced for computer vision tasks and have been proven to be effective in regression and classification problems. Therefore, it has been proposed using neural networks to estimate the probability $p(x_i)$ given its context $x_1, x_2, \ldots, x_{i-1}$.
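As an illustration of the limited-context factorization in equation (2), the following toy sketch estimates per-pixel probabilities from the k previously scanned samples and accumulates the ideal code length; the simple Gaussian-around-the-context-mean predictor merely stands in for a learned neural network.

```python
# Illustrative sketch of limited-context pixel probability modeling: each
# pixel's distribution is predicted from the k previously scanned pixels, and
# the total rate is the sum of -log2 p(x_i) over the raster scan.
import numpy as np

def pixel_probability(context, pixel, num_levels=256):
    # Toy conditional model: a discretized Gaussian centred on the context mean.
    mean = context.mean() if context.size else num_levels / 2
    levels = np.arange(num_levels)
    weights = np.exp(-0.5 * ((levels - mean) / 16.0) ** 2)
    return weights[pixel] / weights.sum()

def estimate_rate_bits(image, k=8):
    flat = image.flatten()          # raster scan order
    bits = 0.0
    for i, pixel in enumerate(flat):
        context = flat[max(0, i - k):i]
        bits += -np.log2(pixel_probability(context, int(pixel)))
    return bits

rate = estimate_rate_bits(np.random.randint(0, 256, size=(8, 8)))
```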
Most of the methods directly model the probability distribution in the pixel domain. Some researchers also attempt to model the probability distribution as a conditional one upon explicit or latent representations. That being said, it may be estimated that
where ^^ is the additional condition and ^^(^^) = ^^(^^)^^(^^|^^), meaning the modeling is split into an unconditional one and a conditional one. The additional condition can be image label information or high-level representations. 2.3.2 Auto-encoder Auto-encoder originates from the well-known work proposed by Hinton and Salakhutdinov. The method is trained for dimensionality reduction and consists of two parts: encoding and decoding. The encoding part converts the high-dimension input signal to low-dimension representations, typically with reduced spatial size but a greater number of channels. The decoding part attempts to recover the high-dimension input from the low-dimension representation. Auto-encoder enables automated learning of representations and eliminates the need of hand-crafted features, which is also believed to be one of the most important advantages of neural networks. Fig.1B illustrates a typical transform coding scheme. The original image x is transformed by the analysis network gୟ to achieve the latent representation y. The latent representation y is quantized and compressed into bits. The number of bits R is used to measure the coding rate. The quantized latent representation y^ is then inversely transformed by a synthesis network g^ to obtain the reconstructed image x^. The distortion is calculated in a perceptual space by transforming x and x^ with the function g୮. It is intuitive to apply auto-encoder network to lossy image compression. It is only needed to encode the learned latent representation from the well-trained neural networks. However, it is not trivial to adapt auto-encoder to image compression since the original auto-encoder is not optimized for compression thereby not efficient by directly using a trained auto-encoder. In addition, there exist other major challenges: First, the low-dimension representation should be quantized before being encoded, but the quantization is not differentiable, which is required in backpropagation while training the neural networks. Second, the objective under compression scenario is different since both the distortion and the rate need to be take into consideration. Estimating the rate is challenging. Third, a practical image coding scheme needs to support variable rate, scalability, encoding/decoding speed, interoperability. In response to these challenges, a number of researchers have been actively contributing to this area. The prototype auto-encoder for image compression is in Fig. 1B, which can be regarded as a transform coding strategy. The original image ^^ is transformed with the analysis network ^^ = 9 F1251247PCT
^^^(^^), where ^^ is the latent representation which will be quantized and coded. The synthesis network will inversely transform the quantized latent representation ^ ^^ back to obtain the reconstructed image ^ ^^ = ^^^(^^^). The framework is trained with the rate-distortion loss function, i.e., ℒ = ^^ + ^^^^, where ^^ is the distortion between ^^ and ^ ^^, ^^ is the rate calculated or estimated from the quantized representation^ ^^, and ^^ is the Lagrange multiplier. It should be noted that ^^ can be calculated in either pixel domain or perceptual domain. All existing research works follow this prototype and the difference might only be the network structure or loss function. 2.3.3 Hyper prior model In the transform coding approach to image compression, the encoder subnetwork (section 2.3.2) transforms the image vector x using a parametric analysis transform ^^^(^^,∅^) into a latent representation ^^ , which is then quantized to form ^^^ . Because ^^^ is discrete-valued, it can be losslessly compressed using entropy coding techniques such as arithmetic coding and transmitted as a sequence of bits. As evident from the middle left and middle right image of Fig. 2, there are significant spatial dependencies among the elements of ^ ^^. Notably, their scales (middle right image) appear to be coupled spatially. An additional set of random variables ^^^ can be introduced to capture the spatial dependencies and to further reduce the redundancies. In this case the image compression network is depicted in Fig.3. In Fig 3, the left hand of the models is the encoder ^^^ and decoder ^^^ (explained in section 2.3.2). The right-hand side is the additional hyper encoder ℎ^ and hyper decoder ℎ^ networks that are used to obtain ^^^. In this architecture the encoder subjects the input image x to ^^^, yielding the responses ^^ with spatially varying standard deviations. The responses ^^ are fed into ℎ^ , summarizing the distribution of standard deviations in ^^. ^^ is then quantized (^^^), compressed, and transmitted as side information. The encoder then uses the quantized vector ^^^ to estimate ^^, the spatial distribution of standard deviations, and uses it to compress and transmit the quantized image representation^ ^^. The decoder first recovers ^^^ from the compressed signal. It then uses ℎ^ to obtain ^^, which provides it with the correct probability estimates to successfully recover^ ^^ as well. It then feeds^ ^^ into ^^^ to obtain the reconstructed image. When the hyper encoder and hyper decoder are added to the image compression network, the spatial redundancies of the quantized latent ^ ^^ are reduced. The rightmost image in Fig. 2 correspond to the quantized latent when hyper encoder/decoder are used. Compared to middle right image, the spatial redundancies are significantly reduced, as the samples of the quantized latent are less correlated. In Fig. 2: Left: an image from the Kodak dataset. Middle left: visualization of a latent 10 F1251247PCT
representation y of that image. Middle right: standard deviations ^^ of the latent. Right: latents y after the hyper prior (hyper encoder and decoder) network is introduced. Fig. 3 illustrates Network architecture of an autoencoder implementing the hyperprior model. The left side shows an image autoencoder network, the right side corresponds to the hyperprior subnetwork. The analysis and synthesis transforms are denoted as ^^^ and ^^^ . Q represents quantization, and AE, AD represent arithmetic encoder and arithmetic decoder, respectively. The hyperprior model consists of two subnetworks, hyper encoder (denoted with ℎ^ ) and hyper decoder (denoted with ℎ^). The hyper prior model generates a quantized hyper latent (^^^) which comprises information about the probability distribution of the samples of the quantized latent ^^^. ^^^ is included in the bitsteam and transmitted to the receiver (decoder) along with ^^^. 2.3.4 Context model Although the hyper prior model improves the modelling of the probability distribution of the quantized latent ^ ^^, additional improvement can be obtained by utilizing an autoregressive model that predicts quantized latents from their causal context (Context Model). The term auto-regressive means that the output of a process is later used as input to it. For example the context model subnetwork generates one sample of a latent, which is later used as input to obtain the next sample. Fig. 4 is a schematic diagram illustrating an example combined model configured to jointly optimize a context model along with a hyperprior and the autoencoder. The following table illustrates meaning of different symbols. Table – Illustration of symbols
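Although the table itself is not reproduced here, the symbols it illustrates can be summarized from the surrounding description approximately as follows: x – the input image; x̂ – the reconstructed image; y – the latent representation produced by the analysis transform; ŷ – the quantized latent; z – the hyper latent produced by the hyper encoder; ẑ – the quantized hyper latent transmitted as side information; Q – quantization; AE, AD – the arithmetic encoder and arithmetic decoder; μ, σ – the mean and scale (variance) parameters of the Gaussian probability model.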
A joint architecture can be utilized where both hyper prior model subnetwork (hyper encoder and hyper decoder) and a context model subnetwork are utilized. The hyper prior and the context model are combined to learn a probabilistic model over quantized latents ^^^, which is then used for entropy coding. As depicted in Fig. 4, the outputs of context subnetwork and hyper decoder subnetwork are combined by the subnetwork called Entropy Parameters, which generates the mean ^^ and scale (or variance) ^^ parameters for a Gaussian probability model. The gaussian probability model is then used to encode the samples of the quantized latents into bitstream with the help of the arithmetic encoder (AE) module. In the decoder the gaussian probability model is utilized to obtain the quantized latents ^ ^^ from the bitstream by arithmetic decoder (AD) module. Fig 4 illustrates the combined model jointly optimizes an autoregressive component that estimates the probability distributions of latents from their causal context (Context Model) along with a hyperprior and the underlying autoencoder. Real-valued latent representations are quantized (Q) to create quantized latents (^^^) and quantized hyper-latents (^^^ ), which are compressed into a bitstream using an arithmetic encoder (AE) and decompressed by an arithmetic decoder (AD). The highlighted region corresponds to the components that are executed by the receiver (i.e. a decoder) to recover an image from a compressed bitstream. Typically the latent samples are modeled as gaussian distribution or gaussian mixture models (not limited to). According to Fig. 4, the context model and hyper prior are jointly used to estimate the probability distribution of the latent samples. Since a gaussian distribution can be defined by a mean and a variance (aka sigma or scale), the joint model is used to estimate the mean and variance (denoted as ^^ and ^^). 2.3.5 Gained variational autoencoders (G-VAE) Typically, neural network-based image/video compression methodologies need to train multiple models to adapt to different rates. Gained variational autoencoders (G-VAE) is the variational autoencoder with a pair of gain units , which is designed to achieve continuously variable rate adaptation using a single model. It comprises of a pair of gain units, which are typically inserted to the output of encoder and input of decoder. The output of the encoder is defined as the latent representation ^^ ∈ ^^^∗^∗௪, where ^^, ℎ,^^ represent the number of channels, the height and width of the latent representation. Each channel of the latent representation is denoted as ^^ ^∗௪ (^) ∈ ^^ , where ^^ = 0, 1, … , ^^ − 1. A pair of gain units include a gain matrix ^^ ∈ ^^^∗^ and an inverse gain matrix, where ^^ is the number of gain vectors. The gain vector can be denoted as ^^^ =
$\{m_n(0), m_n(1), \ldots, m_n(c-1)\}$, where $n$ denotes the index of the gain vectors in the gain matrix.
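To make the gain-unit mechanism concrete, a small sketch follows. The channel-wise multiplication and the inverse gain mirror the description above, whereas the reciprocal inverse gain and the exponential interpolation between two gain vectors are assumptions introduced only for this example, since the exact formulas are not reproduced in the text above.

```python
# Minimal sketch of gain units for continuously variable rate (illustrative).
import numpy as np

def apply_gain(y, gain_vector):
    # y: latent of shape (c, h, w); gain_vector: shape (c,), applied channel-wise.
    return y * gain_vector[:, None, None]

def apply_inverse_gain(y_hat, inv_gain_vector):
    return y_hat * inv_gain_vector[:, None, None]

def interpolate_gain(gain_a, gain_b, l):
    # l in [0, 1] selects a rate between the two trained gain vectors;
    # exponential interpolation is one common choice and is assumed here.
    return (gain_a ** l) * (gain_b ** (1.0 - l))

c, h, w = 4, 8, 8
y = np.random.randn(c, h, w).astype(np.float32)
m = np.array([1.5, 1.2, 0.9, 0.7], dtype=np.float32)   # gain vector
m_inv = 1.0 / m                                         # inverse gain vector (assumed reciprocal)
y_hat = np.round(apply_gain(y, m))                      # gained then quantized latent
y_rec = apply_inverse_gain(y_hat, m_inv)                # fed to the synthesis network
```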
The motivation of gain matrix is similar to the quantization table in JPEG by controlling the quantization loss based on the characteristics of different channels. To apply the gain matrix to the latent representation, each channel is multiplied with the corresponding value in a gain vector. ^ത^^ = ^^ ^ ^^^ where ^ is channel-wise multiplication, i.e., ^ത^^(^) = ^^(^) × ^^^(^), and ^^^(^) is the ^^-th gain value in the gain vector ^^ . The inverse gain matrix used at the decoder side ca ^∗^ ^ n be denoted as ^^′ ∈ ^^ , which consists of ^^ inverse gain vectors, i.e., ^^ᇱ
. The inverse gain process is expressed as: ^^′ ᇱ ^ = ^^^ ^ ^^^ where ^^^ is the decoded quantized latent representation and ^^^ ᇱ is the inversely gained quantized latent representation, which will be fed into the synthesis network. To achieve continuous variable rate adjustment, interpolation is used between vectors. Given two pairs of gain vectors {^^௧ ,^^ᇱ ௧} and {^^^ ,^^ᇱ ^ }, the interpolated gain vector can be obtained via the following equations.
where ^^ ∈ ^^ is an interpolation coefficient, which controls the corresponding bit rate of the generated gain vector pair. Since ^^ is a real number, an arbitrary bit rate between the given two gain vector pairs can be achieved. 2.3.6 The encoding process using joint auto-regressive hyper prior model The fig 4. corresponds to the state of the art compression method. In this section and the next, the encoding and decoding processes will be described separately. The Fig. 5 depicts the encoding process. The input image is first processed with an encoder subnetwork. The encoder transforms the input image into a transformed representation called latent, denoted by ^^. ^^ is then input to a quantizer block, denoted by Q, to obtain the quantized latent (^^^ ).^ ^^ is then converted to a bitstream (bits1) using an arithmetic encoding module (denoted AE). The arithmetic encoding block converts each sample of the ^^^ into a bitstream (bits1) one by one, in a sequential order. The modules hyper encoder, context, hyper decoder, and entropy parameters subnetworks are used to estimate the probability distributions of the samples of the quantized latent^ ^^. the latent ^^ is input to hyper encoder, which outputs the hyper latent (denoted by ^^). The hyper latent is then quantized (^^^) and a second bitstream (bits2) is generated using arithmetic encoding (AE) module. The factorized entropy module generates the probability distribution, that is used to encode the quantized hyper latent into bitstream. The quantized hyper latent includes information about the 13 F1251247PCT
probability distribution of the quantized latent (^^^). The Entropy Parameters subnetwork generates the probability distribution estimations, that are used to encode the quantized latent^ ^^. The information that is generated by the Entropy Parameters typically include a mean ^^ and scale (or variance) ^^ parameters, that are together used to obtain a gaussian probability distribution. A gaussian distribution of a random variable x is defined as భ ^షഋ మ ^^(^^) = ^ ^^ିమ^ ^ ^ wherein the parameter ^^ is the mean or expectation of the distribution (and also its median and mode), while the parameter ^^ is its standard deviation (or variance, or scale). In order to define a gaussian distribution, the mean and the variance need to be determined. In an existing design, the entropy parameters module are used to estimate the mean and the variance values. The subnetwork hyper decoder generates part of the information that is used by the entropy parameters subnetwork, the other part of the information is generated by the autoregressive module called context module. The context module generates information about the probability distribution of a sample of the quantized latent, using the samples that are already encoded by the arithmetic encoding (AE) module. The quantized latent ^ ^^ is typically a matrix composed of many samples. The samples can be indicated using indices, such as ^ ^^[i,j,k] or^ ^^[i,j] depending on the dimensions of the matrix^ ^^. The samples ^ ^^[i,j] are encoded by AE one by one, typically using a raster scan order. In a raster scan order the rows of a matrix are processed from top to bottom, wherein the samples in a row are processed from left to right. In such a scenario (wherein the raster scan order is used by the AE to encode the samples into bitstream), the context module generates the information pertaining to a sample^ ^^[i,j], using the samples encoded before, in raster scan order. The information generated by the context module and the hyper decoder are combined by the entropy parameters module to generate the probability distributions that are used to encode the quantized latent^ ^^ into bitstream (bits1). Finally the first and the second bitstream are transmitted to the decoder as result of the encoding process. It is noted that the other names can be used for the modules described above. In the above description, the all of the elements in Fig. 5 are collectively called encoder. The analysis transform that converts the input image into latent representation is also called an encoder (or auto-encoder). 2.3.7 The decoding process using joint auto-regressive hyper prior model The Fig.6 depicts the decoding process separately. In the decoding process, the decoder first receives the first bitstream (bits1) and the second bitstream (bits2) that are generated by a corresponding encoder. The bits2 is first decoded by the 14 F1251247PCT
2.3.7 The decoding process using joint auto-regressive hyper prior model
Fig. 6 depicts the decoding process. In the decoding process, the decoder first receives the first bitstream (bits1) and the second bitstream (bits2) that were generated by a corresponding encoder. The bits2 is first decoded by the arithmetic decoding (AD) module by utilizing the probability distributions generated by the factorized entropy subnetwork. The factorized entropy module typically generates the probability distributions using a predetermined template, for example using predetermined mean and variance values in the case of a Gaussian distribution. The output of the arithmetic decoding process applied to bits2 is ẑ, which is the quantized hyper latent. The AD process reverses the AE process that was applied in the encoder. The AE and AD processes are lossless, meaning that the quantized hyper latent ẑ that was generated by the encoder can be reconstructed at the decoder without any change. After ẑ is obtained, it is processed by the hyper decoder, whose output is fed to the entropy parameters module. The three subnetworks, context, hyper decoder and entropy parameters, that are employed in the decoder are identical to the ones in the encoder. Therefore, exactly the same probability distributions can be obtained in the decoder as in the encoder, which is essential for reconstructing the quantized latent ŷ without any loss. As a result, the identical version of the quantized latent ŷ that was obtained in the encoder can be obtained in the decoder. After the probability distributions (e.g. the mean and variance parameters) are obtained by the entropy parameters subnetwork, the arithmetic decoding module decodes the samples of the quantized latent one by one from the bitstream bits1. From a practical standpoint, the autoregressive model (the context model) is inherently serial and therefore cannot be sped up using techniques such as parallelization. Finally, the fully reconstructed quantized latent ŷ is input to the synthesis transform (denoted as decoder in Fig. 6) to obtain the reconstructed image. In the above description, all of the elements in Fig. 6 are collectively called the decoder. The synthesis transform that converts the quantized latent into the reconstructed image is also called a decoder (or auto-decoder).
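The serial nature of the context model can be illustrated with the following runnable sketch, in which each sample of the quantized latent is reconstructed in raster scan order and the parameters for sample (i, j) may depend on all previously decoded samples. The toy predict_parameters function and the decode_sample placeholder are illustrative stand-ins for the context/entropy-parameters subnetworks and the arithmetic decoder, not the actual networks.

```python
import numpy as np

def predict_parameters(y_hat, i, j):
    # Toy context model: predict the mean from the already-decoded left and
    # top neighbours; the fixed scale is used purely for illustration.
    left = y_hat[i, j - 1] if j > 0 else 0.0
    top = y_hat[i - 1, j] if i > 0 else 0.0
    return 0.5 * (left + top), 1.0

def decode_latent(decode_sample, height, width):
    # Sequential (raster scan) reconstruction: sample (i, j) can only be decoded
    # after all samples before it, which is why the context model cannot be
    # parallelized across samples.
    y_hat = np.zeros((height, width), dtype=np.float32)
    for i in range(height):
        for j in range(width):
            mean, scale = predict_parameters(y_hat, i, j)
            y_hat[i, j] = decode_sample(mean, scale)  # would read bits1 in a real decoder
    return y_hat

# Toy "arithmetic decoder" that simply returns the rounded predicted mean.
print(decode_latent(lambda mean, scale: round(mean), 4, 4))
```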
2.4 Neural networks for video compression
Similar to conventional video coding technologies, neural image compression serves as the foundation of intra compression in neural network-based video compression. The development of neural network-based video compression technology therefore comes later than that of neural network-based image compression, but it needs far more effort to solve the challenges due to its complexity. Starting from 2017, a few researchers have been working on neural network-based video compression schemes. Compared with image compression, video compression needs efficient methods to remove inter-picture redundancy. Inter-picture prediction is therefore a crucial step in these works. Motion estimation and compensation is widely adopted but was not implemented by trained neural networks until recently. Studies on neural network-based video compression can be divided into two categories according to the targeted scenarios: random access and low latency. The random access case requires that decoding can be started from any point of the sequence; it typically divides the entire sequence into multiple individual segments, and each segment can be decoded independently. The low-latency case aims at reducing decoding time; usually only temporally previous frames can be used as reference frames to decode subsequent frames.
2.5 Preliminaries
Almost all natural images/videos are in digital format. A grayscale digital image can be represented by x ∈ D^(H×W), where D is the set of values of a pixel, H is the image height and W is the image width. For example, D = {0, 1, 2, …, 255} is a common setting, and in this case |D| = 256 = 2^8, thus a pixel can be represented by an 8-bit integer. An uncompressed grayscale digital image has 8 bits per pixel (bpp), while compressed bits are definitely fewer. A color image is typically represented in multiple channels to record the color information. For example, in the RGB color space an image can be denoted by x ∈ D^(H×W×3), with three separate channels storing Red, Green and Blue information. Similar to the 8-bit grayscale image, an uncompressed 8-bit RGB image has 24 bpp. Digital images/videos can be represented in different color spaces. The neural network-based video compression schemes are mostly developed in the RGB color space, while the traditional codecs typically use the YUV color space to represent the video sequences. In the YUV color space, an image is decomposed into three channels, namely Y, Cb and Cr, where Y is the luminance component and Cb/Cr are the chroma components. The benefit comes from the fact that Cb and Cr are typically downsampled to achieve pre-compression, since the human visual system is less sensitive to the chroma components. A color video sequence is composed of multiple color images, called frames, which record scenes at different timestamps. For example, in the RGB color space, a color video can be denoted by X = {x_0, …, x_t, …, x_(T−1)}, where T is the number of frames in this video sequence and x_t ∈ D^(H×W×3). If H = 1080, W = 1920, |D| = 2^8, and the video has 50 frames per second (fps), then the data rate of this uncompressed video is 1920 × 1080 × 8 × 3 × 50 = 2,488,320,000 bits per second (bps), about 2.32 Gbps, which needs a lot of storage and therefore definitely needs to be compressed before transmission over the internet. Usually, lossless methods can achieve a compression ratio of about 1.5 to 3 for natural images, which is clearly below the requirement. Therefore, lossy compression is developed to achieve a further compression ratio, but at the cost of incurred distortion. The distortion can be measured by calculating the average squared difference between the original image and the reconstructed image, i.e., the mean squared error (MSE). For a grayscale image, the MSE can be calculated with the following equation:
MSE = ‖x − x̂‖² / (H × W)    (4)
Accordingly, the quality of the reconstructed image compared with the original image can be measured by the peak signal-to-noise ratio (PSNR):
PSNR = 10 · log10( max(D)² / MSE ),
where max(D) is the maximal value in D, e.g., 255 for 8-bit grayscale images.
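As an illustration of the two metrics, the following sketch computes MSE and PSNR for a toy pair of images (the 2×2 arrays are arbitrary example values):

```python
import numpy as np

def mse(x, x_rec):
    # Mean squared error between the original and the reconstructed image.
    return np.mean((x.astype(np.float64) - x_rec.astype(np.float64)) ** 2)

def psnr(x, x_rec, max_value=255.0):
    # Peak signal-to-noise ratio in dB; max_value is 255 for 8-bit content.
    return 10.0 * np.log10(max_value ** 2 / mse(x, x_rec))

x = np.array([[52, 55], [61, 59]], dtype=np.uint8)       # toy 2x2 "image"
x_rec = np.array([[50, 56], [60, 58]], dtype=np.uint8)   # toy reconstruction
print(mse(x, x_rec), psnr(x, x_rec))
```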
There are other quality evaluation metrics such as structural similarity (SSIM) and multi-scale SSIM (MS-SSIM). To compare different lossless compression schemes, it is sufficient to compare the compression ratios (or, equivalently, the resulting rates). However, to compare different lossy compression methods, both the rate and the reconstructed quality have to be taken into account. For example, calculating the relative rates at several different quality levels and then averaging the rates is a commonly adopted method; the average relative rate is known as Bjontegaard's delta-rate (BD-rate). There are other important aspects to evaluate image/video coding schemes, including encoding/decoding complexity, scalability, robustness, and so on.
2.6 Separate processing of luma and chroma components of an image
Fig. 7 illustrates the decoding process according to some example embodiments of the present disclosure. According to one implementation, the luma and chroma components of an image can be decoded using separate subnetworks. In Fig. 7, the luma component of the image is processed by the subnetworks "Synthesis", "Prediction fusion", "Mask Conv", "Hyper Decoder", "Hyper scale decoder", etc., whereas the chroma components are processed by the subnetworks "Synthesis UV", "Prediction fusion UV", "Mask Conv UV", "Hyper Decoder UV", "Hyper scale decoder UV", etc. A benefit of the above separate processing is that the computational complexity of the processing of an image is reduced. Typically, in neural network based image and video decoding, the computational complexity is proportional to the square of the number of feature maps. If the total number of feature maps is equal to 192, for example, the computational complexity is proportional to 192 × 192. On the other hand, if the feature maps are divided into 128 for luma and 64 for chroma (in the case of separate processing), the computational complexity is proportional to 128 × 128 + 64 × 64, which corresponds to a reduction in complexity of about 45%. Typically, the separate processing of the luma and chroma components of an image does not result in a prohibitive reduction in performance, as the correlation between the luma and chroma components is typically very small. The processing (decoding process) in Fig. 7 can be explained as follows:
1. Firstly, the factorized entropy model is used to decode the quantized latents for luma and chroma, i.e., ^^^ and ^^^^^^^ in Fig.7. 2. The probability parameters (e.g. variance) generated by the second network are used to generate a quantized residual latent by performing the arithmetic decoding process. 3. The quantized residual latent is inversely gained with the inverse gain unit (iGain) as shown in orange color in Fig. 7. The outputs of the inverse gain units are denoted as ^^^ and ^^^^^^^ for luma and chroma components, respectively. 4. For the luma component, the following steps are performed in a loop until all elements of ^ ^^ are obtained: a. A first subnetwork is used to estimate a mean value parameter of a quantized latent (^^^), using the already obtained samples of ^^^. b. The quantized residual latent ^^^ and the mean value are used to obtain the next element of ^ ^^. 5. After all of the samples of ^ ^^ are obtained, a synthesis transform can be applied to obtain the reconstructed image. 6. For chroma component, step 4 and 5 are the same but with a separate set of networks. 7. The decoded luma component is used as additional information to obtain the chroma com- ponent. Specifically, the Inter Channel Correlation Information filter sub-network (ICCI) is used for chroma component restoration. The luma is fed into the ICCI sub-network as additional information to assist the chroma component decoding. 8. Adaptive color transform (ACT) is performed after the luma and chroma components are reconstructed. The module named ICCI is a neural-network based postprocessing module. The example embodiments of the present disclosure are not limited to the UCCI subnetwork, any other neural network based postprocessing module might also be used. An exemplary implementation of some example embodiments of the present disclosure is depicted in Fig. 7 (the decoding process). The framework comprises two branches for luma and chroma components respectively. In each of the branch, the first subnetwork comprises the context, prediction and optionally the hyper decoder modules. The second network comprises the hyper scale decoder module. The quantized hyper latent are ^^^ and ^^^^^^^. The arithmetic decoding process generates the quantized residual latents, which are further fed into the iGain units to obtain the gained quantized residual latents ^^^ and ^^^^^^^. After the residual latent is obtained, a recursive prediction operation is performed to obtain the latent ^ ^^ and ^ ^^^^^^. The following steps describe how to obtain the samples of latent ^ ^^[: , ^^, ^^], and 18 F1251247PCT
the chroma component is processed in the same way but with different networks. 1. An autoregressive context module is used to generate first input of a prediction module using the samples ^ ^^[: , ^^,^^] where the (m, n) pair are the indices of the samples of the latent that are already obtained. 2. Optionally the second input of the prediction module is obtained by using a hyper decoder and a quantized hyper latent^ ^^^^. 3. Using the first input and the second input, the prediction module generates the mean value ^^^^^^^^[: , ^^, ^^]. 4. The mean value ^^^^^^^^[: , ^^, ^^] and the quantized residual latent ^^^ [: , ^^, ^^]are added together to obtain the latent ^ ^^[: , ^^, ^^]. 5. The steps 1-4 are repeated for the next sample. Whether to and/or how to apply at least one method disclosed in the document may be signaled from the encoder to the decoder, e.g. in the bitstream. Alternatively, whether to and/or how to apply at least one method disclosed in the document may be determined by the decoder based on coding information, such as dimensions, color format, etc. Alternative or additionally, the modules named MS1, MS2 or MS3+O (in Fig. 7), might be included in the processing flow. The said modules might perform an operation to their input by multiplying the input with a scalar or adding an adding an additive component to the input to obtain the output. The scalar or the additive component that are used by the said modules might be indicated in a bitstream. The module named RD or the module named AD in Fig.7 might be an entropy decoding module. It might be a range decoder or an arithmetic decoder or the like. The example embodiments of the present disclosure described herein are not limited to the specific combination of the units exemplified in Fig.7. Some of the modules might be missing and some of the modules might be displaced in processing order. Also additional modules might be included. For example: 1. The ICCI module might be removed. In that case the output of the Synthesis module and the Synthesis UV module might be combined by means of another module, that might be based on neural networks. 2. One or more of the modules named MS1, MS2 or MS3+O might be removed. The core of the proposed solution is not affected by the removing of one or more of the said scaling and adding modules. In Fig. 7, other operations that are performed during the processing of the luma and chroma components are also indicated using the star symbol. These processes are denoted as MS1, MS2, 19 F1251247PCT
MS3+O. These processing might be, but not limited to, adaptive quantization, latent sample scaling, and latent sample offsetting operations. For example, in an adaptive quantization process might correspond to scaling of a sample with multiplier before the prediction process, wherein the multiplier is predefined or whose value is indicated in the bitstream. The latent scaling process might correspond to the process where a sample is scaled with a multiplier after the prediction process, wherein the value of the multiplier is either predefined or indicated in the bitstream. The offsetting operation might correspond to adding an additive element to the sample, again wherein the value of the additive element might be indicated in the bitstream or inferred or predetermined. Another operation might be tiling operation, wherein samples are first tiled (grouped) into overlapping or non-overlapping regions, wherein each region is processed independently. For example the samples corresponding to the luma component might be divided into tiles with a tile height of 20 samples, whereas the chroma components might be divided into tiles with a tile height of 10 samples for processing. Another operation might be application of wavefront parallel processing. In wavefront parallel processing, a number of samples might be processed in parallel, and the amount of samples that can be processed in parallel might be indicated by a control parameter. The said control parameter might be indicated in the bitstream, be inferred, or can be predetermined. In the case of separate luma and chroma processing, the number of samples that can be processed in parallel might be different, hence different indicators can be signalled in the bitstream to control the operation of luma and chrome processing separately. 2.7 Colors separation and conditional coding In one example the primary and secondary color components of an image are coded separately, using networks with similar architecture, but different number of channels as shown in 8. All boxes with same names are sub-networks with the similar architecture, only input-output tensor size and number of channels are different. Number of channels for primary component is ^^^ = 128, for secondary components is ^^^ = 64. The vertical arrows (with arrowhead pointing downwards) indicate data flow related to secondary color components coding. Vertical arrows show data exchange between primary and secondary components pipelines. The input signal to be encoded is notated as ^^, latent space tensor in bottleneck of variational auto- encoder is ^^. Subscript “Y” indicates primary component, subscript “UV” is used for concatenated secondary components, there are chroma components. Fig.8 illustrates learning-based image codec architecture. First the input image that has RGB color format is converted to primary (Y) and secondary components(UV). The primary component ^^^ is coded independently from secondary components 20 F1251247PCT
^^^^ and the coded picture size is equal to input/decoded picture size. The secondary components are coded conditionally, using ^^^ as auxiliary information from primary component for encoding ^^^^ and using ^^^^ as a latent tensor with auxiliary information from primary component for decoding ^^^^^ reconstruction. The codec structure for primary component and secondary components are almost identical except the number of channels, size of the channels and the several entropy models for transforming latent tensor to bitstream, therefore primary and secondary latent tensor will generate two different bitstream based on two different entropy models. Prior to the encoding ^^^, ^^^^ goes through a module which adjusts the sample location by down- sampling (marked as “s↓” on Fig.1), this essentially means that coded picture size for secondary component is different from the coded picture size for primary component. The scaling factor s is variable, but the default scaling factor is ^^ = 2. The size of auxiliary input tensor in conditional coding is adjusted in order the encoder receives primary and secondary components tensor with the same picture size. After reconstruction, the secondary component is rescaled to the original picture size with a neural-network based upsampling filter module (“NN-color filter s↑” on Fig. 1), which outputs secondary components up-sampled with factor ^^. The example in Fig. 8 exemplifies an image coding system, where the input image is first transformed into primary (Y) and secondary components (UV). The outputs ^^^^ , ^^^^^ are the reconstructed outputs corresponding to the primary and secondary components. At the and of the processing, ^^^^, ^^^^^ are converted back to RGB color format. Typically the ^^^^ is downsampled (resized) before processing with the encoding and decoding modules (neural networks). For example the size of the ^^^^ might be reduced by a factor of 50% in each of the vertical and horizontal dimensions. Therefore the processing of the secondary component includes approximately 50% x 50% = 25% less samples, therefore it is computationally less complex. 2.8 Cropping operation in neural network based coding Fig. 9 illustrates synthesis transform example for learning based image coding. The example synthesis transform above includes a sequence of 4 convolutions with up-sampling with stride of 2. The synthesis transform sub-Net is depicted on Fig.9. The size of the tensor in different parts of synthesis transform before cropping layer is the diagram on Fig.9. The cropping layer changes tensor size ℎௗ × ^^ௗ to ℎௗି^ × ^^ௗି^ , where ℎௗ = 2 ∙ ^^^^^^^^(^^/ 2ௗ); ^^ௗ = 2 ∙ ^^^^^^^^(^^/2ௗ) ; here ^^ is the depth of proceeding convolution in the codec architecture. For primary component Synthesis Transform receives input tensor with sizeℎ × ^^; ℎ = ^^^^^^^^(^^⁄ 16 );^^ = ^^^^^^^^(^^⁄ 16 ). The output of Synthesis Transform for primary component is 1 × ℎ^ × ^^^ , where ℎ^ = ^^;ℎ^ = ^^. For secondary component Synthesis Transform receives input tensor with size ℎ^^ × ^^^^ ; 21 F1251247PCT
h_uv = Ceil(Ceil(H ⁄ s) ⁄ 16); w_uv = Ceil(Ceil(W ⁄ s) ⁄ 16). The output of the Synthesis Transform for the secondary components is 2 × h_uv1 × w_uv1, where h_uv1 = Ceil(H ⁄ s) and w_uv1 = Ceil(W ⁄ s). For the secondary components, the corresponding sizes are h = Ceil(H ⁄ s) and w = Ceil(W ⁄ s), where s is the scale factor. The scale factor might be 2, for example, in which case the secondary component is downsampled by a factor of 2. Based on the above explanation, the operation of the cropping layers depends on the output size H, W and on the depth of the cropping layer. The depth of the left-most cropping layer in Fig. 9 is equal to 0. The output of this cropping layer must be equal to H, W (the output size); if the size of the input of this cropping layer is greater than H or W in the horizontal or vertical dimension respectively, cropping needs to be performed in that dimension. The second cropping layer, counting from left to right, has a depth of 1. The output of the second cropping layer must be equal to h_1 = 2 · Ceil(H ⁄ 2^1) and w_1 = 2 · Ceil(W ⁄ 2^1), which means that if the input of this second cropping layer is greater than h_1 or w_1 in any dimension, then cropping is applied in that dimension. In summary, the operation of the cropping layers is controlled by the output size H, W. In one example, if H and W are both equal to 16, then the cropping layers do not perform any cropping. On the other hand, if H and W are both equal to 17, then all 4 cropping layers are going to perform cropping.
2.9 Bitwise shifting
The bitwise shift operator can be represented using the function BitShift(x, n), where n is an integer number. If n is greater than 0, it corresponds to the right-shift operator (>>), which moves the bits of the input to the right; otherwise it corresponds to the left-shift operator (<<), which moves the bits to the left. In other words, the BitShift(x, n) operation corresponds to:
BitShift(x, n) = x ∗ 2^(−n), or BitShift(x, n) = Floor(x ∗ 2^(−n)), or BitShift(x, n) = x // 2^n.
The output of the bitshift operation is an integer value. In some implementations, the floor() function might be added to the definition. Floor(x) is equal to the largest integer less than or equal to x. The "//" operator, or integer division operator, is an operation that comprises division and truncation of the result toward zero. For example, 7 / 4 and −7 / −4 are truncated to 1, and −7 / 4 and 7 / −4 are truncated to −1.
RightShift(x, y) = x ≫ y or LeftShift(x, y) = x ≪ y
Equation 3: alternative implementation of the bitshift operator as a right shift or a left shift.
x >> y: Arithmetic right shift of a two's complement integer representation of x by y binary digits. This function is defined only for non-negative integer values of y. Bits shifted into the most significant bits (MSBs) as a result of the right shift have a value equal to the MSB of x prior to the shift operation.
x << y: Arithmetic left shift of a two's complement integer representation of x by y binary digits. This function is defined only for non-negative integer values of y. Bits shifted into the least significant bits (LSBs) as a result of the left shift have a value equal to 0.
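A minimal sketch of these conventions is given below; it assumes, as described above, that a positive n denotes a right shift and that "//" truncates toward zero (note that this differs from Python's built-in // operator, which floors toward minus infinity):

```python
def int_div(a, b):
    # "//" as defined above: division with truncation of the result toward zero.
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b >= 0) else -q

def bit_shift(x, n):
    # n > 0: right shift by n binary digits; n < 0: left shift by -n digits.
    # For negative x, the arithmetic >> operator floors instead of truncating.
    if n >= 0:
        return int_div(x, 2 ** n)
    return x * (2 ** (-n))

print(int_div(7, 4), int_div(-7, 4))       # 1, -1
print(bit_shift(20, 2), bit_shift(5, -3))  # 5, 40
print(20 >> 2, 5 << 3)                     # built-in operators for non-negative x
```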
2.10 Convolution operation
The convolution is a fairly simple operation at heart: you start with a kernel, which is simply a small matrix of weights. This kernel "slides" over the input data, performing an elementwise multiplication with the part of the input it is currently on, and then summing the results into a single output pixel. In some cases the convolution operation might comprise a "bias", which is added to the output of the elementwise multiplication operation. The convolution operation might be described by the following mathematical formula. An output out1 can be obtained as:
out1[x, y] = K1 + Σ_k Σ_{i=0..N−1} Σ_{j=0..P−1} w1[k, i, j] · x_k[x + i, y + j],
wherein w1 are the multiplication factors, K1 is called a bias (an additive term), x_k is the k-th input, N is the kernel size in one direction and P is the kernel size in the other direction. The convolution layer might consist of convolution operations wherein more than one output is generated. An equivalent depiction of the convolution operation, written with an explicit output (channel) index, is:
out[c, x, y] = K[c] + Σ_k Σ_i Σ_j w[c, k, i, j] · x[k, x + i, y + j].
In the above equations, "c" indicates the channel number; it is equivalent to the output number, so that out[1, x, y] is one output and out[2, x, y] is a second output. The index k is the input number, so that x[1, x, y] is one input and x[2, x, y] is a second input. The w1, or w, describe the weights of the convolution operation.
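The per-channel formulation above can be made concrete with the following minimal sketch; it implements a plain "valid" convolution (no padding, stride 1), which is an illustrative simplification rather than the exact layer configuration used in the codec:

```python
import numpy as np

def conv2d(x, w, bias):
    # x: (K, H, W) input; w: (C, K, N, P) weights; bias: (C,) additive terms.
    K, H, W = x.shape
    C, _, N, P = w.shape
    out = np.zeros((C, H - N + 1, W - P + 1))
    for c in range(C):
        for oy in range(out.shape[1]):
            for ox in range(out.shape[2]):
                # Elementwise multiplication of the kernel with the current
                # input patch, summed into a single output sample, plus bias.
                out[c, oy, ox] = np.sum(w[c] * x[:, oy:oy + N, ox:ox + P]) + bias[c]
    return out

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)   # 2 input channels
w = np.ones((3, 2, 3, 3))                                 # 3 output channels, 3x3 kernel
print(conv2d(x, w, bias=np.zeros(3)).shape)               # (3, 2, 2)
```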
2.11 Leaky_Relu activation function
The leaky ReLU activation function is depicted in Fig. 10. According to the function, if the input is a positive value, the output is equal to the input. If the input (y) is a negative value, the output is equal to a·y. The value a is typically (but not necessarily) smaller than 1 and greater than 0. Since the multiplier a is smaller than 1, it can be implemented either as a multiplication with a non-integer number or with a division operation. The multiplier a might be called the negative slope of the leaky ReLU function.
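A one-line illustration of the leaky ReLU as described above is given below; the default negative slope of 0.01 is merely an example value:

```python
import numpy as np

def leaky_relu(y, negative_slope=0.01):
    # Identity for positive inputs, a * y for negative inputs, with 0 < a < 1.
    return np.where(y > 0, y, negative_slope * y)

print(leaky_relu(np.array([-4.0, -1.0, 0.0, 2.5])))   # [-0.04 -0.01  0.    2.5 ]
```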
2.12 Relu activation function
The ReLU activation function is depicted in Fig. 11. According to the function, if the input is a positive value, the output is equal to the input. If the input (y) is a negative value, the output is equal to 0.
2.13 Model header
independent_beta_uv is a flag (false/true). independent_beta_uv equal to true indicates that the beta displacement parameters for the primary and secondary components are different. If independent_beta_uv is equal to false, then the beta displacement parameter for the primary and secondary components is the same.
beta_displacement_log_plus_2048[comp] minus 2048 is a parameter indicating a displacement between the rate control parameter beta selected by the encoder for component comp and the reference rate control parameter beta associated with the index of the used model (model_id). The displacement is in logarithmic scale.
betaDisplacementLog[comp] = beta_displacement_log_plus_2048[comp] − 2^11
NOTE – the reference rate control parameter beta (β), mentioned here, is the parameter used in the model training to control the ratio between the bitrate and distortion. "The model" here means the model associated with the index of the used model (model_id). When the syntax element beta_displacement_log_plus_2048[1] is not present (independent_beta_uv is equal to 0), betaDisplacementLog[1] = betaDisplacementLog[0].
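The derivation of betaDisplacementLog described above can be summarized by the following sketch; the helper name and the list-based container are illustrative, not normative syntax:

```python
def derive_beta_displacement_log(beta_displacement_log_plus_2048, independent_beta_uv):
    # beta_displacement_log_plus_2048: list with the value for the primary
    # component and, when present, the secondary component.
    disp = [beta_displacement_log_plus_2048[0] - 2048]
    if independent_beta_uv:
        disp.append(beta_displacement_log_plus_2048[1] - 2048)
    else:
        # The secondary component reuses the primary displacement.
        disp.append(disp[0])
    return disp

print(derive_beta_displacement_log([2048 + 37], independent_beta_uv=False))          # [37, 37]
print(derive_beta_displacement_log([2048 + 37, 2048 - 5], independent_beta_uv=True))  # [37, -5]
```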
2.14 Variable Rate Support
2.14.1 General
This section 2.14 describes operations which are used for variable rate coding.
2.14.2 Control parameters and gain tensor derivation
In total, four models (defined by model_id = 0…3) are trained for different ranges of quality. The pre-trained model is selected based on model_id, a syntax element coded in the picture header. The learnable model includes the reference forward gain vector for each component comp in logarithmic scale, mlog[model_id][comp][C], comprising 12-bit signed values.
NOTE – Each model_id is associated with the reference rate control parameter β, which was used in the model training to control the ratio between the bitrate and distortion.
The forward gain tensor m is used at the encoder side in the Gain Unit. The forward gain tensor in logarithmic scale mlog is used in the Sigma Scale. The inverse gain tensor m⁻¹ is used at the decoder side in the Inverse Gain Unit. All three gain tensors (forward m, inverse m⁻¹ and logarithmic-domain mlog) have size [C, h4, w4], equal to the size of the residual tensor for the corresponding component. The component index comp is equal to 0 for the primary color component and 1 for the secondary component.
The input of the gain tensor derivation process is:
− the 12-bit signed variable betaDisplacementLog[comp];
− the tensor with 3-bit elements quality_map_delta[comp][h4, w4];
− the index model_id, which indicates the pre-trained model to be used.
The output of the gain tensor derivation process is:
− the forward gain tensor in logarithmic scale mlog[C, h4, w4];
− the forward gain tensor m[C, h4, w4];
− the inverse gain tensor m⁻¹[C, h4, w4].
The sizes C, h4, w4 of all tensors for the primary and secondary components are defined in Table 1.
The forward gain tensor in logarithmic scale mlog is computed as follows. For 0 ≤ c < C, 0 ≤ i < h4, 0 ≤ j < w4:
mlog[c, i, j] = mlog[model_id][comp][c] + betaDisplacementLog[comp] + gain3D[quality_map[i, j]],
where quality_map[i, j] is equal to 0 when gain_3D_enable_flag[comp] is equal to 0 and, otherwise, is equal to Clip(−8, 8, quality_map_pred[i, j] + quality_map_delta[comp][i, j]), with −16 ≤ quality_map_delta[comp][i, j] ≤ 16, and
quality_map_pred[i, j] =
0, if i = 0 and j = 0;
quality_map[i − 1, j], if i > 0 and j = 0;
quality_map[i, j − 1], if i = 0 and j > 0;
(quality_map[i − 1, j] + quality_map[i, j − 1]) ⁄ 2, if i > 0 and j > 0.
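The mlog derivation and the quality map prediction described above can be sketched as follows. The gain3D lookup table (Table 5) is not reproduced in this excerpt, so a toy table is used; the index offset of +8 (mapping quality levels −8…8 onto 17 table entries) and the integer averaging of the two neighbours are assumptions for illustration only:

```python
import numpy as np

def derive_mlog(mlog_ref, beta_disp_log, quality_map_delta, gain3d, gain_3d_enabled):
    # mlog_ref: reference gain vector for this model_id/comp, shape (C,).
    # quality_map_delta: (h4, w4) deltas; gain3d: 17-entry lookup table mapping a
    # quality level in [-8, 8] to an additional displacement (values assumed).
    C = mlog_ref.shape[0]
    h4, w4 = quality_map_delta.shape
    qmap = np.zeros((h4, w4), dtype=np.int32)
    mlog = np.zeros((C, h4, w4), dtype=np.int64)
    for i in range(h4):
        for j in range(w4):
            if gain_3d_enabled:
                if i == 0 and j == 0:
                    pred = 0
                elif j == 0:
                    pred = qmap[i - 1, j]
                elif i == 0:
                    pred = qmap[i, j - 1]
                else:
                    pred = (qmap[i - 1, j] + qmap[i, j - 1]) // 2  # integer average assumed
                qmap[i, j] = np.clip(pred + quality_map_delta[i, j], -8, 8)
            mlog[:, i, j] = mlog_ref + beta_disp_log + gain3d[qmap[i, j] + 8]
    return mlog

# Toy usage with an illustrative (not normative) 17-entry gain3D table.
toy_gain3d = np.arange(-8, 9)
out = derive_mlog(np.array([100, 200]), 37,
                  np.zeros((2, 2), dtype=np.int32), toy_gain3d, gain_3d_enabled=True)
print(out[:, 0, 0])   # [137 237]
```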
The mapping between quality map and additional beta-displacement is define by ^^^^^^^^3^^ in Table 5. There are in total 17 quality levels. Table 5 Additional beta-displacement for local quality control
NOTE – if gain_3D_enable_flag is equal to 0, then mlog can be calculated as a vector with size C from the vector mlog [model_id] with the similar size and betaDisplacementLog[comp]. Later on this vector mlog can be expanded to the size C×h4×w4 filling the whole channel with the value mlog[c]. NOTE – The value of gain3D is computed from the factor according to the following equations:
Forward and backward gain tensors ^^ and ^^ି^ are computed as following:
are defined in section 2.14.3 Gain Unit This process is applied at encoder side only and so it is non-normative process, described here to simplify explanation of normative inverse gain unit and sigma scale processes. The input of this process: − forward gain tensor ^^, − residual tensor ^^ of size [^^,ℎସ,^^ସ ] , which is the result of subtraction mean value from 26 F1251247PCT
latent tensor ^^ = ^^ − ^^. The output of this process: − scaled by forward gain tensor residual tensor r’ of size [^^,ℎସ,^^ସ]. On encoder side a gain unit is placed after a residual calculation. An output scaled residual tensor is equal to the input tensor multiplied by gain tensor in a channel dimension:
r′[c, i, j] = m[c, i, j] · r[c, i, j]; 0 ≤ i < h4, 0 ≤ j < w4, 0 ≤ c < C.
All elements of the tensor in the same channel are scaled by the same multiplier.
2.14.4 Inverse Gain Unit
This process is applied at the decoder side and is therefore normative. The input of this process:
− the inverse gain tensor m⁻¹;
− the reconstructed residual tensor r̂ of size [C, h4, w4], which is the output of the decoder-side SKIP process.
The output of this process:
− the residual tensor r̂′ of size [C, h4, w4], scaled by the inverse gain tensor.
On the decoder side, an inverse gain unit is placed after the entropy decoder and takes the reconstructed residual as an input. The output is the reconstructed residual tensor multiplied by the inverse gain tensor:
r̂′[c, i, j] = m⁻¹[c, i, j] · r̂[c, i, j]; 0 ≤ i < h4, 0 ≤ j < w4, 0 ≤ c < C.
All elements of the tensor in the same channel are scaled by the same multiplier.
2.14.5 Sigma Scale
This process is applied at both the encoder and decoder sides, and is therefore normative. The inputs of this process are:
− the forward gain tensor in logarithmic scale mlog;
− the standard deviation logarithm tensor Iσ of size [C, h4, w4], which is an output of the Hyper Scale Decoder.
The output of this process is:
− the standard deviation logarithm tensor I′σ of size [C, h4, w4], scaled by the forward gain tensor, which goes to the adaptive sigma scale.
Sigma scale modifies the standard deviation logarithm tensor Iσ[C, h4, w4] as following:
2.14.6 Sigma quantization 27 F1251247PCT
The input of this process is: – I’’σ[ C,h4,w4] which is an output of Adaptive Sigma Scale. The output of this process is: – sigma_Idx[C,h4,w4] which is further used in Entropy Decoder for ^̂^ . The process is as follows: For c=0,…C-1 , i=0,…, ℎସ −1, j=0,…, ^^ସ −1:
While computing CDF table for sigma_Idx=i, 0 ≤i< Nσ =^^ it is assumed that: σmin=exp((ln(σmax)- ln(σmin))⋅i/( Nσ−1)+ln(σmin)). 2.15 Thread size signalling In an existing design, the thread size (substream size) signalling is as follows. 2.15.1 Syntax table of quality map information tensor
2.15.2 Syntax table
2.15.3 Semantics of quality map information tensor
thread_size_delta_q[i] is the signed difference between ThreadMeanSizeQ and the size in bytes of the quality map tensor sub-stream i.
2.15.4 Picture header semantics
log2_num_threads_q_minus1 is an indicator for the number of threads for CodeStreamQ decoding. The variable NumberOfThreadsQ is set as follows:
If multi_threading_q = 0 then NumberOfThreadsQ = 1.
If multi_threading_q = 1 then NumberOfThreadsQ = 2 << (log2_num_threads_q_minus1 + 1).
The variable ThreadMeanSizeQ is derived as follows:
ThreadMeanSizeQ = floor(CodeStreamSizeQ / NumberOfThreadsQ)
se(v): signed integer 0-th order Exp-Golomb-coded syntax element with the left bit first. The parsing process for this descriptor is specified in annex D with the order k equal to 0.
3. Problems
In an existing design, a codestream (a data unit) is included in the bitstream according to the following template (ordered set of elements):
• Codestream marker (e.g. marker_id);
• Codestream size indication (e.g. substream_size);
• Data unit size indication (e.g. thread_size_delta_q);
• Data units (e.g. data comprising multiple chunks).
The codestream size indication indicates the total size of the following data, including the data unit size indication and the data units. The data unit size indication comprises offsets (size information) pertaining to the chunks of the data units. In other words, the data units might comprise multiple chunks for parallel processing (e.g. multi-threading), and the data unit size indication might comprise one or more size (or offset) indications that help in finding the starting point of each data chunk. Furthermore, the value of the data unit size indication is a function of the codestream size indication. This can be seen from the equations below:
• ThreadMeanSizeQ = floor(CodeStreamSizeQ / NumberOfThreadsQ);
• thread_size_delta_q[i] is the signed difference between ThreadMeanSizeQ and the size in bytes of the quality map tensor sub-stream i.
In other words, the data unit size indication is a difference between the actual data unit size and a mean value (ThreadMeanSizeQ), wherein the mean value is a function of the codestream size indication. Finally, it is noted that the data unit size indication is included in the bitstream using variable length coding (indicated by se(v) in the syntax table). This means that the number of bits included in the bitstream is a function of the value of the syntax element.
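Concretely, in the existing design the decoder recovers the sub-stream sizes roughly as follows (a minimal sketch; the helper name and the sign convention applied to the delta are assumptions for illustration):

```python
def thread_sizes_q(code_stream_size_q, thread_size_delta_q, multi_threading_q,
                   log2_num_threads_q_minus1):
    # Existing-design derivation as described above.
    if multi_threading_q == 0:
        number_of_threads_q = 1
    else:
        number_of_threads_q = 2 << (log2_num_threads_q_minus1 + 1)
    thread_mean_size_q = code_stream_size_q // number_of_threads_q
    # Each sub-stream size is the mean plus the signed delta parsed with se(v).
    return [thread_mean_size_q + d for d in thread_size_delta_q[:number_of_threads_q]]

print(thread_sizes_q(1000, [3, -5, 0, 2], multi_threading_q=1,
                     log2_num_threads_q_minus1=0))   # [253, 245, 250, 252]
```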
Therefore the problem is as follows:
- The "codestream size indication" is a function of the number of bits that are required by the "data unit size indication".
- The value of the "data unit size indication", and therefore the number of bits it requires, is a function of the "codestream size indication".
- As a result, the calculation of the codestream size indication and the data unit size indication forms a loop, and the loop might be infinite (i.e. never converge and keep oscillating), therefore rendering the creation of the bitstream impossible.
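The loop can be made concrete with a small encoder-side sketch: the codestream size depends on how many bits the se(v)-coded deltas occupy, while the deltas depend on the mean size derived from that codestream size. The helper names and the simplified size model are illustrative only; the point is that the iteration may or may not settle on a fixed point.

```python
def se_v_bits(value):
    # Bit length of a signed 0-th order Exp-Golomb codeword for `value`.
    code_num = 2 * abs(value) - (1 if value > 0 else 0)   # signed -> unsigned mapping
    return 2 * (code_num + 1).bit_length() - 1

def try_fixed_point(data_chunk_sizes, num_threads, max_iter=20):
    # Iterate: codestream size -> mean -> deltas -> delta bits -> codestream size.
    # If the number of delta bits keeps changing, the computation never settles.
    code_stream_size = sum(data_chunk_sizes)              # initial guess, no delta bits
    for _ in range(max_iter):
        mean = code_stream_size // num_threads
        deltas = [s - mean for s in data_chunk_sizes]
        delta_bytes = (sum(se_v_bits(d) for d in deltas) + 7) // 8
        new_size = sum(data_chunk_sizes) + delta_bytes
        if new_size == code_stream_size:
            return code_stream_size                        # converged
        code_stream_size = new_size
    return None                                            # oscillated, no valid bitstream

print(try_fixed_point([260, 240, 255, 245], num_threads=4))
```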
4. Detailed Solutions
Solution 1: The indication about the data unit size and the indication about the codestream size are included in different codestreams.
• Data unit:
o A data unit might be a substream.
o A data unit might correspond to a thread, e.g. a processing thread.
• Codestream:
o A codestream might be a residual substream, a quality map substream, a picture header, a tool header, a header, or the like.
o A codestream might start with a marker (a marker ID) and a codestream size indication.
o A codestream might comprise one or more data units.
• Data unit size:
o The data unit size might be an offset.
o The data unit size might be signalled using variable length coding, e.g. an Exp-Golomb code.
o The data unit size might be a thread offset, a thread size, or a delta thread size.
o The data unit size might be signalled using fixed length coding.
o The data unit size might pertain to a substream, a tile stream, a bitstream partition, or a region stream.
• Different codestreams:
o The data unit size indication might be signalled in a header (e.g. a picture header, a tool header, or the like).
o The data unit size indication might correspond to a data unit that is in a different codestream (e.g. the size indication might be inside a header codestream, whereas the data unit might be inside a second codestream).
o The data unit might be inside a residual codestream, a quality map codestream, or a hyper information codestream.
• The value of the data unit size might depend on the size information of a first codestream, while the data unit size indication is included in a second codestream. Therefore, the codestream size information does not depend on the value of the data unit size information.
Solution 2: The indication about the data unit size does not depend on the codestream size information.
• The data unit size information might depend on an additive value, wherein the additive value is included in the bitstream and is different from the codestream size information.
o The additive value information might be included in a header.
o The additive value information might be included in a different codestream than the one in which the data unit size information is included.
o The additive value might be added to the data unit size information to obtain the size of a data unit.
o The additive value might be a mean value.
Solution 3: The indication about the data unit size is coded with fixed length coding.
• If fixed length coding is used, it is possible to know in advance the number of bits required to be included in the bitstream, which means that the number of bits required to code the data unit size information does not depend on the codestream size information.
5. Embodiments
1. Decoder/encoder solution: Performing a conversion using a neural network between a bitstream and a reconstructed image or video based on an indicator in the bitstream, wherein an indication about the data unit size and an indication about the codestream size are included in (or obtained from) the bitstream in different codestreams.
2. Decoder/encoder solution: Performing a conversion using a neural network between a bitstream and a reconstructed image or video based on an indicator in the bitstream, wherein an indication about the data unit size, which does not depend on the codestream size information, is included in (or obtained from) the bitstream.
3. Decoder/encoder solution: Performing a conversion using a neural network between a bitstream and a reconstructed image or video based on an indicator in the bitstream, wherein an indication about the data unit size is included in (or obtained from) the bitstream and is coded
with fixed length coding. 6. Example implementation of the proposed solutions Below are some example implementations for the solutions summarized above in Sections 4 and 5. Some relevant parts that have been added or modified are shown by using bolded words (e.g., this format indicates added text), and some of the deleted parts are shown by using words in italics between double curly brackets (e.g., {{this format indicates deleted text}}). There may be some other changes that are not highlighted. It should be understood that only markings in this section are intended to emphasize at least part of proposed changes. The scope of the present disclosure is not limited in this respect.
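The intended restructuring can be illustrated schematically as follows: the data unit (thread) sizes are carried in the picture header with fixed-length fields, while each codestream carries only its marker, its own size indication and its data. The field widths, helper names and byte layout below are illustrative assumptions only and do not reproduce the normative syntax tables.

```python
import struct

def write_picture_header(thread_sizes):
    # Thread/data-unit sizes carried in the picture header with a fixed-length
    # (here 16-bit) field each, so their coded length is known in advance.
    payload = struct.pack(">B", len(thread_sizes))
    payload += b"".join(struct.pack(">H", s) for s in thread_sizes)
    return payload

def write_codestream(marker, chunks):
    # The codestream itself carries only marker + size + chunk data; its size
    # indication no longer depends on any variable-length size deltas inside it.
    data = b"".join(chunks)
    return marker + struct.pack(">I", len(data)) + data

chunks = [b"\x01" * 253, b"\x02" * 245]
header = write_picture_header([len(c) for c in chunks])
stream = write_codestream(b"SOQ", chunks)
print(len(header), len(stream))   # 5, 505
```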
In the above example implementation, the data unit size information (thread_size_delta_z, thread_size_delta and thread_size_delta_q) are removed from the z_stream, q_stream and residual_stream codestreams. Instead they are included in the picture header. [0038] More details of the embodiments of the present disclosure will be described below which are related to neural network-based visual data coding. As used herein, the term “visual data” may refer to an image, a video, a picture in a video, or any other visual data suitable to be coded. To solve the above problems and some other problems not mentioned, visual data processing solutions as described below are disclosed. The embodiments of the present disclosure should be considered as examples to explain the general concepts and should not be interpreted in a narrow way. Furthermore, these embodiments can be applied individually or combined in any manner. [0039] Fig. 12 illustrates a flowchart of a method 1200 for visual data processing in accordance with some embodiments of the present disclosure. At 1202, a conversion between the visual data and a codestream of the visual data is performed with a neural network (NN)-based model. For example, a codestream may comprise a sequence of bits. In addition, the codestream may further comprise associated codes which are used as markers. As used herein, the codestream may also be referred to as a bitstream. [0040] In some embodiments, the conversion may include encoding the visual data into the codestream. Additionally or alternatively, the conversion may include decoding the visual data from the codestream. By way of example rather than limitation, the decoding model shown in Fig. 6 may be employed for decoding the visual data from the bitstream. [0041] As used herein, an NN-based model may be a model based on neural network technologies. For example, an NN-based model may specify sequence of neural network modules (also called architecture) and model parameters. The neural network module may comprise a set of neural network layers. Each neural network layer specifies a tensor operation which receives and outputs tensor, and each layer has trainable parameters. It should be understood that the possible implementations of the NN-based model described here are merely illustrative and therefore should not be construed as limiting the present disclosure in any way. [0042] The codestream comprises a first indication indicating a size of a data unit in a first 39 F1251247PCT
portion of the codestream. For example, a data unit may be a substream of the codestream. Alternatively, the data unit may correspond to a thread, e.g., a processing thread. By way of example, a thread may comprise a codestream segment. In addition, a portion of the codestream may be a substream of the codestream, such as, a residual substream, a quality map substream, a hyper tensor substream, and/or the like. Alternatively, a portion of the codestream may be a header, such as a picture header, or a tool header. For example, a portion of the codestream may comprise one or more data units. It should be understood that the possible implementations of the data unit and the portion of the codestream described here are merely illustrative and therefore should not be construed as limiting the present disclosure in any way. [0043] As used herein, a sample in a residual latent representation of the visual data may be referred to as a residual sample. At the encoder side, an analysis transform may be employed to perform a transform from the visual data to a latent representation of the visual data, and the residual latent representation may be determined as a difference between the latent representation and a prediction of the latent representation. Moreover, a hyper tensor carries information regarding a latent domain prediction for a latent representation of the visual data and/or an entropy parameter for residual of the latent representation. By way of example, at the encoder side, an analysis transform may generate latent representation of the visual data. This latent representation is further compressed, e.g. by using a hyper encoder, to obtain a hyper tensor z, which carries information about latent domain prediction and entropy parameters for residual. In some example embodiments, the hyper tensor z may be further quantized to obtain a quantized hyper tensor z-hat (i.e., ^̂^). The quantized hyper tensor may also be referred to as a hyper tensor for short. Therefore, the above-mentioned hyper tensor substream (a.k.a., Z-stream) may comprise encoded data of unquantized hyper tensor z, and/or encoded data of quantized hyper tensor ^̂^. The scope of the present disclosure is not limited in this respect. [0044] In some embodiments, the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion. That is, the first indication and the second indication are comprised in different portions of the codestream. As such, the size of the first portion of the codestream is not dependent on a value of the first indication. The proposed method can advantageously decouple the size of the first portion of the codestream from a value of the first indication. Thereby, a potential oscillation caused by a coupling relation between the size of the first portion of the codestream and the value of the 40 F1251247PCT
first indication can be avoided, and thus the coding efficiency can be improved. [0045] For example, the first indication may correspond to the above mentioned indication about the data unit size, and the second indication may correspond to the above mentioned indication about the codestream size. [0046] By way of example rather than limitation, in a case where the first portion of the codestream is a substream, the second indication may be represented as substream_size. In this case, the second indication may indicate the size of the substream in bytes. It should be noted that the second indication may also be represented as any other suitable string, and thus the scope of the present disclosure is not limited in this respect. [0047] In some embodiments, the first indication is independent from the size of the first portion of the codestream. For example, the size of the data unit may be dependent on a first value independent from the size of the first portion of the codestream. The first value may be indicted in the codestream. By way of example, the first value may be comprised in a header, such as a picture header, a tool header, or the like. Additionally or alternatively, the first value and the first indication may be comprised in different portions of the codestream, or the first value and the first indication may be comprised in the same portion of the codestream. [0048] The size of the data unit may be determined based on a sum of the first value and a value of the first indication. In one example embodiment, the sum of the first value and the value of the first indication is determined to be the size of the data unit. For example, the first value may be a mean value of sizes of data units. Alternatively, the sum of the first value, the value of the first indication, and a further offset is determined to be the size of the data unit. It should be understood that the above illustrations are described merely for purpose of description. The scope of the present disclosure is not limited in this respect. [0049] Since the first indication is independent from the size of the first portion of the codestream, the proposed method can advantageously decouple the size of the first portion of the codestream from a value of the first indication. Thereby, a potential oscillation caused by a coupling relation between the size of the first portion of the codestream and the value of the first indication can be avoided, and thus the coding efficiency can be improved. [0050] In some embodiments, the first indication is coded with a fixed length coding. By way of example, the number of bits used to signal the first indication may be predetermined to be a fixed number, such as 2, 8, 16, or the like. In this case, it is possible to know in advance the number of bits required to be included in the codestream. As such, the proposed method can advantageously decouple the size of the first portion of the codestream from a 41 F1251247PCT
value of the first indication. Thereby, a potential oscillation caused by a coupling relation between the size of the first portion of the codestream and the value of the first indication can be avoided, and thus the coding efficiency can be improved. [0051] In some embodiments, the first portion of the codestream may start with a marker and the second indication indicating a size of the first portion of the codestream. Additionally or alternatively, the second portion of the codestream may start with a marker and an indication indicating a size of the second portion. For example, the marker for a portion of the codestream may be a marker identifier of the portion of the codestream. By way of example rather than limitation, a marker of a residual substream for a primary component of the visual data may be SORp, a marker of a residual substream for a secondary component of the visual data may be SORs, a marker of a quality map substream of the visual data may be SOQ, and/or a marker of a hyper tensor substream of the visual data may be SOZ. It should be understood that the above examples are described merely for purpose of description. The scope of the present disclosure is not limited in this respect. In some embodiments, at least one of the first portion or the second portion of the codestream may comprise one or more data units. [0052] In one example embodiment, a value of the first indication may be an offset. By way of example rather than limitation, the value of the first indication may be a difference between the number of bytes of the data unit and a mean size of data units. In another example embodiment, a value of the first indication may be a thread offset. In a further example embodiment, a value of the first indication may be a thread size. In a still further example embodiment, a value of the first indication may be a delta thread size. It should be understood that the above examples are described merely for purpose of description. The scope of the present disclosure is not limited in this respect. In some embodiments, the first indication may be represented by thread_size_delta_z, thread_size_delta_r, thread_size_delta_q, or the like. [0053] In some embodiments, the size of the data unit may be signaled using a variable length coding. Alternatively, the size of the data unit may be signaled using a fixed length coding. In some embodiments, the size of the data unit may be associated with one of the following: a substream, a tile stream, or a codestream partition, or a region stream. By way of example rather than limitation, a tile stream may be a substream comprising coded data for a tile, and a region stream may be a substream comprising coded data for a region. [0054] In some embodiments, the second portion of the codestream may be a header, such as a picture header, a tool header, or the like. In some embodiments, the first portion of the 42 F1251247PCT
codestream may comprise a residual substream, a quality map substream, a hyper information substream, and/or the like . In some embodiments, a value of the first indication may be dependent on the size of the first portion of the codestream. In this case, the first indication may be comprised in a further portion of the codestream different from the first portion. [0055] In view of the above, the solutions in accordance with some embodiments of the present disclosure can advantageously improve coding efficiency and coding quality. [0056] According to further embodiments of the present disclosure, a non-transitory computer-readable recording medium is provided. The non-transitory computer-readable recording medium stores a codestream of visual data which is generated by a method performed by an apparatus for visual data processing. The method comprises: performing a conversion from the visual data to the codestream with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding. [0057] According to still further embodiments of the present disclosure, a method for storing codestream of visual data is provided. The method comprises: performing a conversion from the visual data to the codestream with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream; and storing the codestream in a non-transitory computer-readable recording medium, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding. [0058] Implementations of the present disclosure can be described in view of the following clauses, the features of which can be combined in any reasonable manner. [0059] Clause 1. A method for visual data processing, comprising: performing a conversion between visual data and a codestream of the visual data with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is 43 F1251247PCT
comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding. [0060] Clause 2. The method of clause 1, wherein the data unit comprises a substream of the codestream, or the data unit corresponds to a thread. [0061] Clause 3. The method of any of clauses 1-2, wherein at least one of the first portion or the second portion of the codestream comprises one of the following: a residual substream, a quality map substream, a picture header, or a tool header. [0062] Clause 4. The method of any of clauses 1-3, wherein the first portion of the codestream starts with a marker and the second indication, or the second portion of the codestream starts with a marker and an indication indicating a size of the second portion. [0063] Clause 5. The method of any of clauses 1-4, wherein at least one of the first portion or the second portion of the codestream comprises one or more data units. [0064] Clause 6. The method of any of clauses 1-5, wherein a value of the first indication is an offset, a thread offset, a thread size, or a delta thread size. [0065] Clause 7. The method of any of clauses 1-6, wherein the size of the data unit is signaled using a variable length coding or a fixed length coding. [0066] Clause 8. The method of any of clauses 1-7, wherein the size of the data unit is associated with one of the following: a substream, a tile stream, or a codestream partition, or a region stream. [0067] Clause 9. The method of any of clauses 1-8, wherein the second portion of the codestream is a picture header or a tool header. [0068] Clause 10. The method of any of clauses 1-9, wherein the first portion of the codestream is a residual substream, a quality map substream, or a hyper information substream. [0069] Clause 11. The method of any of clauses 1-10, wherein a value of the first indication is dependent on the size of the first portion of the codestream. [0070] Clause 12. The method of any of clauses 1-10, wherein the size of the data unit is dependent on a first value indicted in the codestream, and the first value is independent from the size of the first portion of the codestream. [0071] Clause 13. The method of clause 12, wherein the first value is comprised in a header. [0072] Clause 14. The method of any of clauses 12-13, wherein the first value and the first indication are comprised in different portions of the codestream. [0073] Clause 15. The method of any of clauses 12-14, wherein the size of the data unit is 44 F1251247PCT
determined based on a sum of the first value and a value of the first indication. [0074] Clause 16. The method of any of clauses 12-15, wherein the first value is a mean value of sizes of data units. [0075] Clause 17. The method of any of clauses 1-16, wherein the first indication is represented by one of the following: thread_size_delta_z, thread_size_delta_r, or thread_size_delta_q. [0076] Clause 18. The method of any of clauses 1-17, wherein the visual data is at least a part of a picture of a video or an image. [0077] Clause 19. The method of any of clauses 1-18, wherein the conversion includes encoding the visual data into the codestream. [0078] Clause 20. The method of any of clauses 1-18, wherein the conversion includes decoding the visual data from the codestream. [0079] Clause 21. An apparatus for visual data processing comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to perform a method in accordance with any of clauses 1-20. [0080] Clause 22. A non-transitory computer-readable storage medium storing instructions that cause a processor to perform a method in accordance with any of clauses 1-20. [0081] Clause 23. A non-transitory computer-readable recording medium storing a codestream of visual data which is generated by a method performed by an apparatus for visual data processing, wherein the method comprises: performing a conversion from the visual data to the codestream with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding. [0082] Clause 24. A method for storing a codestream of visual data, comprising: performing a conversion from the visual data to the codestream with a neural network (NN)- based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream; and storing the codestream in a non-transitory computer- readable recording medium, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised 45 F1251247PCT
in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding.

Example Device

[0083] Fig. 13 illustrates a block diagram of a computing device 1300 in which various embodiments of the present disclosure can be implemented. The computing device 1300 may be implemented as or included in the source device 110 (or the visual data encoder 114) or the destination device 120 (or the visual data decoder 124).

[0084] It would be appreciated that the computing device 1300 shown in Fig. 13 is merely for the purpose of illustration, without suggesting any limitation to the functions and scopes of the embodiments of the present disclosure in any manner.

[0085] As shown in Fig. 13, the computing device 1300 is a general-purpose computing device. The computing device 1300 may at least comprise one or more processors or processing units 1310, a memory 1320, a storage unit 1330, one or more communication units 1340, one or more input devices 1350, and one or more output devices 1360.

[0086] In some embodiments, the computing device 1300 may be implemented as any user terminal or server terminal having computing capability. The server terminal may be a server, a large-scale computing device, or the like that is provided by a service provider. The user terminal may, for example, be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile phone, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system (PCS) device, personal navigation device, personal digital assistant (PDA), audio/video player, digital camera/video camera, positioning device, television receiver, radio broadcast receiver, E-book device, gaming device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof. It would be contemplated that the computing device 1300 can support any type of interface to a user (such as “wearable” circuitry and the like).

[0087] The processing unit 1310 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 1320. In a multi-processor system, multiple processing units execute computer-executable instructions in parallel so as to improve the parallel processing capability of the computing device 1300. The processing unit 1310 may also be referred to as a central processing unit (CPU), a microprocessor, a
controller or a microcontroller.

[0088] The computing device 1300 typically includes various computer storage media. Such media can be any media accessible by the computing device 1300, including, but not limited to, volatile and non-volatile media, and detachable and non-detachable media. The memory 1320 can be a volatile memory (for example, a register, cache, or Random Access Memory (RAM)), a non-volatile memory (such as a Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), or a flash memory), or any combination thereof. The storage unit 1330 may be any detachable or non-detachable medium and may include a machine-readable medium such as a memory, flash memory drive, magnetic disk, or other media, which can be used for storing information and/or visual data and can be accessed within the computing device 1300.

[0089] The computing device 1300 may further include additional detachable/non-detachable, volatile/non-volatile memory media. Although not shown in Fig. 13, it is possible to provide a magnetic disk drive for reading from and/or writing into a detachable, non-volatile magnetic disk and an optical disk drive for reading from and/or writing into a detachable, non-volatile optical disk. In such cases, each drive may be connected to a bus (not shown) via one or more visual data medium interfaces.

[0090] The communication unit 1340 communicates with a further computing device via a communication medium. In addition, the functions of the components in the computing device 1300 can be implemented by a single computing cluster or by multiple computing machines that can communicate via communication connections. Therefore, the computing device 1300 can operate in a networked environment using a logical connection with one or more other servers, networked personal computers (PCs), or further general network nodes.

[0091] The input device 1350 may be one or more of a variety of input devices, such as a mouse, keyboard, trackball, voice-input device, and the like. The output device 1360 may be one or more of a variety of output devices, such as a display, loudspeaker, printer, and the like. By means of the communication unit 1340, the computing device 1300 can further communicate with one or more external devices (not shown), such as storage devices and a display device, with one or more devices enabling the user to interact with the computing device 1300, or with any devices (such as a network card, a modem, and the like) enabling the computing device 1300 to communicate with one or more other computing devices, if required. Such communication can be performed via input/output (I/O) interfaces (not shown).

[0092] In some embodiments, instead of being integrated in a single device, some or all
components of the computing device 1300 may also be arranged in a cloud computing architecture. In the cloud computing architecture, the components may be provided remotely and work together to implement the functionalities described in the present disclosure. In some embodiments, cloud computing provides computing, software, visual data access, and storage services, without requiring end users to be aware of the physical locations or configurations of the systems or hardware providing these services. In various embodiments, cloud computing provides the services via a wide area network (such as the Internet) using suitable protocols. For example, a cloud computing provider provides applications over the wide area network, and these applications can be accessed through a web browser or any other computing component. The software or components of the cloud computing architecture and the corresponding visual data may be stored on a server at a remote location. The computing resources in the cloud computing environment may be merged or distributed at locations in a remote visual data center. Cloud computing infrastructures may provide the services through a shared visual data center, even though they behave as a single access point for the users. Therefore, the cloud computing architectures may be used to provide the components and functionalities described herein from a service provider at a remote location. Alternatively, they may be provided from a conventional server or installed directly or otherwise on a client device.

[0093] The computing device 1300 may be used to implement visual data encoding/decoding in embodiments of the present disclosure. The memory 1320 may include one or more visual data coding modules 1325 having one or more program instructions. These modules are accessible and executable by the processing unit 1310 to perform the functionalities of the various embodiments described herein.

[0094] In the example embodiments of performing visual data encoding, the input device 1350 may receive visual data as an input 1370 to be encoded. The visual data may be processed, for example, by the visual data coding module 1325, to generate an encoded bitstream. The encoded bitstream may be provided via the output device 1360 as an output 1380.

[0095] In the example embodiments of performing visual data decoding, the input device 1350 may receive an encoded bitstream as the input 1370. The encoded bitstream may be processed, for example, by the visual data coding module 1325, to generate decoded visual data. The decoded visual data may be provided via the output device 1360 as the output 1380.

[0096] While this disclosure has been particularly shown and described with reference to example embodiments thereof, it will be understood by those skilled in the art that various
changes in form and details may be made therein without departing from the spirit and scope of the present application as defined by the appended claims. Such variations are intended to be covered by the scope of the present application. As such, the foregoing description of embodiments of the present application is not intended to be limiting.
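By way of a non-limiting illustration of the size signaling summarized in clauses 12-17 above, the following sketch shows how a decoder might reconstruct per-thread data unit sizes from a shared base value carried in a header plus fixed-length per-thread delta indications. The names thread_size_base and delta_bits, the chosen bit widths, and the signed (two's-complement) interpretation of the delta are assumptions made only for this example; the clauses themselves name only the delta indications (e.g., thread_size_delta_r).

```python
# Illustrative, non-normative sketch: derive per-thread data unit sizes as
# thread_size_base + thread_size_delta (clauses 12-17). Field names, bit widths,
# and the signed delta interpretation are assumptions for this example only.

def read_bits(data: bytes, pos: int, n: int) -> tuple[int, int]:
    """Read n bits from data starting at bit position pos; return (value, new_pos)."""
    value = 0
    for i in range(n):
        byte_index, bit_index = divmod(pos + i, 8)
        value = (value << 1) | ((data[byte_index] >> (7 - bit_index)) & 1)
    return value, pos + n


def decode_thread_sizes(header: bytes, num_threads: int,
                        base_bits: int = 24, delta_bits: int = 16) -> list[int]:
    """Reconstruct thread sizes as thread_size_base plus a fixed-length-coded delta."""
    pos = 0
    # Base value signaled once in a header, e.g. a mean of the thread sizes (clauses 13, 16).
    thread_size_base, pos = read_bits(header, pos, base_bits)
    sizes = []
    for _ in range(num_threads):
        # Per-thread delta indication, fixed length coded (clause 1, third alternative).
        delta, pos = read_bits(header, pos, delta_bits)
        if delta >= 1 << (delta_bits - 1):   # interpret the delta as a signed value
            delta -= 1 << delta_bits
        # Size of the data unit = sum of the first value and the first indication (clause 15).
        sizes.append(thread_size_base + delta)
    return sizes
```

One possible benefit of such a scheme is that each per-thread size field stays short and of fixed length even when the absolute substream sizes are large, which also keeps the length of these indications independent of the size of the portion they describe.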
Claims
I/We Claim: 1. A method for visual data processing, comprising: performing a conversion between visual data and a codestream of the visual data with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding.
2. The method of claim 1, wherein the data unit comprises a substream of the codestream, or the data unit corresponds to a thread.
3. The method of any of claims 1-2, wherein at least one of the first portion or the second portion of the codestream comprises one of the following: a residual substream, a quality map substream, a picture header, or a tool header.
4. The method of any of claims 1-3, wherein the first portion of the codestream starts with a marker and the second indication, or the second portion of the codestream starts with a marker and an indication indicating a size of the second portion.
5. The method of any of claims 1-4, wherein at least one of the first portion or the second portion of the codestream comprises one or more data units.
6. The method of any of claims 1-5, wherein a value of the first indication is an offset, a thread offset, a thread size, or a delta thread size.
7. The method of any of claims 1-6, wherein the size of the data unit is signaled using a variable length coding or a fixed length coding.
8. The method of any of claims 1-7, wherein the size of the data unit is associated with one of the following: a substream, a tile stream, a codestream partition, or a region stream.
9. The method of any of claims 1-8, wherein the second portion of the codestream is a picture header or a tool header.

10. The method of any of claims 1-9, wherein the first portion of the codestream is a residual substream, a quality map substream, or a hyper information substream.

11. The method of any of claims 1-10, wherein a value of the first indication is dependent on the size of the first portion of the codestream.

12. The method of any of claims 1-10, wherein the size of the data unit is dependent on a first value indicated in the codestream, and the first value is independent from the size of the first portion of the codestream.

13. The method of claim 12, wherein the first value is comprised in a header.

14. The method of any of claims 12-13, wherein the first value and the first indication are comprised in different portions of the codestream.

15. The method of any of claims 12-14, wherein the size of the data unit is determined based on a sum of the first value and a value of the first indication.

16. The method of any of claims 12-15, wherein the first value is a mean value of sizes of data units.

17. The method of any of claims 1-16, wherein the first indication is represented by one of the following: thread_size_delta_z, thread_size_delta_r, or thread_size_delta_q.

18. The method of any of claims 1-17, wherein the visual data is at least a part of a picture of a video or an image.

19. The method of any of claims 1-18, wherein the conversion includes encoding the visual
data into the codestream.

20. The method of any of claims 1-18, wherein the conversion includes decoding the visual data from the codestream.

21. An apparatus for visual data processing comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to perform a method in accordance with any of claims 1-20.

22. A non-transitory computer-readable storage medium storing instructions that cause a processor to perform a method in accordance with any of claims 1-20.

23. A non-transitory computer-readable recording medium storing a codestream of visual data which is generated by a method performed by an apparatus for visual data processing, wherein the method comprises: performing a conversion from the visual data to the codestream with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the codestream, or wherein the first indication is coded with a fixed length coding.

24. A method for storing a codestream of visual data, comprising: performing a conversion from the visual data to the codestream with a neural network (NN)-based model, the codestream comprising a first indication indicating a size of a data unit in a first portion of the codestream; and storing the codestream in a non-transitory computer-readable recording medium, wherein the first portion comprises a second indication indicating a size of the first portion of the codestream, and the first indication is comprised in a second portion of the codestream different from the first portion, or wherein the first indication is independent from the size of the first portion of the
codestream, or wherein the first indication is coded with a fixed length coding.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463638048P | 2024-04-24 | 2024-04-24 | |
| US63/638,048 | 2024-04-24 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025226697A1 (en) | 2025-10-30 |
Family
ID=97490858
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2025/025788 Pending WO2025226697A1 (en) | 2024-04-24 | 2025-04-22 | Method, apparatus, and medium for visual data processing |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025226697A1 (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20230051066A1 (en) * | 2021-07-27 | 2023-02-16 | Lemon Inc. | Partitioning Information In Neural Network-Based Video Coding |
| WO2023241690A1 (en) * | 2022-06-16 | 2023-12-21 | Douyin Vision (Beijing) Co., Ltd. | Variable-rate neural network based compression |
| WO2024005659A1 (en) * | 2022-06-30 | 2024-01-04 | Huawei Technologies Co., Ltd. | Adaptive selection of entropy coding parameters |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25794463; Country of ref document: EP; Kind code of ref document: A1 |