WO2025078201A1 - Two-stage point cloud attribute encoding scheme with nested local and global transforms - Google Patents
Two-stage point cloud attribute encoding scheme with nested local and global transforms
- Publication number
- WO2025078201A1 (PCT/EP2024/077544)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- geometry
- attribute
- transform
- bitstream
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/96—Tree coding, e.g. quad-tree coding
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/001—Model-based coding, e.g. wire frame
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/40—Tree coding, e.g. quadtree, octree
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/1883—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit relating to sub-band structure, e.g. hierarchical level, directional tree, e.g. low-high [LH], high-low [HL], high-high [HH]
Definitions
- Point clouds are data that may be used in numerous business domains, such as autonomous driving, robotics, AR/VR, civil engineering, computer graphics, and the animation/movie industry.
- 3D LiDAR sensors have been deployed in self-driving cars, and affordable LiDAR sensors include Velodyne Velabit, Apple iPad Pro 2020, and Intel RealSense LiDAR camera L515. With advances in sensing technologies, 3D point cloud data is becoming more widespread, such as in the applications and industries mentioned above.
- a first example method in accordance with some embodiments may include: obtaining a point cloud, the point cloud including: a first set of information describing a geometry of the point cloud, and a second set of information describing attributes of the point cloud; performing a geometry encoding of the first set of information to generate a geometry bitstream; performing a two-stage attribute compression process to generate an attribute bitstream, wherein a first stage of the two-stage attribute compression process includes performing a block transform on each of a set of nodes of the point cloud, and wherein a second stage of the two-stage attribute compression process includes performing a hierarchical encoding over the set of nodes of the point cloud; and outputting an output bitstream including the geometry bitstream and the attribute bitstream.
- Some embodiments of the first example method may further include: quantizing an output of the first stage of the two-stage attribute compression process to generate a first set of quantized bits; quantizing an output of the second stage of the two-stage attribute compression process to generate a second set of quantized bits; and arithmetic coding the first and second sets of quantized bits to generate the attribute bitstream.
- the block transform is a 3-dimensional Graph Fourier Transform (GFT).
- the block transform is a 3-dimensional Karhunen-Loève Transform (KLT).
- performing the hierarchical encoding over the set of nodes of the point cloud includes: obtaining, for each of the set of nodes, one or more respective transform coefficients; determining, for each of the set of nodes, an average transform coefficient of the one or more respective transform coefficients; and performing a hierarchical transform on at least the average transform coefficient.
- Some embodiments of the first example method may further include performing the hierarchical transform on at least two transform coefficients.
- the two-stage attribute compression process is performed on top of a reconstructed geometry of the point cloud.
- the two-stage attribute compression process is performed on a group of point cloud frames.
- the group of point cloud frames are consecutive frames.
- Some embodiments of the first example method may further include performing a geometry encode of one or more leaf nodes of the point cloud.
- the geometry encode of a first leaf node occurs in parallel with the geometry encode of a second leaf node.
- the geometry encoding is performed in parallel with at least a portion of the two-stage attribute compression process.
- a second example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions operative, when executed by the processor, to cause the apparatus to perform any one of the methods listed above.
- a fourth example apparatus in accordance with some embodiments may include: a computer-readable medium storing instructions for causing one or more processors to perform any one of the methods listed above.
- a fifth example apparatus in accordance with some embodiments may include: at least one processor and at least one non-transitory computer-readable medium storing instructions for causing the at least one processor to perform any one of the methods listed above.
- An example signal in accordance with some embodiments may include a bitstream generated according to any one of the methods listed above.
- FIG. 1B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A according to some embodiments.
- FIG. 6 is a schematic illustration showing an example TriSoup rasterization.
- FIG. 7 is a schematic illustration showing an example Dyadic RAHT transform process for a 2x2x2 octree node according to some embodiments.
- FIG. 8 is a schematic illustration showing an example adjacency graph and associated matrix for a Graph Fourier Transform (GFT).
- FIG. 9A is a schematic illustration showing an example original point cloud used for a 3D shape-adaptive discrete cosine transform (SA-DCT).
- FIG. 9C is a schematic illustration showing example horizontally reordered voxels and 1D-DCTs of variable support size.
- FIG. 14 is a flowchart illustrating an example process for encoding a point cloud according to some embodiments.
- FIG. 15 is a flowchart illustrating an example process for decoding a point cloud according to some embodiments.
- the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word DFT-Spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
- the WTRUs 102a, 102b, 102c, 102d may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like.
- the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, a NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
- the base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc.
- the base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum.
- a cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors.
- the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like.
- the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 116 using wideband CDMA (WCDMA).
- WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+).
- HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
- the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
- the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
- the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies.
- the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles.
- the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN).
- the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR etc.) to establish a picocell or femtocell.
- the base station 114b may have a direct connection to the Internet 110.
- the base station 114b may not be required to access the Internet 110 via the CN 106.
- Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links).
- the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
- the transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116.
- the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals.
- the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example.
- the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
- the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
- the processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102.
- the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
- the processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
- the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like.
- the peripherals 138 may include one or more sensors; the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
- the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals are associated with particular subframes for either the UL (e.g., for transmission) or the downlink (e.g., for reception), but not both at the same time.
- Although the WTRU is described in FIGs. 1A-1B as a wireless terminal, it is contemplated that, in certain representative embodiments, such a terminal may use (e.g., temporarily or permanently) wired communication interfaces with the communication network.
- the other network 112 may be a WLAN.
- one or more, or all, of the functions described herein may be performed by one or more emulation devices (not shown).
- the emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein.
- the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
- the emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment.
- the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network.
- the one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network.
- the emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
- the one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network.
- the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components.
- the one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
- FIG. 1 C is a system diagram illustrating an example set of interfaces for a system according to some embodiments.
- An extended reality display device, together with its control electronics, may be implemented for some embodiments.
- System 150 can be embodied as a device including the various components described below and is configured to perform one or more of the aspects described in this document. Examples of such devices include, but are not limited to, various electronic devices such as personal computers, laptop computers, smartphones, tablet computers, digital multimedia set top boxes, digital television receivers, personal video recording systems, connected home appliances, and servers. Elements of system 150, singly or in combination, can be embodied in a single integrated circuit (IC), multiple ICs, and/or discrete components.
- the processing and encoder/decoder elements of system 150 are distributed across multiple ICs and/or discrete components.
- the system 150 is communicatively coupled to one or more other systems, or other electronic devices, via, for example, a communications bus or through dedicated input and/or output ports.
- the system 150 is configured to implement one or more of the aspects described in this document.
- Program code to be loaded onto processor 152 or encoder/decoder 156 to perform the various aspects described in this document can be stored in storage device 158 and subsequently loaded onto memory 154 for execution by processor 152.
- processor 152, memory 154, storage device 158, and encoder/decoder module 156 can store one or more of various items during the performance of the processes described in this document. Such stored items can include, but are not limited to, the input video, the decoded video or portions of the decoded video, the bitstream, matrices, variables, and intermediate or final results from the processing of equations, formulas, operations, and operational logic.
- memory inside of the processor 152 and/or the encoder/decoder module 156 is used to store instructions and to provide working memory for processing that is needed during encoding or decoding.
- a memory external to the processing device (for example, the processing device can be either the processor 152 or the encoder/decoder module 156) is used for one or more of these functions.
- the external memory can be the memory 154 and/or the storage device 158, for example, a dynamic volatile memory and/or a non-volatile flash memory.
- an external non-volatile flash memory is used to store the operating system of, for example, a television.
- a fast external dynamic volatile memory such as a RAM is used as working memory for video coding and decoding operations, such as for MPEG-2 (MPEG refers to the Moving Picture Experts Group; MPEG-2 is also referred to as ISO/IEC 13818, where 13818-1 is also known as H.222 and 13818-2 is also known as H.262), HEVC (HEVC refers to High Efficiency Video Coding, also known as H.265 and MPEG-H Part 2), or VVC (Versatile Video Coding, a standard developed by JVET, the Joint Video Experts Team).
- the input devices of block 172 have associated respective input processing elements as known in the art.
- the RF portion can be associated with elements suitable for (i) selecting a desired frequency (also referred to as selecting a signal, or band-limiting a signal to a band of frequencies), (ii) downconverting the selected signal, (iii) band-limiting again to a narrower band of frequencies to select (for example) a signal frequency band which can be referred to as a channel in certain embodiments, (iv) demodulating the downconverted and band-limited signal, (v) performing error correction, and (vi) demultiplexing to select the desired stream of data packets.
- the RF portion of various embodiments includes one or more elements to perform these functions, for example, frequency selectors, signal selectors, band-limiters, channel selectors, filters, downconverters, demodulators, error correctors, and demultiplexers.
- the RF portion can include a tuner that performs various of these functions, including, for example, downconverting the received signal to a lower frequency (for example, an intermediate frequency or a near-baseband frequency) or to baseband.
- the RF portion and its associated input processing element receives an RF signal transmitted over a wired (for example, cable) medium, and performs frequency selection by filtering, downconverting, and filtering again to a desired frequency band.
- Adding elements can include inserting elements in between existing elements, such as, for example, inserting amplifiers and an analog-to-digital converter.
- the RF portion includes an antenna.
- the USB and/or HDMI terminals can include respective interface processors for connecting system 150 to other electronic devices across USB and/or HDMI connections.
- various aspects of input processing, for example, Reed-Solomon error correction, can be implemented, for example, within a separate input processing IC or within processor 152 as necessary.
- aspects of USB or HDMI interface processing can be implemented within separate interface ICs or within processor 152 as necessary.
- the demodulated, error corrected, and demultiplexed stream is provided to various processing elements, including, for example, processor 152, and encoder/decoder 156 operating in combination with the memory and storage elements to process the datastream as necessary for presentation on an output device.
- the various elements of system 150 can be interconnected using a suitable connection arrangement 174, for example, an internal bus as known in the art, including the Inter-IC (I2C) bus, wiring, and printed circuit boards.
- the system 150 includes communication interface 160 that enables communication with other devices via communication channel 162.
- the communication interface 160 can include, but is not limited to, a transceiver configured to transmit and to receive data over communication channel 162.
- the communication interface 160 can include, but is not limited to, a modem or network card and the communication channel 162 can be implemented, for example, within a wired and/or a wireless medium.
- Data is streamed, or otherwise provided, to the system 150, in various embodiments, using a wireless network such as a Wi-Fi network, for example IEEE 802.11 (IEEE refers to the Institute of Electrical and Electronics Engineers).
- the Wi-Fi signal of these embodiments is received over the communications channel 162 and the communications interface 160 which are adapted for Wi-Fi communications.
- the communications channel 162 of these embodiments is typically connected to an access point or router that provides access to external networks including the Internet for allowing streaming applications and other over-the-top communications.
- Other embodiments provide streamed data to the system 150 using a set-top box that delivers the data over the HDMI connection of the input block 172.
- Still other embodiments provide streamed data to the system 150 using the RF connection of the input block 172.
- various embodiments provide data in a non-streaming manner.
- various embodiments use wireless networks other than Wi-Fi, for example a cellular network or a Bluetooth network.
- the system 150 can provide an output signal to various output devices, including a display 176, speakers 178, and other peripheral devices 180.
- the display 176 of various embodiments includes one or more of, for example, a touchscreen display, an organic light-emitting diode (OLED) display, a curved display, and/or a foldable display.
- the display 176 can be for a television, a tablet, a laptop, a cell phone (mobile phone), or other device.
- the display 176 can also be integrated with other components (for example, as in a smart phone), or separate (for example, an external monitor for a laptop).
- the other peripheral devices 180 include, in various examples of embodiments, one or more of a stand-alone digital video disc (or digital versatile disc) (DVD, for both terms), a disk player, a stereo system, and/or a lighting system.
- Various embodiments use one or more peripheral devices 180 that provide a function based on the output of the system 150. For example, a disk player performs the function of playing the output of the system 150.
- control signals are communicated between the system 150 and the display 176, speakers 178, or other peripheral devices 180 using signaling such as AV.Link, Consumer Electronics Control (CEC), or other communications protocols that enable device-to-device control with or without user intervention.
- the output devices can be communicatively coupled to system 150 via dedicated connections through respective interfaces 164, 166, and 168. Alternatively, the output devices can be connected to system 150 using the communications channel 162 via the communications interface 160.
- the display 176 and speakers 178 can be integrated in a single unit with the other components of system 150 in an electronic device such as, for example, a television.
- the display interface 164 includes a display driver, such as, for example, a timing controller (T-Con) chip.
- the display 176 and speakers 178 can alternatively be separate from one or more of the other components, for example, if the RF portion of input 172 is part of a separate set-top box.
- the output signal can be provided via dedicated output connections, including, for example, HDMI ports, USB ports, or COMP outputs.
- other inputs may be used to determine the position and orientation of the user for the purpose of rendering content.
- a user may select and/or adjust a desired viewpoint and/or viewing direction with the use of a touch screen, keypad or keyboard, trackball, joystick, or other input.
- where the display device has sensors such as accelerometers and/or gyroscopes, the viewpoint and orientation used for the purpose of rendering content may be selected and/or adjusted based on motion of the display device.
- FIG. 2A gives the block diagram of a block-based hybrid video encoding system 200. Variations of this encoder 200 are contemplated, but the encoder 200 is described below for purposes of clarity without describing all expected variations.
- the input video signal 202 including a picture to be encoded is partitioned (206) and processed block by block in units of, for example, CUs. Different CUs may have different sizes. In VTM-1.0, a CU can be up to 128x128 pixels. However, unlike HEVC, which partitions blocks only based on quadtrees, in VTM-1.0 a coding tree unit (CTU) is split into CUs to adapt to varying local characteristics based on a quad/binary/ternary-tree structure.
- each CU is always used as the basic unit for both prediction and transform without further partitions.
- a CTU is firstly partitioned by a quad-tree structure.
- each quad-tree leaf node can be further partitioned by a binary and ternary tree structure.
- Different splitting types may be used, such as quaternary partitioning, vertical binary partitioning, horizontal binary partitioning, vertical ternary partitioning, and horizontal ternary partitioning.
- spatial prediction (208) and/or temporal prediction (210) may be performed.
- Spatial prediction (or “intra prediction”) uses pixels from the samples of already coded neighboring blocks (which are called reference samples) in the same video picture/slice to predict the current video block. Spatial prediction reduces spatial redundancy inherent in the video signal.
- Temporal prediction (also referred to as “inter prediction” or “motion compensated prediction”) uses reconstructed pixels from the already coded video pictures to predict the current video block. Temporal prediction reduces temporal redundancy inherent in the video signal.
- a temporal prediction signal for a given CU may be signaled by one or more motion vectors (MVs) which indicate the amount and the direction of motion between the current CU and its temporal reference. Also, if multiple reference pictures are supported, a reference picture index may additionally be sent, which is used to identify from which reference picture in the reference picture store (212) the temporal prediction signal comes.
- the mode decision block (214) in the encoder chooses the best prediction mode, for example based on a rate-distortion optimization method. This selection may be made after spatial and/or temporal prediction is performed.
- the intra/inter decision may be indicated by, for example, a prediction mode flag.
- the prediction block is subtracted from the current video block (216) to generate a prediction residual.
- the prediction residual is de-correlated using transform (218) and quantized (220).
- the encoder may bypass both transform and quantization, in which case the residual may be coded directly without the application of the transform or quantization processes.
- the quantized residual coefficients are inverse quantized (222) and inverse transformed (224) to form the reconstructed residual, which is then added back to the prediction block (226) to form the reconstructed signal of the CU.
- Further in-loop filtering, such as deblocking/SAO (Sample Adaptive Offset) filtering, may be applied (228) on the reconstructed CU to reduce encoding artifacts before it is put in the reference picture store (212) and used to code future video blocks.
- the coding mode (inter or intra), prediction mode information, motion information, and quantized residual coefficients are all sent to the entropy coding unit (108) to be further compressed and packed to form the bitstream.
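The core of this hybrid coding loop can be sketched compactly. The snippet below is an illustrative simplification, not the codec's actual implementation: it uses a separable 2D DCT from scipy in place of the standard's transforms, a plain uniform quantizer with a hypothetical step `qstep`, and omits entropy coding; the reference numbers in the comments follow FIG. 2A.

```python
import numpy as np
from scipy.fft import dctn, idctn

def hybrid_encode_block(block, prediction, qstep=8.0):
    """Residual -> transform -> quantization, then the inverse path that
    rebuilds the same block the decoder will reconstruct."""
    residual = block - prediction                    # subtraction (216)
    coeffs = dctn(residual, norm='ortho')            # transform (218)
    q = np.round(coeffs / qstep)                     # quantization (220)
    rec_residual = idctn(q * qstep, norm='ortho')    # inverse quant/transform (222, 224)
    reconstructed = prediction + rec_residual        # addition (226)
    return q, reconstructed   # q -> entropy coding; reconstructed -> reference store
```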
- FIG. 2B gives a block diagram of a block-based video decoder 250.
- a bitstream is decoded by the decoder elements as described below.
- Video decoder 250 generally performs a decoding pass reciprocal to the encoding pass as described in FIG. 2A.
- the encoder 200 also generally performs video decoding as part of encoding video data.
- the input of the decoder includes a video bitstream 252, which can be generated by video encoder 200.
- the video bitstream 252 is first unpacked and entropy decoded at entropy decoding unit 254 to obtain transform coefficients, motion vectors, and other coded information.
- Picture partition information indicates how the picture is partitioned.
- the decoder may therefore divide (256) the picture according to the decoded picture partitioning information.
- the coding mode and prediction information are sent to either the spatial prediction unit 258 (if intra coded) or the temporal prediction unit 260 (if inter coded) to form the prediction block.
- the residual transform coefficients are sent to inverse quantization unit 262 and inverse transform unit 264 to reconstruct the residual block.
- the prediction block and the residual block are then added together at 266 to generate the reconstructed block.
- the reconstructed block may further go through in-loop filtering 268 before it is stored in reference picture store 270 for use in predicting future video blocks.
- Light representing an image 312 generated by the image generator 302 is coupled into a waveguide 304 by a diffractive in-coupler 306.
- the in-coupler 306 diffracts the light representing the image 312 into one or more diffractive orders.
- light ray 308 which is one of the light rays representing a portion of the bottom of the image, is diffracted by the in-coupler 306, and one of the diffracted orders 310 (e.g. the second order) is at an angle that is capable of being propagated through the waveguide 304 by total internal reflection.
- the image generator 302 displays images as directed by a control module 324, which operates to render image data, video data, point cloud data, or other displayable data.
- At least a portion of the light 310 that has been coupled into the waveguide 304 by the diffractive in-coupler 306 is coupled out of the waveguide by a diffractive out-coupler 314.
- At least some of the light coupled out of the waveguide 304 replicates the incident angle of light coupled into the waveguide.
- out-coupled light rays 316a, 316b, and 316c replicate the angle of the in-coupled light ray 308. Because light exiting the out-coupler replicates the directions of light that entered the in-coupler, the waveguide substantially replicates the original image 312. A user's eye 318 can focus on the replicated image.
- the out-coupler 314 out-couples only a portion of the light with each reflection allowing a single input beam (such as beam 308) to generate multiple parallel output beams (such as beams 316a, 316b, and 316c). In this way, at least some of the light originating from each portion of the image is likely to reach the user's eye even if the eye is not perfectly aligned with the center of the out- coupler. For example, if the eye 318 were to move downward, beam 316c may enter the eye even if beams 316a and 316b do not, so the user can still perceive the bottom of the image 312 despite the shift in position.
- the out-coupler 314 thus operates in part as an exit pupil expander in the vertical direction.
- the waveguide may also include one or more additional exit pupil expanders (not shown in FIG. 3A) to expand the exit pupil in the horizontal direction.
- the waveguide 304 is at least partly transparent with respect to light originating outside the waveguide display.
- the light 320 from real-world objects such as object 322 traverses the waveguide 304, allowing the user to see the real-world objects while using the waveguide display.
- as light 320 from real-world objects also goes through the diffraction grating 314, there will be multiple diffraction orders and hence multiple images.
- it is desirable for the grating 314 to have a high diffraction efficiency for light 320 at diffraction order zero (no deviation by 314), while higher diffraction orders are lower in energy.
- the out-coupler 314 is preferably configured to let through the zero order of the real image. In such embodiments, images displayed by the waveguide display may appear to be superimposed on the real world.
- FIG. 3B is a schematic side view illustrating an example alternative display type that may be used with extended reality applications according to some embodiments.
- a control module 332 controls a display 334, which may be an LCD, to display an image.
- the headmounted display includes a partly-reflective surface 336 that reflects (and in some embodiments, both reflects and focuses) the image displayed on the LCD to make the image visible to the user.
- the partly-reflective surface 336 also allows the passage of at least some exterior light, permitting the user to see their surroundings.
- FIG. 3C is a schematic side view illustrating an example alternative display type that may be used with extended reality applications according to some embodiments.
- a control module 342 controls a display 344, which may be an LCD, to display an image.
- the image is focused by one or more lenses of display optics 346 to make the image visible to the user.
- exterior light does not reach the user's eyes directly.
- an exterior camera 348 may be used to capture images of the exterior environment and display such images on the display 344 together with any virtual content that may also be displayed.
- Point clouds have arisen as one of the main 3D scene representations for such applications.
- a point cloud frame is a set of 3D points, each point being represented with its 3D position and possibly several attributes such as color, transparency, and reflectance.
- a standardization activity for point cloud compression is carried out by the ISO/IEC JTC1/SC29/WG7 "MPEG 3D Graphics and Haptics Coding" group. See Graziosi, Danillo, et al., An Overview of Ongoing Point Cloud Compression Standardization Activities: Video-Based (V-PCC) and Geometry-Based (G-PCC), 9:1 APSIPA TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING 1-15 (2020) ("Graziosi"). The first edition of the Geometry-based Point Cloud Compression (G-PCC) standard, part 9 of the ISO/IEC 23090 series on the coded representation of immersive media, has been published.
- FIG. 5 is a schematic illustration showing an example Geometry-based Point Cloud Compression (G-PCC) encoding for dense dynamic point clouds according to some embodiments.
- the geometry is encoded by a GeS-TM codec with an octree representation 504 for the N-T coarsest resolution levels 502 (beginning from the root node at level 0) followed by a surface approximation (triangle soup or "TriSoup” 506) of all occupied nodes at level N-T-1. See GeS TM 1.0.
- the left diagram is a 2D tree representation of the 3D octree recursive subdivision of each cube into 8 sub-cubes at the finer level.
- a parent node has 8 children.
- the 8 circles correspond to 8 sub-cubes.
- a recursive process begins at the root level, level 0. When incrementing a level by 1, each occupied parent cube is divided into 8 child cubes. This recursive process may continue until the leaf level is met, where each occupied cube contains a single voxel, or may be stopped earlier at an arbitrary level for some embodiments.
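As a concrete illustration of this recursion, the following sketch (illustrative only; the function name, integer-voxel assumption, and depth parameter are not from the patent) lists the occupied cubes at every level by quantizing voxel coordinates to that level's cube grid:

```python
import numpy as np

def occupied_cubes_per_level(points, depth):
    """Return the origins of occupied cubes at every octree level.
    Level 0 is the single root cube; incrementing the level by 1 divides
    each occupied cube into 8 children, and a child cube is occupied if
    it contains at least one point."""
    points = np.asarray(points, dtype=np.int64)
    levels = []
    for level in range(depth + 1):
        side = 2 ** (depth - level)                 # cube side at this level
        origins = np.unique(points // side * side, axis=0)
        levels.append(origins)
    return levels

# Example: three voxels in an 8x8x8 grid (octree depth N = 3).
pts = [(0, 0, 0), (1, 0, 0), (7, 7, 7)]
for lvl, cubes in enumerate(occupied_cubes_per_level(pts, depth=3)):
    print(f"level {lvl}: {len(cubes)} occupied cube(s)")
```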
- the Graph Fourier Transform is efficient at decorrelating the attributes on top of an adjacency graph of occupied voxels according to Zhang, Cha, et al., Point Cloud Attribute Compression with Graph Transform, IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, PARIS, FRANCE 2066-2070 (2014) ("Zhang").
- the computational demand of the eigenvalue decomposition may become prohibitive when performed on overly large point cloud supports.
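A minimal GFT sketch in the spirit of Zhang et al. follows; it is illustrative (the 6-connectivity rule, dense Laplacian, and function name are assumptions, not the paper's exact construction), and its dense eigendecomposition makes the cost just mentioned explicit:

```python
import numpy as np

def gft(voxels, attrs):
    """Build the adjacency graph of occupied voxels (6-connectivity), take
    the eigenvectors of its graph Laplacian as the transform basis, and
    project the attributes onto that basis."""
    voxels = np.asarray(voxels)
    attrs = np.asarray(attrs, dtype=float)
    n = len(voxels)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.abs(voxels[i] - voxels[j]).sum() == 1:   # face-adjacent voxels
                A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A     # combinatorial Laplacian L = D - A
    # The dense eigendecomposition below costs O(n^3): this is the step that
    # becomes prohibitive on large supports, motivating block-wise application.
    _, basis = np.linalg.eigh(L)       # for a connected graph, basis[:, 0] is the DC mode
    return basis.T @ attrs, basis      # GFT coefficients and the transform basis
```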
- a shape-adaptive DCT may be extended to 3 dimensions, with one-dimensional DCT transforms of variable length successively applied along the x, y, and z directions on the attributes of occupied voxels. See Sikora, T. and Makai, B., Shape-Adaptive DCT for Generic Coding of Video, 5:1 IEEE TRANS. ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY (1995) ("Sikora"). In some embodiments, an SA-DCT may be implemented.
- FIGs. 9A-9C illustrate the first two steps in the y and x directions; the extension to a 3rd dimension follows these 2-dimensional examples.
- FIG. 9A shows the original point cloud.
- FIG. 9B shows vertically reordered (y-direction) voxels.
- FIG. 9C shows horizontally reordered (x-direction) voxels.
- a one dimensional (1-D) discrete cosine transform (DCT) performed based on the number of voxels is also indicated on the figures.
- a DCT transform is applied to input signals of varying dimensions, at each line or column.
- a DCT-3 transform may be applied when there are 3 input values.
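One shape-adaptive pass can be sketched as below (an illustration under assumptions: a boolean occupancy `mask`, packing occupied samples to the top of each column, and scipy's orthonormal DCT-II standing in for the transform of Sikora); applying the same pass along the remaining axes yields the 2D and, by extension, 3D SA-DCT:

```python
import numpy as np
from scipy.fft import dct

def sa_dct_pass(mask, values):
    """One shape-adaptive pass along the vertical (column) direction:
    occupied samples of each column are packed to the top and transformed
    by a 1D DCT whose length equals that column's occupancy."""
    out_mask = np.zeros_like(mask)
    out_vals = np.zeros_like(values, dtype=float)
    for x in range(mask.shape[1]):
        col = values[mask[:, x], x]        # occupied samples of column x
        n = len(col)
        if n:
            out_vals[:n, x] = dct(col, type=2, norm='ortho')  # length-n DCT
            out_mask[:n, x] = True         # support packed to the top
    return out_mask, out_vals
```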
- a problem to be solved is increasing both the compression performance and the computational efficiency of attribute compression on dense point clouds, by leveraging a nested 2-stage geometry compression scheme (such as the TriSoup modelling of leaf nodes of an octree currently implemented by G-PCC).
- FIG. 10 is a schematic illustration showing an example two-stage geometry representation for encoding of attributes according to some embodiments.
- a two-stage compression/decompression scheme for point cloud attributes may be based on an N-level octree representation of geometry.
- the attributes of points in each leaf node at level (N-T-1) (for leaf nodes of size 2^T x 2^T x 2^T covering levels N-T to N-1 (1004)) are encoded with a transform encompassing all node points as inputs, for example the GFT or the 3D SA-DCT.
- the transform is performed once over all points belonging to the cubes/nodes at the chosen level N-T-1.
- a hierarchical transform (such as RAHT) is applied on the (N-T) levels 1002 of the upper part of the octree, for the transform coefficients of each leaf node up to the root node (levels (N-T-1) to 0).
- for the nodes at level N-T-1, there are cubes of size 2^T.
- the octree representation is described from top to bottom, beginning with a single node encompassing the entire point cloud at level 0, and then dividing the point cloud by 2 along each direction at each successive level.
- only the average direct current (DC) transform coefficient of each leaf node is input to the hierarchical transform.
- the other alternating current (AC) coefficients are also inputted into the hierarchical transform.
- an attribute transform on the leaf nodes may be extended to a fourth dimension (time) over a group of consecutive point cloud frames. As a result, the four dimensions are x, y, z, and time.
- the lower portion of the model, which is for leaf nodes of size 2^T x 2^T x 2^T (for example, 32^3), contains a set of 3D points (or occupied voxels) at full resolution in which the geometry is represented either with an octree or a mesh (triangle soup).
- the upper portion of the model shows an octree with (N-T) resolution levels. Attribute encoding is performed on top of the reconstructed geometry.
- FIG. 11 is a schematic illustration showing an example two-stage compression scheme according to some embodiments.
- a two-stage compression scheme is depicted in FIG. 11 for an octree 1100 with levels 0 to (N-T-1).
- an attribute encoding is performed independently within each leaf node, using a 3D-block transform tF( ) 1104 for decorrelating the input signal.
- a leaf node corresponds to a block b of size 2^T x 2^T x 2^T, with N_b occupied voxels (N_b ≤ 2^T x 2^T x 2^T).
- the N_b transform coefficients w_i, for i ∈ [0, N_b − 1], are generated from the N_b attributes a_i according to Eq. 1: (w_0, ..., w_{N_b−1}) = tF(a_0, ..., a_{N_b−1}).
- the transform coefficients w are further quantized 1108 and arithmetically coded 1110.
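A sketch of this first stage is shown below (illustrative assumptions: `basis` is an orthonormal block-transform basis such as the GFT eigenvector basis from the earlier sketch, `qstep` is a hypothetical uniform quantization step, and the arithmetic coder 1110 is omitted):

```python
import numpy as np

def encode_leaf(attrs, basis, qstep=1.0):
    """First-stage encoding of one leaf node: project the N_b attributes
    onto an orthonormal block-transform basis (Eq. 1), then uniformly
    quantize the coefficients."""
    w = basis.T @ np.asarray(attrs, dtype=float)   # Eq. 1: coefficients w = tF(a)
    q = np.round(w / qstep).astype(int)            # quantization (1108)
    return q[0], q[1:]                             # DC coefficient, AC coefficients
```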
- Second Stage Hierarchical Encoding of Leaf Node Attribute DC Values
- a second stage further leverages the correlation across leaf nodes by inputting the DC values of each transformed attribute representation to a hierarchical transform such as a RAHT transform or an octree node transform 1102, as illustrated in FIG. 11.
- the transformed representation of attributes includes, for each resolution level l ∈ [1, N − T − 1] and each occupied node n at level l, the AC coefficients {h_i}, i ∈ [1, 7], and the DC value A_0 of the root node.
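The elementary RAHT operation that carries a DC value one level up can be sketched as the weighted two-point butterfly below (a sketch of the standard RAHT butterfly, not the patent's exact signal flow; w1 and w2 denote the numbers of points covered by the two sibling nodes):

```python
import numpy as np

def raht_merge(dc1, w1, dc2, w2):
    """One two-point RAHT butterfly: merge the DC values of two sibling
    nodes, weighted by the point counts they cover.  The low-pass output
    propagates one level up; the high-pass (AC) output is entropy-coded."""
    s1, s2 = np.sqrt(w1), np.sqrt(w2)
    norm = np.sqrt(w1 + w2)
    dc = (s1 * dc1 + s2 * dc2) / norm    # weighted average (new DC)
    ac = (s1 * dc2 - s2 * dc1) / norm    # weighted difference (AC)
    return dc, ac, w1 + w2               # the parent carries the summed weight
```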
- a geometry encode 1202 for octree levels 0 to N-T-1 is performed before the geometry encoding 1206, 1214, 1222 and attribute encoding 1204.
- the hierarchical encoding 1204 of the DC values of transformed attribute coefficients may be performed in a second step.
- the right side of FIG. 12 shows the outputting of DC and AC coefficients.
- calculation of the DC and AC coefficients shown in FIG. 11 may be applied to FIG. 12.
- the RAHT transform is shown in FIG. 12, other hierarchical transforms may be used in accordance with some embodiments.
- the attribute bitstream includes the concatenated AC coefficients of each leaf node, plus the hierarchically encoded DC values.
- FIG. 13 is a process diagram illustrating an example parallelized decoding according to some embodiments.
- the sequences of geometry decoding 1306, 1314, 1322, geometry reconstruction 1308, 1316, 1324, and AC attribute decoding 1310, 1318, 1326 may be performed independently on each leaf node.
- the corresponding geometry sub-bitstream is first decoded and the corresponding geometry of this sub-part of the point cloud is reconstructed. Then, the attribute sub-bitstream corresponding to this same node is decoded, thereby building on the reconstructed geometry and yielding the high-frequency component of the attributes of the reconstructed points.
- the last step is to decode the RAHT-encoded leaf-DC sub-bitstream, yielding the average direct current (DC) attribute values of each leaf node.
- the recovered DC value is finally added to the high-frequency attribute components of all points belonging to the leaf node.
- Geometry decoding 1306, 1314, 1322 and geometry reconstruction 1308, 1316, 1324 may be performed as shown in FIG. 4 for some embodiments.
- a geometry decode 1302 for octree levels 0 to N-T-1 is performed before the geometry decoding 1306, 1314, 1322.
- the decoding 1304 of DC coefficients also may be performed in parallel.
- the attribute values of each leaf node are added 1312, 1320, 1328 on the right side of FIG. 13, and the average value of the attribute values is calculated as the DC coefficient.
- the attribute encoding is performed jointly per group of point cloud frames (GOF).
- the attribute encoding may be performed per 8 consecutive frames.
- a 3D+T 4-dimensional point cloud is created by considering together the 8 blocks at the same position in the frames of the GOF.
- a transform tF( ) may be applied globally to the attributes of the points contained in such a sub-volume.
- a 4-dimensional GFT transform may be performed, thereby building on a graph connecting spatially and temporally neighboring points.
- the point cloud frames of the GOF are spatially aligned before transform, and the 3D motion information to register the point cloud frames is transmitted in parallel.
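Forming the 3D+T support can be sketched as follows (illustrative only; the data layout, a list of per-frame (coordinates, attributes) pairs, and the function name are assumptions):

```python
import numpy as np

def gather_4d_block(frames, origin, side):
    """Stack the points of the co-located (origin, side) block across the
    frames of a GOF into a single 4D (x, y, z, t) point set, on which a
    global transform tF (e.g., a 4D GFT) can then be applied."""
    origin = np.asarray(origin)
    pts4d, attrs = [], []
    for t, (xyz, a) in enumerate(frames):    # frames: list of (coords, attrs)
        inside = np.all((xyz >= origin) & (xyz < origin + side), axis=1)
        for p, v in zip(xyz[inside], a[inside]):
            pts4d.append((*p, t))            # time becomes the 4th coordinate
            attrs.append(v)
    return np.array(pts4d), np.array(attrs)
```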
- FIG. 14 is a flowchart illustrating an example process for encoding a point cloud according to some embodiments.
- an example process 1400 may include obtaining 1402 a point cloud, the point cloud including: a first set of information describing a geometry of the point cloud, and a second set of information describing attributes of the point cloud.
- the example process 1400 may further include performing 1404 a geometry encoding of the first set of information to generate a geometry bitstream.
- the example process 1400 may further include performing 1406 a two-stage attribute compression process to generate an attribute bitstream, wherein a first stage of the two-stage attribute compression process includes performing a block transform on each of a set of nodes of the point cloud, and wherein a second stage of the two-stage attribute compression process includes performing a hierarchical encoding over the set of nodes of the point cloud.
- the example process 1400 may further include outputting 1408 an output bitstream including the geometry bitstream and the attribute bitstream.
- FIG. 15 is a flowchart illustrating an example process for decoding a point cloud according to some embodiments.
- an example process 1500 may include obtaining 1502 an input bitstream, wherein the input bitstream includes a geometry bitstream and an attribute bitstream.
- the example process 1500 may further include performing 1504 a geometry decode for one or more non-leaf octree nodes within the geometry bitstream to generate one or more sets of decoded non-leaf node geometry data.
- the example process 1500 may further include performing 1506 a geometry decode for one or more leaf nodes within the geometry bitstream to generate one or more sets of decoded leaf node geometry data.
- the example process 1500 may further include performing 1508 a geometry reconstruction on the one or more sets of decoded leaf node geometry data.
- the example process 1500 may further include performing 1510 an attribute hierarchical decode for a first set of nodes within the attribute bitstream to generate a first set of decoded attribute data.
- the example process 1500 may further include performing 1512 an attribute leaf decode for a second set of nodes within the attribute bitstream to generate a second set of decoded attribute data.
- the example process 1500 may further include performing 1514 an arithmetic average using the first set of decoded attribute data and the second set of decoded attribute data to generate an output set of attribute data.
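The final combining step can be sketched per leaf node as below (illustrative assumptions: `basis` is the same orthonormal block-transform basis used by the encoder, `dc_value` is the average attribute recovered by the hierarchical decode, and `qstep` mirrors the encoder's quantization step):

```python
import numpy as np

def decode_leaf(dc_value, ac, basis, qstep=1.0):
    """Reconstruct one leaf node's attributes: inverse-quantize the AC
    coefficients, inverse-transform the high-frequency part only, then add
    the DC attribute value recovered by the hierarchical (RAHT) decode to
    every reconstructed point of the leaf."""
    w = np.zeros(basis.shape[1])
    w[1:] = np.asarray(ac, dtype=float) * qstep   # inverse quantization, AC only
    high_freq = basis @ w                         # inverse block transform
    return high_freq + dc_value                   # add the recovered DC average
```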
- some embodiments may be applied to any XR contexts such as, e.g., virtual reality (VR) / mixed reality (MR) / augmented reality (AR) contexts.
- some embodiments may be applied to a wearable device (which may or may not be attached to the head) capable of, e.g., XR, VR, AR, and/or MR.
- the block transform is a 3-dimensional Graph Fourier Transform (GFT).
- the block transform is a 3-dimensional Shape-Adaptive Discrete Cosine Transform (SA-DCT).
- the block transform is a 4-dimensional Graph Fourier Transform (GFT), and wherein one of the 4 dimensions is time.
- Some embodiments of the first example method may further include performing a geometry encode of one or more leaf nodes of the point cloud.
- a first example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions operative, when executed by the processor, to cause the apparatus to perform any one of the methods listed above.
- a second example method in accordance with some embodiments may include: obtaining an input bitstream, wherein the input bitstream includes a geometry bitstream and an attribute bitstream; performing a geometry decode for one or more non-leaf, octree nodes within the geometry bitstream to generate one or more sets of decoded non-leaf node geometry data; performing a geometry decode for one or more leaf nodes within the geometry bitstream to generate one or more sets of decoded leaf node geometry data; performing a geometry reconstruction on the one or more sets of decoded leaf node geometry data; performing an attribute hierarchical decode for a set of nodes within the attribute bitstream to generate a first set of decoded attribute data; performing an attribute leaf decode for the set of nodes within the attribute bitstream to generate a second set of decoded attribute data; and performing an arithmetic average using the first set of decoded attribute data and the second set of decoded attribute data to generate an output set of attribute data.
- a second example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions operative, when executed by the processor, to cause the apparatus to perform any one of the methods listed above.
- a third example apparatus in accordance with some embodiments may include at least one processor configured to perform any one of the methods listed above.
- This disclosure describes a variety of aspects, including tools, features, embodiments, models, approaches, etc. Many of these aspects are described with specificity and, at least to show the individual characteristics, are often described in a manner that may sound limiting. However, this is for purposes of clarity in description, and does not limit the disclosure or scope of those aspects. Indeed, all of the different aspects can be combined and interchanged to provide further aspects. Moreover, the aspects can be combined and interchanged with aspects described in earlier filings as well.
- At least one of the aspects generally relates to video encoding and decoding, and at least one other aspect generally relates to transmitting a bitstream generated or encoded.
- At least one of the aspects can be implemented as a method, an apparatus, a computer readable storage medium having stored thereon instructions for encoding or decoding video data according to any of the methods described, and/or a computer readable storage medium having stored thereon a bitstream generated according to any of the methods described.
- each of the methods includes one or more steps or actions for achieving the described method. Unless a specific order of steps or actions is required for proper operation of the method, the order and/or use of specific steps and/or actions may be modified or combined. Additionally, terms such as “first”, “second”, etc. may be used in various embodiments to modify an element, component, step, operation, etc., such as, for example, a "first decoding” and a "second decoding”. Use of such terms does not imply an ordering to the modified operations unless specifically required. So, in this example, the first decoding need not be performed before the second decoding, and may occur, for example, before, during, or in an overlapping time period with the second decoding.
- Embodiments described herein may be carried out by computer software implemented by a processor or other hardware, or by a combination of hardware and software.
- the embodiments can be implemented by one or more integrated circuits.
- the processor can be of any type appropriate to the technical environment and can encompass one or more of microprocessors, general purpose computers, special purpose computers, and processors based on a multi-core architecture, as nonlimiting examples.
- Decoding can encompass all or part of the processes performed, for example, on a received encoded sequence in order to produce a final output suitable for display.
- processes include one or more of the processes typically performed by a decoder, for example, entropy decoding, inverse quantization, inverse transformation, and differential decoding.
- processes also, or alternatively, include processes performed by a decoder of various implementations described in this disclosure, for example, extracting a picture from a tiled (packed) picture, determining an upsampling filter to use and then upsampling a picture, and flipping a picture back to its intended orientation.
- decoding refers only to entropy decoding
- decoding refers only to differential decoding
- decoding refers to a combination of entropy decoding and differential decoding. Whether the phrase “decoding process” is intended to refer specifically to a subset of operations or generally to the broader decoding process will be clear based on the context of the specific descriptions.
- encoding can encompass all or part of the processes performed, for example, on an input video sequence in order to produce an encoded bitstream.
- processes include one or more of the processes typically performed by an encoder, for example, partitioning, differential encoding, transformation, quantization, and entropy encoding.
- processes also, or alternatively, include processes performed by an encoder of various implementations described in this disclosure.
- encoding refers only to entropy encoding
- encoding refers only to differential encoding
- encoding refers to a combination of differential encoding and entropy encoding.
- a TV, set-top box, cell phone, tablet, or other electronic device that selects (e.g. using a tuner) a channel to receive a signal including an encoded image, and performs adaptation of filter parameters according to any of the embodiments described.
Abstract
Some embodiments of a method may include: obtaining a point cloud, the point cloud including: a first set of information describing a geometry of the point cloud, and a second set of information describing attributes of the point cloud; performing a geometry encoding of the first set of information to generate a geometry bitstream; performing a two-stage attribute compression process to generate an attribute bitstream, wherein a first stage of the two-stage attribute compression process includes performing a block transform on each of a set of nodes of the point cloud, and wherein a second stage of the two-stage attribute compression process includes performing a hierarchical encoding over the set of nodes of the point cloud; and outputting an output bitstream including the geometry bitstream and the attribute bitstream.
Description
TWO-STAGE POINT CLOUD ATTRIBUTE ENCODING SCHEME WITH NESTED LOCAL AND GLOBAL TRANSFORMS
CROSS-REFERENCE TO OTHER APPLICATIONS
[0001] The present application claims benefit of European Patent Application No. EP23306739.6, entitled "TWO-STAGE POINT CLOUD ATTRIBUTE ENCODING SCHEME WITH NESTED LOCAL AND GLOBAL TRANSFORMS” and filed October 9, 2023, which is hereby incorporated by reference in its entirety.
INCORPORATION BY REFERENCE
[0002] The present application incorporates by reference in its entirety the following application: European Patent Application Serial No. EP23306740, entitled "HYBRID POINT CLOUD ENCODING METHOD WITH LOCAL SURFACE REPRESENTATION” and filed October 9, 2023.
BACKGROUND
[0003] Point clouds are data that may be used in numerous business domains, such as autonomous driving, robotics, AR/VR, civil engineering, computer graphics, and the animation/movie industry. 3D LiDAR sensors have been deployed in self-driving cars, and affordable LiDAR sensors include the Velodyne Velabit, the Apple iPad Pro 2020, and the Intel RealSense LiDAR camera L515. With advances in sensing technologies, 3D point cloud data is becoming more widespread, such as in the applications and industries mentioned above.
SUMMARY
[0004] A first example method in accordance with some embodiments may include: obtaining a point cloud, the point cloud including: a first set of information describing a geometry of the point cloud, and a second set of information describing attributes of the point cloud; performing a geometry encoding of the first set of information to generate a geometry bitstream; performing a two-stage attribute compression process to generate an attribute bitstream, wherein a first stage of the two-stage attribute compression process includes performing a block transform on each of a set of nodes of the point cloud, and wherein a second stage of the two-stage attribute compression process includes performing a hierarchical encoding over the set of nodes of the point cloud, and outputting an output bitstream including the geometry bitstream and the attribute bitstream.
[0005] Some embodiments of the first example method may further include: quantizing an output of the first stage of the two-stage attribute compression process to generate a first set of quantized bits; quantizing an output of the second stage of the two-stage attribute compression process to generate a second set of quantized bits; and arithmetic coding the first and second sets of quantized bits to generate the attribute bitstream.
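By way of non-limiting illustration, the following Python sketch shows how the two quantized coefficient sets of paragraphs [0004]-[0005] might be produced and entropy coded. It assumes a uniform scalar quantizer, and block_transform, hierarchical_transform, and arithmetic_encode are hypothetical callables standing in for the stages named above, not an API defined by this disclosure.

```python
import numpy as np

def quantize(coeffs, step):
    """Uniform scalar quantization (one possible quantizer choice)."""
    return np.round(np.asarray(coeffs, dtype=float) / step).astype(np.int64)

def encode_attributes(nodes, q_step, block_transform,
                      hierarchical_transform, arithmetic_encode):
    """Two-stage attribute encoder sketch.

    Stage 1: a local block transform per node (e.g., GFT, SA-DCT, or KLT).
    Stage 2: a global hierarchical encoding over the per-node outputs.
    Both quantized coefficient sets feed a single arithmetic coder.
    """
    stage1 = [block_transform(n) for n in nodes]     # local transforms
    stage2 = hierarchical_transform(stage1)          # global transform
    q1 = [quantize(c, q_step) for c in stage1]       # first set of quantized bits
    q2 = quantize(stage2, q_step)                    # second set of quantized bits
    return arithmetic_encode(q1, q2)                 # attribute bitstream
```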
[0006] In some embodiments of the first example method, the block transform is a 3-dimensional Graph Fourier Transform (GFT).
[0007] In some embodiments of the first example method, the block transform is a 3-dimensional Shape-Adaptive Discrete Cosine Transform (SA-DCT).
[0008] In some embodiments of the first example method, the block transform is a 3-dimensional Karhunen-Loève Transform (KLT).
[0009] In some embodiments of the first example method, the block transform is a 4-dimensional Graph Fourier Transform (GFT), and one of the 4 dimensions is time.
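For context, a Graph Fourier Transform of the kind referenced above can be illustrated as follows. This is a minimal sketch assuming an inverse-distance adjacency within a radius and a combinatorial Laplacian; the disclosure does not mandate these particular graph-construction choices. For the 4-dimensional variant of [0009], time could be appended as a fourth coordinate when building the adjacency.

```python
import numpy as np

def graph_fourier_transform(points, attrs, radius=1.0):
    """Illustrative 3D GFT over one node's points (assumed construction).

    Edges connect points closer than `radius`; edge weights are inverse
    distances. The transform basis is the eigenvector matrix of the
    combinatorial graph Laplacian L = D - W.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(points[i] - points[j])
            if 0 < d <= radius:
                W[i, j] = W[j, i] = 1.0 / d
    L = np.diag(W.sum(axis=1)) - W      # combinatorial graph Laplacian
    _, U = np.linalg.eigh(L)            # eigenvectors, ascending eigenvalues
    return U.T @ np.asarray(attrs)      # coefficients; row 0 is DC if connected
```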
[0010] In some embodiments of the first example method, the set of nodes include one or more leaf nodes.
[0011] In some embodiments of the first example method, the set of nodes include a level 1 node below a root level 0 node.
[0012] In some embodiments of the first example method, performing the hierarchical encoding over the set of nodes of the point cloud includes: obtaining, for each of the set of nodes, one or more respective transform coefficients; determining, for each of the set of nodes, an average transform coefficient of the one or more respective transform coefficients; and performing a hierarchical transform on at least the average transform coefficient.
[0013] Some embodiments of the first example method may further include performing the hierarchical transform on at least two transform coefficients.
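A minimal sketch of paragraphs [0012]-[0013], assuming the per-node coefficients are NumPy arrays and hierarchical_transform is a placeholder for the global transform (e.g., the RAHT of paragraph [0020]):

```python
import numpy as np

def hierarchical_stage(per_node_coeffs, hierarchical_transform):
    """Second-stage sketch per [0012]: compute each node's average local
    transform coefficient, then apply the global hierarchical transform
    across nodes. Per [0013], two or more coefficients per node could be
    forwarded instead of the average alone.
    """
    averages = np.stack([np.mean(c, axis=0) for c in per_node_coeffs])
    return hierarchical_transform(averages)
```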
[0014] In some embodiments of the first example method, the two-stage attribute compression process is performed on top of a reconstructed geometry of the point cloud.
[0015] In some embodiments of the first example method, the two-stage attribute compression process is performed on a group of point cloud frames.
[0016] In some embodiments of the first example method, the group of point cloud frames are consecutive frames.
[0017] Some embodiments of the first example method may further include performing a geometry encode of one or more leaf nodes of the point cloud.
[0018] In some embodiments of the first example method, the geometry encode of a first leaf node occurs in parallel with the geometry encode of a second leaf node.
[0019] In some embodiments of the first example method, the geometry encoding is performed in parallel with at least a portion of the two-stage attribute compression process.
[0020] In some embodiments of the first example method, the hierarchical transform is a Region-Adaptive Hierarchical Transform (RAHT).
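As one possible concretization of the RAHT referenced in [0020], the elementary weighted two-point kernel (applied successively along x, y, and z within a 2x2x2 node, as in FIG. 7) can be sketched as follows; the sign convention is illustrative and may differ from a given reference implementation.

```python
import numpy as np

def raht_butterfly(a1, w1, a2, w2):
    """One weighted 2-point RAHT kernel.

    a1, a2: attribute (or DC) values being merged; w1, w2: their point counts.
    Returns the low-pass (DC) and high-pass (AC) coefficients plus the
    combined weight propagated to the next octree level.
    """
    s = np.sqrt(w1 + w2)
    dc = (np.sqrt(w1) * a1 + np.sqrt(w2) * a2) / s
    ac = (np.sqrt(w2) * a1 - np.sqrt(w1) * a2) / s
    return dc, ac, w1 + w2
```

The 2x2 kernel is orthonormal for any positive weights, so the inverse transform is simply its transpose.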
[0021] A first example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions operative, when executed by the processor, to cause the apparatus to perform any one of the methods listed above.
[0022] A second example method in accordance with some embodiments may include: obtaining an input bitstream, wherein the input bitstream includes a geometry bitstream and an attribute bitstream; performing a geometry decode for one or more non-leaf, octree nodes within the geometry bitstream to generate one or more sets of decoded non-leaf node geometry data; performing a geometry decode for one or more leaf nodes within the geometry bitstream to generate one or more sets of decoded leaf node geometry data; performing a geometry reconstruction on the one or more sets of decoded leaf node geometry data; performing an attribute hierarchical decode for a set of nodes within the attribute bitstream to generate a first set of decoded attribute data; performing an attribute leaf decode for the set of nodes within the attribute bitstream to generate a second set of decoded attribute data; and performing an arithmetic average using the first set of decoded attribute data and the second set of decoded attribute data to generate an output set of attribute data.
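A minimal decoder-side sketch of the averaging step in the second example method; hierarchical_decode and leaf_decode are hypothetical names standing in for the two attribute decoding paths, not a defined API.

```python
import numpy as np

def combine_attribute_decodes(attr_bitstream, nodes,
                              hierarchical_decode, leaf_decode):
    """Combine the two attribute reconstructions by an element-wise
    arithmetic average, yielding the output attribute set."""
    first = np.asarray(hierarchical_decode(attr_bitstream, nodes))  # first set
    second = np.asarray(leaf_decode(attr_bitstream, nodes))         # second set
    return (first + second) / 2.0                                   # output set
```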
[0023] A second example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions operative, when executed by the processor, to cause the apparatus to perform any one of the methods listed above.
[0024] A third example apparatus in accordance with some embodiments may include at least one processor configured to perform any one of the methods listed above.
[0025] A fourth example apparatus in accordance with some embodiments may include: a computer-readable medium storing instructions for causing one or more processors to perform any one of the methods listed above.
[0026] A fifth example apparatus in accordance with some embodiments may include: at least one processor and at least one non-transitory computer-readable medium storing instructions for causing the at least one processor to perform any one of the methods listed above.
[0027] An example signal in accordance with some embodiments may include a bitstream generated according to any one of the methods listed above.
BRIEF DESCRIPTION OF THE DRAWINGS
[0028] FIG. 1A is a system diagram illustrating an example communications system according to some embodiments.
[0029] FIG. 1B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A according to some embodiments.
[0030] FIG. 1C is a system diagram illustrating an example set of interfaces for a system according to some embodiments.
[0031] FIG. 2A is a functional block diagram of a block-based video encoder, such as an encoder used for Versatile Video Coding (VVC), according to some embodiments.
[0032] FIG. 2B is a functional block diagram of a block-based video decoder, such as a decoder used for VVC, according to some embodiments.
[0033] FIG. 3A is a schematic side view illustrating an example waveguide display that may be used with extended reality (XR) applications according to some embodiments.
[0034] FIG. 3B is a schematic side view illustrating an example alternative display type that may be used with extended reality applications according to some embodiments.
[0035] FIG. 3C is a schematic side view illustrating an example alternative display type that may be used with extended reality applications according to some embodiments.
[0036] FIG. 4 is a process diagram illustrating an example global point cloud compression according to some embodiments.
[0037] FIG. 5 is a schematic illustration showing an example Geometry-based Point Cloud Compression (G-PCC) encoding for dense dynamic point clouds according to some embodiments.
[0038] FIG. 6 is a schematic illustration showing an example TriSoup rasterization.
[0039] FIG. 7 is a schematic illustration showing an example Dyadic RAHT transform process for a 2x2x2 octree node according to some embodiments.
[0040] FIG. 8 is a schematic illustration showing an example adjacency graph and associated matrix for a Graph Fourier Transform (GFT).
[0041] FIG. 9A is a schematic illustration showing an example original point cloud used for a 3D shape-adaptive discrete cosine transform (SA-DCT).
[0042] FIG. 9B is a schematic illustration showing example vertically reordered voxels in a z-plane and 1D-DCTs of attributes with variable support size.
[0043] FIG. 9C is a schematic illustration showing example horizontally reordered voxels and 1D-DCTs of variable support size.
[0044] FIG. 10 is a schematic illustration showing an example two-stage geometry representation for encoding of attributes according to some embodiments.
[0045] FIG. 11 is a schematic illustration showing an example two-stage compression scheme according to some embodiments.
[0046] FIG. 12 is a process diagram illustrating an example parallelized encoding according to some embodiments.
[0047] FIG. 13 is a process diagram illustrating an example parallelized decoding according to some embodiments.
[0048] FIG. 14 is a flowchart illustrating an example process for encoding a point cloud according to some embodiments.
[0049] FIG. 15 is a flowchart illustrating an example process for decoding a point cloud according to some embodiments.
[0050] The entities, connections, arrangements, and the like that are depicted in, and described in connection with, the various figures are presented by way of example and not by way of limitation. As such, any and all statements or other indications as to what a particular figure "depicts,” what a particular element or entity in a particular figure "is” or "has,” and any and all similar statements, which may in isolation and out of context be read as absolute and therefore limiting, may only properly be read as being constructively preceded by a clause such as "In at least one embodiment, ...” For brevity and clarity of presentation, this implied leading clause is not repeated ad nauseam in the detailed description.
DETAILED DESCRIPTION
[0051] FIG. 1A is a diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented. The communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users. The communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth. For example, the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word DFT-Spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
[0052] As shown in FIG. 1A, the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a RAN 104/113, a CN 106, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment. By way of example, the WTRUs 102a, 102b, 102c, 102d, any of which may be referred to as a "station” and/or a "STA”, may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like. Any of the WTRUs 102a, 102b, 102c and 102d may be interchangeably referred to as a UE.
[0053] The communications systems 100 may also include a base station 114a and/or a base station 114b. Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106, the Internet 110, and/or the other networks 112. By way of example, the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, a NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
[0054] The base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum. A cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors. For example, the cell associated with the base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, i.e., one for each sector of the cell. In an embodiment, the base station 114a may employ multiple-input multiple output (MIMO) technology and may utilize multiple transceivers for each sector of the cell. For example, beamforming may be used to transmit and/or receive signals in desired spatial directions.
[0055] The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 116 may be established using any suitable radio access technology (RAT).
[0056] More specifically, as noted above, the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 116 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
[0057] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
[0058] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
[0059] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies. For example, the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles. Thus, the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
[0060] In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
[0061] The base station 114b in FIG. 1A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, an industrial facility, an air corridor (e.g., for use by drones), a roadway, and the like. In one embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN). In an embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN). In yet another embodiment, the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR etc.) to establish a picocell or femtocell. As shown in FIG. 1A, the base station 114b may have a direct connection to the Internet 110. Thus, the base station 114b may not be required to access the Internet 110 via the CN 106.
[0062] The RAN 104/113 may be in communication with the CN 106, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d. The data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like. The CN 106 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in FIG. 1A, it will be appreciated that the RAN 104/113 and/or the CN 106 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104/113 or a different RAT.
For example, in addition to being connected to the RAN 104/113, which may be utilizing a NR radio technology, the CN 106 may also be in communication with another RAN (not shown) employing a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or WiFi radio technology.
[0063] The CN 106 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112. The PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite. The networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers. For example, the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
[0064] Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links). For example, the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
[0065] FIG. 1B is a system diagram illustrating an example WTRU 102. As shown in FIG. 1B, the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others. It will be appreciated that the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
[0066] The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1B depicts the processor 118 and the transceiver 120 as
separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
[0067] The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In an embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
[0068] Although the transmit/receive element 122 is depicted in FIG. 1 B as a single element, the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
[0069] The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11, for example.
[0070] The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
[0071] The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
[0072] The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
[0073] The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a Virtual Reality and/or Augmented Reality (VR/AR) device, an activity tracker, and the like. The peripherals 138 may include one or more sensors, which may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
[0074] The WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and the downlink (e.g., for reception)) may be concurrent and/or simultaneous. The full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118). In an embodiment, the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals is limited, in particular subframes, to either the UL (e.g., for transmission) or the downlink (e.g., for reception).
[0075] Although the WTRU is described in FIGs. 1A-1B as a wireless terminal, it is contemplated that, in certain representative embodiments, such a terminal may use (e.g., temporarily or permanently) wired communication interfaces with the communication network.
[0076] In representative embodiments, the other network 112 may be a WLAN.
[0077] In view of FIGs. 1 A-1 B, and the corresponding description, one or more, or all, of the functions described herein may be performed by one or more emulation devices (not shown). The emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein. For example, the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
[0078] The emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment. For example, the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network. The one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network. The emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
[0079] The one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network. For example, the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components. The one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
[0080] FIG. 1C is a system diagram illustrating an example set of interfaces for a system according to some embodiments. An extended reality display device, together with its control electronics, may be implemented for some embodiments. System 150 can be embodied as a device including the various components described below and is configured to perform one or more of the aspects described in this document. Examples of such devices include, but are not limited to, various electronic devices such as personal computers, laptop computers, smartphones, tablet computers, digital multimedia set top boxes, digital television receivers, personal video recording systems, connected home appliances, and servers. Elements of system 150, singly or in combination, can be embodied in a single integrated circuit (IC), multiple
ICs, and/or discrete components. For example, in at least one embodiment, the processing and encoder/decoder elements of system 150 are distributed across multiple ICs and/or discrete components. In various embodiments, the system 150 is communicatively coupled to one or more other systems, or other electronic devices, via, for example, a communications bus or through dedicated input and/or output ports. In various embodiments, the system 150 is configured to implement one or more of the aspects described in this document.
[0081] The system 150 includes at least one processor 152 configured to execute instructions loaded therein for implementing, for example, the various aspects described in this document. Processor 152 may include embedded memory, input output interface, and various other circuitries as known in the art. The system 150 includes at least one memory 154 (e.g., a volatile memory device, and/or a non-volatile memory device). System 150 may include a storage device 158, which can include non-volatile memory and/or volatile memory, including, but not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), flash, magnetic disk drive, and/or optical disk drive. The storage device 158 can include an internal storage device, an attached storage device (including detachable and non-detachable storage devices), and/or a network accessible storage device, as non-limiting examples.
[0082] System 150 includes an encoder/decoder module 156 configured, for example, to process data to provide an encoded video or decoded video, and the encoder/decoder module 156 can include its own processor and memory. The encoder/decoder module 156 represents module(s) that can be included in a device to perform the encoding and/or decoding functions. As is known, a device can include one or both of the encoding and decoding modules. Additionally, encoder/decoder module 156 can be implemented as a separate element of system 150 or can be incorporated within processor 152 as a combination of hardware and software as known to those skilled in the art.
[0083] Program code to be loaded onto processor 152 or encoder/decoder 156 to perform the various aspects described in this document can be stored in storage device 158 and subsequently loaded onto memory 154 for execution by processor 152. In accordance with various embodiments, one or more of processor 152, memory 154, storage device 158, and encoder/decoder module 156 can store one or more of various items during the performance of the processes described in this document. Such stored items can include, but are not limited to, the input video, the decoded video or portions of the decoded video, the bitstream, matrices, variables, and intermediate or final results from the processing of equations, formulas, operations, and operational logic.
[0084] In some embodiments, memory inside of the processor 152 and/or the encoder/decoder module 156 is used to store instructions and to provide working memory for processing that is needed during encoding or decoding. In other embodiments, however, a memory external to the processing device (for example, the processing device can be either the processor 152 or the encoder/decoder module 156) is used for one or more of these functions. The external memory can be the memory 154 and/or the storage device 158, for example, a dynamic volatile memory and/or a non-volatile flash memory. In several embodiments, an external non-volatile flash memory is used to store the operating system of, for example, a television. In at least one embodiment, a fast external dynamic volatile memory such as a RAM is used as working memory for video coding and decoding operations, such as for MPEG-2 (MPEG refers to the Moving Picture Experts Group, MPEG-2 is also referred to as ISO/IEC 13818, and 13818-1 is also known as H.222, and 13818-2 is also known as H.262), HEVC (HEVC refers to High Efficiency Video Coding, also known as H.265 and MPEG-H Part 2), or VVC (Versatile Video Coding, a new standard being developed by JVET, the Joint Video Experts Team).
[0085] The input to the elements of system 150 can be provided through various input devices as indicated in block 172. Such input devices include, but are not limited to, (i) a radio frequency (RF) portion that receives an RF signal transmitted, for example, over the air by a broadcaster, (ii) a Component (COMP) input terminal (or a set of COMP input terminals), (iii) a Universal Serial Bus (USB) input terminal, and/or (iv) a High Definition Multimedia Interface (HDMI) input terminal. Other examples, not shown in FIG. 1C, include composite video.
[0086] In various embodiments, the input devices of block 172 have associated respective input processing elements as known in the art. For example, the RF portion can be associated with elements suitable for (i) selecting a desired frequency (also referred to as selecting a signal, or band-limiting a signal to a band of frequencies), (ii) downconverting the selected signal, (iii) band-limiting again to a narrower band of frequencies to select (for example) a signal frequency band which can be referred to as a channel in certain embodiments, (iv) demodulating the downconverted and band-limited signal, (v) performing error correction, and (vi) demultiplexing to select the desired stream of data packets. The RF portion of various embodiments includes one or more elements to perform these functions, for example, frequency selectors, signal selectors, band-limiters, channel selectors, filters, downconverters, demodulators, error correctors, and demultiplexers. The RF portion can include a tuner that performs various of these functions, including, for example, downconverting the received signal to a lower frequency (for example, an intermediate frequency or a near-baseband frequency) or to baseband. In one set-top box embodiment, the RF portion and its associated input processing element receives an RF signal transmitted over a wired (for example,
cable) medium, and performs frequency selection by filtering, downconverting, and filtering again to a desired frequency band. Various embodiments rearrange the order of the above-described (and other) elements, remove some of these elements, and/or add other elements performing similar or different functions. Adding elements can include inserting elements in between existing elements, such as, for example, inserting amplifiers and an analog-to-digital converter. In various embodiments, the RF portion includes an antenna.
[0087] Additionally, the USB and/or HDMI terminals can include respective interface processors for connecting system 150 to other electronic devices across USB and/or HDMI connections. It is to be understood that various aspects of input processing, for example, Reed-Solomon error correction, can be implemented, for example, within a separate input processing IC or within processor 152 as necessary. Similarly, aspects of USB or HDMI interface processing can be implemented within separate interface ICs or within processor 152 as necessary. The demodulated, error corrected, and demultiplexed stream is provided to various processing elements, including, for example, processor 152, and encoder/decoder 156 operating in combination with the memory and storage elements to process the datastream as necessary for presentation on an output device.
[0088] Various elements of system 150 can be provided within an integrated housing. Within the integrated housing, the various elements can be interconnected and transmit data therebetween using suitable connection arrangement 174, for example, an internal bus as known in the art, including the Inter-IC (I2C) bus, wiring, and printed circuit boards.
[0089] The system 150 includes communication interface 160 that enables communication with other devices via communication channel 162. The communication interface 160 can include, but is not limited to, a transceiver configured to transmit and to receive data over communication channel 162. The communication interface 160 can include, but is not limited to, a modem or network card and the communication channel 162 can be implemented, for example, within a wired and/or a wireless medium.
[0090] Data is streamed, or otherwise provided, to the system 150, in various embodiments, using a wireless network such as a Wi-Fi network, for example IEEE 802.11 (IEEE refers to the Institute of Electrical and Electronics Engineers). The Wi-Fi signal of these embodiments is received over the communications channel 162 and the communications interface 160 which are adapted for Wi-Fi communications. The communications channel 162 of these embodiments is typically connected to an access point or router that provides access to external networks including the Internet for allowing streaming applications and other over-the-top communications. Other embodiments provide streamed data to the system 150 using a set-top box that delivers the data over the HDMI connection of the input block 172. Still other embodiments provide streamed data to the system 150 using the RF connection of the input block 172. As indicated above, various
embodiments provide data in a non-streaming manner. Additionally, various embodiments use wireless networks other than Wi-Fi, for example a cellular network or a Bluetooth network.
[0091] The system 150 can provide an output signal to various output devices, including a display 176, speakers 178, and other peripheral devices 180. The display 176 of various embodiments includes one or more of, for example, a touchscreen display, an organic light-emitting diode (OLED) display, a curved display, and/or a foldable display. The display 176 can be for a television, a tablet, a laptop, a cell phone (mobile phone), or other device. The display 176 can also be integrated with other components (for example, as in a smart phone), or separate (for example, an external monitor for a laptop). The other peripheral devices 180 include, in various examples of embodiments, one or more of a stand-alone digital video disc (or digital versatile disc) player (DVD, for both terms), a disk player, a stereo system, and/or a lighting system. Various embodiments use one or more peripheral devices 180 that provide a function based on the output of the system 150. For example, a disk player performs the function of playing the output of the system 150.
[0092] In various embodiments, control signals are communicated between the system 150 and the display 176, speakers 178, or other peripheral devices 180 using signaling such as AV.Link, Consumer Electronics Control (CEC), or other communications protocols that enable device-to-device control with or without user intervention. The output devices can be communicatively coupled to system 150 via dedicated connections through respective interfaces 164, 166, and 168. Alternatively, the output devices can be connected to system 150 using the communications channel 162 via the communications interface 160. The display 176 and speakers 178 can be integrated in a single unit with the other components of system 150 in an electronic device such as, for example, a television. In various embodiments, the display interface 164 includes a display driver, such as, for example, a timing controller (T Con) chip.
[0093] The display 176 and speakers 178 can alternatively be separate from one or more of the other components, for example, if the RF portion of input 172 is part of a separate set-top box. In various embodiments in which the display 176 and speakers 178 are external components, the output signal can be provided via dedicated output connections, including, for example, HDMI ports, USB ports, or COMP outputs.
[0094] The system 150 may include one or more sensor devices 168. Examples of sensor devices that may be used include one or more GPS sensors, gyroscopic sensors, accelerometers, light sensors, cameras, depth cameras, microphones, and/or magnetometers. Such sensors may be used to determine information such as a user's position and orientation. Where the system 150 is used as the control module for an extended reality display (such as control modules 324, 332), the user's position and orientation may be used in determining how to render image data such that the user perceives the correct portion of a virtual object or virtual scene from the correct point of view. In the case of head-mounted display devices, the position and
orientation of the device itself may be used to determine the position and orientation of the user for the purpose of rendering virtual content. In the case of other display devices, such as a phone, a tablet, a computer monitor, or a television, other inputs may be used to determine the position and orientation of the user for the purpose of rendering content. For example, a user may select and/or adjust a desired viewpoint and/or viewing direction with the use of a touch screen, keypad or keyboard, trackball, joystick, or other input. Where the display device has sensors such as accelerometers and/or gyroscopes, the viewpoint and orientation used for the purpose of rendering content may be selected and/or adjusted based on motion of the display device.
[0095] The embodiments can be carried out by computer software implemented by the processor 152 or by hardware, or by a combination of hardware and software. As a non-limiting example, the embodiments can be implemented by one or more integrated circuits. The memory 154 can be of any type appropriate to the technical environment and can be implemented using any appropriate data storage technology, such as optical memory devices, magnetic memory devices, semiconductor-based memory devices, fixed memory, and removable memory, as non-limiting examples. The processor 152 can be of any type appropriate to the technical environment, and can encompass one or more of microprocessors, general purpose computers, special purpose computers, and processors based on a multi-core architecture, as non-limiting examples.
Block-Based Video Coding
[0096] Like HEVC, VVC is built upon the block-based hybrid video coding framework. FIG. 2A gives the block diagram of a block-based hybrid video encoding system 200. Variations of this encoder 200 are contemplated, but the encoder 200 is described below for purposes of clarity without describing all expected variations.
[0097] Before being encoded, a video sequence may go through pre-encoding processing (204), for example, applying a color transform to an input color picture (e.g., conversion from RGB 4:4:4 to YCbCr 4:2:0), or performing a remapping of the input picture components in order to get a signal distribution more resilient to compression (for instance using a histogram equalization of one of the color components). Metadata can be associated with the pre-processing and attached to the bitstream.
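As an illustration of the color transform mentioned above, the following sketch converts RGB 4:4:4 to YCbCr 4:2:0 using BT.709 coefficients with simple 2x2 chroma averaging; actual encoders may use other matrices, chroma siting, and subsampling filters.

```python
import numpy as np

def rgb444_to_ycbcr420(rgb):
    """RGB 4:4:4 (float in [0, 1], shape (H, W, 3), H and W even) ->
    (Y, Cb, Cr) with chroma at quarter resolution (4:2:0)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # BT.709 luma
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748

    def subsample(c):
        h, w = c.shape
        return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    return y, subsample(cb), subsample(cr)
```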
[0098] The input video signal 202 including a picture to be encoded is partitioned (206) and processed block by block in units of, for example, CUs. Different CUs may have different sizes. In VTM-1.0, a CU can be up to 128x128 pixels. However, different from HEVC, which partitions blocks only based on quadtrees, in VTM-1.0 a coding tree unit (CTU) is split into CUs to adapt to varying local characteristics based on a quad/binary/ternary-tree. Additionally, the concept of multiple partition unit types in HEVC is removed, such that the separation of CU, prediction unit (PU) and transform unit (TU) does not exist in VVC-1.0
anymore; instead, each CU is always used as the basic unit for both prediction and transform without further partitions. In the multi-type tree structure, a CTU is firstly partitioned by a quad-tree structure. Then, each quad-tree leaf node can be further partitioned by a binary and ternary tree structure. Different splitting types may be used, such as quaternary partitioning, vertical binary partitioning, horizontal binary partitioning, vertical ternary partitioning, and horizontal ternary partitioning.
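The five split types named above can be made concrete with a small sketch that returns the child rectangles of a block; the 1:2:1 ratio for ternary splits follows the VVC multi-type tree design.

```python
def mtt_split(x, y, w, h, mode):
    """Child blocks (x, y, width, height) for one multi-type tree split."""
    if mode == "QT":      # quaternary: four equal quadrants
        hw, hh = w // 2, h // 2
        return [(x, y, hw, hh), (x + hw, y, hw, hh),
                (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
    if mode == "BT_V":    # vertical binary: two side-by-side halves
        return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    if mode == "BT_H":    # horizontal binary: two stacked halves
        return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    if mode == "TT_V":    # vertical ternary: 1:2:1 columns
        q = w // 4
        return [(x, y, q, h), (x + q, y, 2 * q, h), (x + 3 * q, y, q, h)]
    if mode == "TT_H":    # horizontal ternary: 1:2:1 rows
        q = h // 4
        return [(x, y, w, q), (x, y + q, w, 2 * q), (x, y + 3 * q, w, q)]
    raise ValueError(f"unknown split mode: {mode}")
```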
[0099] In the encoder of FIG. 2A, spatial prediction (208) and/or temporal prediction (210) may be performed. Spatial prediction (or "intra prediction”) uses pixels from the samples of already coded neighboring blocks (which are called reference samples) in the same video picture/slice to predict the current video block. Spatial prediction reduces spatial redundancy inherent in the video signal. Temporal prediction (also referred to as "inter prediction” or "motion compensated prediction”) uses reconstructed pixels from the already coded video pictures to predict the current video block. Temporal prediction reduces temporal redundancy inherent in the video signal. A temporal prediction signal for a given CU may be signaled by one or more motion vectors (MVs) which indicate the amount and the direction of motion between the current CU and its temporal reference. Also, if multiple reference pictures are supported, a reference picture index may additionally be sent, which is used to identify from which reference picture in the reference picture store (212) the temporal prediction signal comes.
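Temporal prediction with an integer-pel motion vector can be sketched as a displaced block copy from the reference picture; real codecs add fractional-pel interpolation filters, which are omitted here for brevity.

```python
import numpy as np

def motion_compensate(ref, x, y, w, h, mvx, mvy):
    """Integer-pel motion-compensated prediction: copy the w x h block at the
    MV-displaced location; assumes the displaced block lies inside `ref`."""
    return ref[y + mvy : y + mvy + h, x + mvx : x + mvx + w].copy()
```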
[0100] The mode decision block (214) in the encoder chooses the best prediction mode, for example based on a rate-distortion optimization method. This selection may be made after spatial and/or temporal prediction is performed. The intra/inter decision may be indicated by, for example, a prediction mode flag. The prediction block is subtracted from the current video block (216) to generate a prediction residual. The prediction residual is de-correlated using transform (218) and quantized (220). (For some blocks, the encoder may bypass both transform and quantization, in which case the residual may be coded directly without the application of the transform or quantization processes.) The quantized residual coefficients are inverse quantized (222) and inverse transformed (224) to form the reconstructed residual, which is then added back to the prediction block (226) to form the reconstructed signal of the CU. Further in-loop filtering, such as deblocking/SAO (Sample Adaptive Offset) filtering, may be applied (228) on the reconstructed CU to reduce encoding artifacts before it is put in the reference picture store (212) and used to code future video blocks. To form the output video bit-stream 230, coding mode (inter or intra), prediction mode information, motion information, and quantized residual coefficients are all sent to the entropy coding unit (108) to be further compressed and packed to form the bit-stream.
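The rate-distortion optimization mentioned above typically minimizes a Lagrangian cost J = D + λR over the candidate modes, as in this minimal sketch:

```python
def best_mode(candidates, lam):
    """Pick the mode minimizing J = D + lambda * R.

    `candidates` is a list of (mode_name, distortion, rate_bits) tuples."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]

# Example: inter wins because its cost (80 + 0.5*160 = 160) beats
# intra's (120 + 0.5*96 = 168).
assert best_mode([("intra", 120.0, 96), ("inter", 80.0, 160)], lam=0.5) == "inter"
```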
[0101] FIG. 2B gives a block diagram of a block-based video decoder 250. In the decoder 250, a bitstream is decoded by the decoder elements as described below. Video decoder 250 generally performs a decoding
pass reciprocal to the encoding pass as described in FIG. 2A. The encoder 200 also generally performs video decoding as part of encoding video data.
[0102] In particular, the input of the decoder includes a video bitstream 252, which can be generated by video encoder 200. The video bit-stream 252 is first unpacked and entropy decoded at entropy decoding unit 254 to obtain transform coefficients, motion vectors, and other coded information. Picture partition information indicates how the picture is partitioned. The decoder may therefore divide (256) the picture according to the decoded picture partitioning information. The coding mode and prediction information are sent to either the spatial prediction unit 258 (if intra coded) or the temporal prediction unit 260 (if inter coded) to form the prediction block. The residual transform coefficients are sent to inverse quantization unit 262 and inverse transform unit 264 to reconstruct the residual block. The prediction block and the residual block are then added together at 266 to generate the reconstructed block. The reconstructed block may further go through in-loop filtering 268 before it is stored in reference picture store 270 for use in predicting future video blocks.
[0103] The decoded picture 272 may further go through post-decoding processing (274), for example, an inverse color transform (e.g. conversion from YCbCr 4:2:0 to RGB 4:4:4) or an inverse remapping performing the inverse of the remapping process performed in the pre-encoding processing (204). The post-decoding processing can use metadata derived in the pre-encoding processing and signaled in the bitstream. The decoded, processed video may be sent to a display device 276. The display device 276 may be a separate device from the decoder 250, or the decoder 250 and the display device 276 may be components of the same device.
[0104] Various methods and other aspects described in this disclosure can be used to modify modules of a video encoder 200 or decoder 250. Moreover, the systems and methods disclosed herein are not limited to VVC or HEVC, and can be applied, for example, to other standards and recommendations, whether preexisting or future-developed, and extensions of any such standards and recommendations (including VVC and HEVC). Unless indicated otherwise, or technically precluded, the aspects described in this disclosure can be used individually or in combination.
[0105] FIG. 3A is a schematic side view illustrating an example waveguide display that may be used with extended reality (XR) applications according to some embodiments. An image is projected by an image generator 302. The image generator 302 may use one or more of various techniques for projecting an image. For example, the image generator 302 may be a laser beam scanning (LBS) projector, a liquid crystal display (LCD), a light-emitting diode (LED) display (including an organic LED (OLED) or micro LED (µLED) display), a digital light processor (DLP), a liquid crystal on silicon (LCoS) display, or other type of image generator or light engine.
[0106] Light representing an image 312 generated by the image generator 302 is coupled into a waveguide 304 by a diffractive in-coupler 306. The in-coupler 306 diffracts the light representing the image 312 into one or more diffractive orders. For example, light ray 308, which is one of the light rays representing a portion of the bottom of the image, is diffracted by the in-coupler 306, and one of the diffracted orders 310 (e.g. the second order) is at an angle that is capable of being propagated through the waveguide 304 by total internal reflection. The image generator 302 displays images as directed by a control module 324, which operates to render image data, video data, point cloud data, or other displayable data.
[0107] At least a portion of the light 310 that has been coupled into the waveguide 304 by the diffractive in-coupler 306 is coupled out of the waveguide by a diffractive out-coupler 314. At least some of the light coupled out of the waveguide 304 replicates the incident angle of light coupled into the waveguide. For example, in the illustration, out-coupled light rays 316a, 316b, and 316c replicate the angle of the in-coupled light ray 308. Because light exiting the out-coupler replicates the directions of light that entered the in-coupler, the waveguide substantially replicates the original image 312. A user's eye 318 can focus on the replicated image.
[0108] In the example of FIG. 3A, the out-coupler 314 out-couples only a portion of the light with each reflection, allowing a single input beam (such as beam 308) to generate multiple parallel output beams (such as beams 316a, 316b, and 316c). In this way, at least some of the light originating from each portion of the image is likely to reach the user's eye even if the eye is not perfectly aligned with the center of the out-coupler. For example, if the eye 318 were to move downward, beam 316c may enter the eye even if beams 316a and 316b do not, so the user can still perceive the bottom of the image 312 despite the shift in position. The out-coupler 314 thus operates in part as an exit pupil expander in the vertical direction. The waveguide may also include one or more additional exit pupil expanders (not shown in FIG. 3A) to expand the exit pupil in the horizontal direction.
[0109] In some embodiments, the waveguide 304 is at least partly transparent with respect to light originating outside the waveguide display. For example, at least some of the light 320 from real-world objects (such as object 322) traverses the waveguide 304, allowing the user to see the real-world objects while using the waveguide display. As light 320 from real-world objects also goes through the diffraction grating 314, there will be multiple diffraction orders and hence multiple images. To minimize the visibility of multiple images, it is desirable for the zero diffraction order (no deviation by 314) to have high diffraction efficiency for light 320, while the higher diffraction orders carry less energy. Thus, in addition to expanding and out-coupling the virtual image, the out-coupler 314 is preferably configured to let through the zero order of the real image. In such embodiments, images displayed by the waveguide display may appear to be superimposed on the real world.
[0110] FIG. 3B is a schematic side view illustrating an example alternative display type that may be used with extended reality applications according to some embodiments. In an XR head-mounted display device 330, a control module 332 controls a display 334, which may be an LCD, to display an image. The head-mounted display includes a partly-reflective surface 336 that reflects (and in some embodiments, both reflects and focuses) the image displayed on the LCD to make the image visible to the user. The partly-reflective surface 336 also allows the passage of at least some exterior light, permitting the user to see their surroundings.
[0111] FIG. 3C is a schematic side view illustrating an example alternative display type that may be used with extended reality applications according to some embodiments. In an XR head-mounted display device 340, a control module 342 controls a display 344, which may be an LCD, to display an image. The image is focused by one or more lenses of display optics 346 to make the image visible to the user. In the example of FIG. 3C, exterior light does not reach the user's eyes directly. However, in some such embodiments, an exterior camera 348 may be used to capture images of the exterior environment and display such images on the display 344 together with any virtual content that may also be displayed.
[0112] The embodiments described herein are not limited to any particular type or structure of XR display device.
[0113] Advances in 3D capturing and rendering technologies are enabling new applications and services in the fields of, for example, autonomous driving, cultural heritage archival, immersive telepresence, and virtual/augmented reality. Point clouds have arisen as one of the main 3D scene representations for such applications. A point cloud frame is a set of 3D points, each point being represented with its 3D position and possibly several attributes such as color, transparency, and reflectance.
[0114] A standardization activity for point cloud compression is carried out by the ISO/IEC JTC1/SC29/WG7 "MPEG 3D Graphics and Haptics Coding" group. See Graziosi, Danillo, et al., An Overview of Ongoing Point Cloud Compression Standardization Activities: Video-Based (V-PCC) and Geometry-Based (G-PCC), 9:1 APSIPA TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING 1-15 (2020) ("Graziosi"). The first edition of the Geometry-based Point Cloud Compression (G-PCC) standard, part 9 of the ISO/IEC 23090 series on the coded representation of immersive media, has been published. See Information Technology — Coded Representation of Immersive Media — Part 9: Geometry-Based Point Cloud Compression, INTERNATIONAL ORGANIZATION FOR STANDARDIZATION / INTERNATIONAL ELECTROTECHNICAL COMMISSION (ISO/IEC), ISO/IEC 23090-9:2023 (2023).
[0115] Within the G-PCC Second Edition framework under construction, the compression of dense dynamic point clouds with a geometry-based approach has been identified as a separate target. Such compression of dense dynamic point clouds is performed directly in the 3D space domain, without the 3D-to-2D round trip used to leverage existing 2D video codecs.
[0116] According to Test Model for Geometry-Based Solid Point Cloud - GeS TM 1.0, 141st MPEG Meeting, Tech. Rep. N00558, INTERNATIONAL ORGANIZATION FOR STANDARDIZATION / INTERNATIONAL ELECTROTECHNICAL COMMISSION (ISO/IEC), ISO/IEC JTC1/SC29/WG7 (Jan. 2023) ("GeS TM 1.0"), the current G-PCC encoder for dense dynamic point clouds combines the following tools: (1) pruned occupancy tree (octree) plus triangle soups for geometry coding; (2) Region-Adaptive Hierarchical Transform (RAHT) for color attribute coding; (3) motion compensated inter-frame prediction; and (4) context-adaptive arithmetic coding. Such a test model for a geometry-based solid point cloud may be abbreviated as GeS-TM.
[0117] FIG. 4 is a process diagram illustrating an example global point cloud compression according to some embodiments. Input geometry 402 and input attributes 404 may be sequentially compressed as illustrated by a process 400 in FIG. 4. More precisely, for some embodiments, the geometry is first compressed (encoded) by the geometry encoder 406 to generate a geometry bitstream 416. The encoded (compressed) geometry is then decoded by a geometry decoder 408, and a point cloud is reconstructed at full resolution by a geometry reconstruction process 410. The reconstructed geometry is used as an input into the attribute transfer process 412 and as an input into the attribute encoder 414. The input attributes 404 are transferred onto the reconstructed geometry (after decompression) and compressed. The output of the attribute encoder 414 is an attribute bitstream 418. For the decoder, the reconstructed geometry is available when decoding the attributes.
[0118] FIG. 5 is a schematic illustration showing an example Geometry-based Point Cloud Compression (G-PCC) encoding for dense dynamic point clouds according to some embodiments. As depicted in the process 500 of FIG. 5, the geometry is encoded by a GeS-TM codec with an octree representation 504 for the N-T coarsest resolution levels 502 (beginning from the root node at level 0) followed by a surface approximation (triangle soup or "TriSoup” 506) of all occupied nodes at level N-T-1. See GeS TM 1.0.
[0119] In FIG. 5, the left diagram is a 2D tree representation of the 3D octree recursive subdivision of each cube into 8 sub-cubes at the finer level. In each representation, a parent node has 8 children. The 8 circles correspond to 8 sub-cubes.
[0120] For an octree representation 504, a recursive process begins at the root level, level 0. When the level is incremented by 1, each occupied parent cube is divided into 8 child cubes. This recursive process may continue until the leaf level is reached, where each occupied cube contains a single voxel, or it may be stopped earlier at an arbitrary level for some embodiments.
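As a non-limiting sketch of this recursive subdivision (assuming integer voxel coordinates and a dictionary-based node representation, which are choices made for this example rather than part of G-PCC):

```python
# Hedged sketch of recursive octree occupancy: at each level, every occupied
# cube splits into 8 children, keyed by one bit of each coordinate.
import numpy as np

def octree_occupancy(points, depth):
    """Return, per level, the sorted list of occupied node paths."""
    levels = []
    nodes = {(): np.asarray(points)}          # root node contains all points
    for level in range(depth):
        shift = depth - 1 - level             # bit selecting the child octant
        children = {}
        for path, pts in nodes.items():
            octant = (((pts[:, 0] >> shift) & 1) << 2 |
                      ((pts[:, 1] >> shift) & 1) << 1 |
                      ((pts[:, 2] >> shift) & 1))
            for o in np.unique(octant):
                children[path + (int(o),)] = pts[octant == o]
        levels.append(sorted(children))       # occupied nodes at this level
        nodes = children
    return levels
```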
[0121] FIG. 6 is a schematic illustration showing an example TriSoup rasterization. As depicted in FIG. 4, the compressed geometry is decoded and a point cloud is reconstructed at full resolution prior to attribute encoding. Hence, the decoded TriSoup surface representation 602 is rasterized to recover 3D sampled points 600, as illustrated in FIG. 6. After that, an attribute transfer step (or "recoloring") is necessary to project the attributes from the 3D points of the input geometry onto the 3D points of the reconstructed geometry, which may differ in the case of lossy compression.
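The attribute transfer step can be illustrated with a nearest-neighbor projection; this is a hedged simplification, since G-PCC's recoloring procedure is more elaborate than a single nearest-neighbor lookup:

```python
# Simplified attribute transfer ("recoloring"): each reconstructed point
# takes the attribute of its nearest input point.
import numpy as np
from scipy.spatial import cKDTree

def transfer_attributes(src_points, src_attrs, dst_points):
    tree = cKDTree(src_points)
    _, idx = tree.query(dst_points, k=1)   # nearest input point per output point
    return src_attrs[idx]
```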
[0122] FIG. 7 is a schematic illustration showing an example dyadic RAHT transform process for a 2x2x2 octree node according to some embodiments. Attribute encoding may be performed using the region-adaptive hierarchical transform (RAHT). See Graziosi. The attributes of the point cloud may be propagated starting with the leaves of the octree (highest level 708) and proceeding backwards until the root (lowest level 702) is reached. Each node of the octree has an attribute equal to the sum of the attributes of the leaf nodes connected to it. A RAHT transform may be performed starting at the root node (lowest level 702) and proceeding towards the leaf nodes (highest level 708). At each node, a transform is performed along each of the x, y, and z directions. This transform process may generate up to 8 low-pass and high-pass coefficients. LLL is the lowest frequency coefficient (low pass along all 3 directions), and the coefficients increase in frequency up to HHH (high pass along all 3 directions). In FIG. 7, a dyadic implementation 700 of a RAHT transform takes a 2x2x2 input block and outputs transform coefficients in the frequency domain. The arrows indicate the direction along which the 1D transform is applied at each of the 3 steps, thereby generating low-frequency and high-frequency coefficients. In the example, the direction is first backward (level 0 (702)), then up (level 1 (704)), then right (level 2 (706)).
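For illustration, the elementary RAHT low/high-pass step that merges two occupied siblings along one axis may be sketched as follows, using the weight-dependent rotation of the published RAHT formulation (G-PCC's fixed-point variant differs in implementation detail):

```python
import numpy as np

def raht_merge(a1, w1, a2, w2):
    """One RAHT low/high-pass step for two occupied siblings with
    accumulated point counts (weights) w1 and w2."""
    s = np.sqrt(w1 + w2)
    low  = (np.sqrt(w1) * a1 + np.sqrt(w2) * a2) / s   # DC-like coefficient
    high = (-np.sqrt(w2) * a1 + np.sqrt(w1) * a2) / s  # AC coefficient
    return low, high, w1 + w2   # low value and merged weight propagate upward
```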
[0123] In G-PCC, the RAHT transform is embedded into a multiresolution coarse-to-fine encoding process with intra-frame prediction (predictive RAHT), in which the encoding of the RAHT coefficients of a node at a given level d benefits from an intra-frame prediction derived from RAHT coefficients at the coarser resolution level d-1.
[0124] FIG. 8 is a schematic illustration showing an example adjacency graph and associated matrix for a Graph Fourier Transform (GFT). The nodes are shown as n1, n2, ..., n6 in the adjacency graph 800. Weights, shown as w1, w2, and w3, are determined based on the similarity between the nodes they connect. An adjacency matrix A (802) is created as shown on the right side of FIG. 8. According to page 2067 of Zhang, the adjacency matrix A is used as part of a process to calculate an eigenvector matrix, which is then "used to transform the color signal defined on the graph nodes."
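A minimal sketch of the GFT computation outlined above, assuming a dense numpy adjacency matrix and the combinatorial Laplacian L = D - A (the precise weight and Laplacian choices in Zhang may differ):

```python
import numpy as np

def graph_fourier_transform(adjacency, attributes):
    """Transform attributes living on graph nodes with the eigenvectors
    of the graph Laplacian L = D - A."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    _, eigvecs = np.linalg.eigh(laplacian)   # eigendecomposition (the costly step)
    return eigvecs.T @ attributes            # GFT coefficients
```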
[0125] The current G-PCC approach for encoding the attributes with the RAHT transform is understood to have limitations, even though the basic RAHT has been considerably improved during the G-PCC standardization process (e.g., by using a fixed-point implementation, dyadic RAHT, and upsampled intra-frame prediction). One limitation is understood to come from the restricted (smallest possible) horizon of two neighboring nodes at each low/high-pass filtering operation, which requires many iterations and prevents local processing that leverages attribute correlations over clusters of neighboring points belonging to the same surface part.
[0126] For some embodiments, a transform other than RAHT may be used for encoding the attributes of a point cloud. For example, a predicting/lifting transform is a distance-based prediction scheme for attribute coding which relies on a Level of Detail (LoD) representation that distributes the input points into sets of refinement levels using a deterministic Euclidean distance criterion. A predicting/lifting transform was proposed within the context of the G-PCC standard but was found to be less performant than RAHT on dense point clouds.
[0127] As illustrated in FIG. 8, the Graph Fourier Transform (GFT) is effective at decorrelating the attributes on top of an adjacency graph of occupied voxels according to Zhang, Cha, et al., Point Cloud Attribute Compression with Graph Transform, IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, PARIS, FRANCE 2066-2070 (2014) ("Zhang"). However, the computational demand of the eigenvalue decomposition may become prohibitive when it is performed on overly large point cloud supports.
[0128] A shape-adaptive DCT (SA-DCT) may be extended to 3 dimensions, with one-dimensional DCT transforms of variable length successively applied along the x, y, and z directions on the attributes of occupied voxels. See Sikora, T. and Makai, B., Shape-Adaptive DCT for Generic Coding of Video, 5:1 IEEE TRANS. ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY (1995) ("Sikora"). In some embodiments, an SA-DCT may be implemented.
[0129] FIG. 9A is a schematic illustration showing an example original point cloud used for a 3D shape-adaptive discrete cosine transform (SA-DCT). FIG. 9B is a schematic illustration showing example vertically reordered voxels in a z-plane and 1D-DCTs of variable support size applied to the attributes. FIG. 9C is a schematic illustration showing example horizontally reordered voxels and 1D-DCTs of variable support size.
[0130] FIGs. 9A-9C illustrate the first two steps in the y and x directions; the extension to a 3rd dimension follows these 2-dimensional examples. FIG. 9A shows the original point cloud. FIG. 9B shows vertically reordered (y-direction) voxels. FIG. 9C shows horizontally reordered (x-direction) voxels. For FIGs. 9B and 9C, the one-dimensional (1D) discrete cosine transform (DCT) performed based on the number of voxels is also indicated on the figures. A DCT transform is applied to input signals of varying dimensions, at each line or column. For example, a DCT-3 transform may be applied when there are 3 input values.
[0131] For some embodiments, the process starts with the first configuration 900 of FIG. 9A and moves a "rake" in the y-direction to "push" the occupied voxels to the top, arriving at the second configuration 930 shown in FIG. 9B. Next, starting from the second configuration 930 shown in FIG. 9B, the "rake" is moved in the x-direction to push the occupied voxels to the left, arriving at the third configuration 960 shown in FIG. 9C.
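A hedged 2D sketch of this rake-then-transform procedure is given below, using an occupancy mask; the 3D extension repeats the same step along the z direction:

```python
import numpy as np
from scipy.fft import dct

def sa_dct_2d(values, mask):
    """Shape-adaptive DCT, 2D sketch: rake occupied samples to the top of
    each column and DCT each column over its occupied length, then rake
    the resulting coefficients left and DCT each row (FIGs. 9A-9C)."""
    out = np.zeros_like(values, dtype=float)
    occ = np.zeros_like(mask)
    for x in range(values.shape[1]):           # vertical rake + column DCTs
        col = values[mask[:, x], x]
        if col.size:
            out[:col.size, x] = dct(col, norm="ortho")
            occ[:col.size, x] = True
    out2 = np.zeros_like(out)
    for y in range(values.shape[0]):           # horizontal rake + row DCTs
        row = out[y, occ[y, :]]
        if row.size:
            out2[y, :row.size] = dct(row, norm="ortho")
    return out2
```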
[0132] Furthermore, from a global compression point of view, the classical two-step compression process (as depicted in FIG. 4), which requires fully compressing, decoding, and reconstructing the point cloud geometry before attribute encoding can begin, may be improved with regard to scalability and parallelization of computation. Looking at the two-stage GeS-TM geometry compression scheme depicted in FIG. 5, more localized attribute encoding may be performed without waiting for the whole geometry to be compressed.
[0133] A problem to be solved is increasing both the compression performance and the computational efficiency of attribute compression on dense point clouds by leveraging a nested 2-stage geometry compression scheme (such as the TriSoup modelling of leaf nodes of an octree currently implemented by G-PCC).
Geometry Representation
[0134] FIG. 10 is a schematic illustration showing an example two-stage geometry representation for encoding of attributes according to some embodiments. For some embodiments, a two-stage compression/decompression scheme for point cloud attributes may be based on an N-level octree representation of geometry. In stage 1, the attributes of points in each leaf node at level N-T-1 (leaf nodes of size 2^T x 2^T x 2^T spanning levels N-T to N-1 (1004)) are encoded with a transform encompassing all node points as inputs, for example the GFT or the 3D SA-DCT. The transform is performed once over all points belonging to the cubes/nodes at the chosen level N-T-1. In stage 2, a hierarchical transform (such as RAHT) is applied on the (N-T) levels 1002 of the upper part of the octree, to the transform coefficients of each leaf node up to the root node (levels N-T-1 to 0). The nodes at level N-T-1 correspond to cubes of size 2^T. The octree representation is described from top to bottom, beginning with a single node encompassing the entire point cloud at level 0, and dividing the point cloud by 2 along each direction at each successive level.
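For illustration, grouping points into the level N-T-1 leaf blocks amounts to dropping the T least-significant bits of each coordinate; a minimal sketch, assuming integer voxel coordinates:

```python
import numpy as np
from collections import defaultdict

def group_into_leaf_blocks(points, T):
    """Group voxel coordinates into 2^T x 2^T x 2^T leaf blocks
    (the nodes at octree level N-T-1)."""
    blocks = defaultdict(list)
    for p in np.asarray(points):
        key = (int(p[0]) >> T, int(p[1]) >> T, int(p[2]) >> T)  # block index
        blocks[key].append(p)
    return {k: np.array(v) for k, v in blocks.items()}
```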
[0135] For some embodiments, only the average direct current (DC) transform coefficient of each leaf node is input to the hierarchical transform. For some embodiments, the other alternating current (AC) coefficients are also inputted into the hierarchical transform. For some embodiments, an attribute transform on the leaf nodes may be extended to a fourth dimension (time) over a group of consecutive point cloud frames. As a result, the four dimensions are x, y, z, and time.
[0136] The reconstructed geometry is modelled as illustrated in FIG. 10 with a two-stage representation. The lower portion of the model, which is for leaf nodes of size 2^T x 2^T x 2^T (for example, 32^3), contains a set of 3D points (or occupied voxels) at full resolution in which the geometry is represented either with an octree or a mesh (triangle soup). The upper portion of the model shows an octree with (N-T) resolution levels. Attribute encoding is performed on top of the reconstructed geometry.
Attribute Compression Scheme Overview
[0137] FIG. 11 is a schematic illustration showing an example two-stage compression scheme according to some embodiments. A two-stage compression scheme is depicted in FIG. 11 for an octree 1100 with levels 0 to (N-T-1).
First Stage: Encoding of Leaf Node Attributes
[0138] First, in the bottom part of FIG. 11, an attribute encoding is performed independently within each leaf node, using a 3D-block transform t_F( ) 1104 for decorrelating the input signal. Consider a leaf node corresponding to a block b of size 2^T x 2^T x 2^T, with N_b occupied voxels (N_b ≤ 2^T x 2^T x 2^T). The N_b transform coefficients w_i, for i ∈ [0, N_b - 1], are generated from the N_b attributes a_i according to Eq. 1:

(w_0, w_1, ..., w_{N_b-1}) = t_F(a_0, a_1, ..., a_{N_b-1})    (Eq. 1)

The transform coefficients w_i are further quantized 1108 and arithmetically coded 1110.
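A minimal sketch of this first stage for one leaf node, with t_F( ) left abstract (any of the transforms discussed below) and a uniform quantizer assumed for simplicity:

```python
import numpy as np

def encode_leaf_attributes(attributes, transform, qstep):
    """First-stage encoding of one leaf block b (Eq. 1): w = t_F(a).
    attributes: array of shape (N_b, channels); transform: any t_F candidate."""
    w = transform(attributes)      # N_b coefficients; w[0] is the DC value
    q = np.round(w / qstep)        # uniform quantization (1108), an assumption here
    return q[0], q[1:]             # DC feeds stage 2; ACs are entropy coded per leaf
```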
[0139] For some embodiments, the transform t_F( ) may be a Graph Fourier Transform (GFT). See, e.g., FIG. 8 and Zhang. For some embodiments, the transform t_F( ) may be a 3D extension of the Shape-Adaptive Discrete Cosine Transform (SA-DCT). See, e.g., FIGs. 9A-9C and Sikora. For some embodiments, the transform t_F( ) may be a Karhunen-Loève Transform (KLT), according to the "Karhunen-Loève Transform" section in the reference Dony, D., The Transform and Data Compression Handbook, Ed. Rao, K. R. and P. C. Yip (Boca Raton: CRC Press LLC) (2001).
[0140] This first stage of attribute encoding may be performed in parallel and independently on each individual node. Furthermore, the first stage of attribute encoding may be done without waiting for the entire geometry of the whole point cloud to be compressed. For some embodiments, only the corresponding geometry of the current node is processed in parallel.
Second Stage: Hierarchical Encoding of Leaf Node Attribute DC Values
[0141] Once (all) the leaf nodes have been processed in the first stage, a second stage further leverages the correlation across leaf nodes by inputting the DC values of each transformed attribute representation to a hierarchical transform, such as a RAHT transform or an octree node transform 1102, as illustrated in FIG. 11. The transformed representation of attributes includes, for each resolution level l ∈ [1, N - T - 1] and each occupied node n at level l, the AC coefficients {h_i, i ∈ [1, 7]}_l^n, plus the DC value {h_0}_0 of the root node. The DC value {h_0}_0 of the root node is the average value of the attributes of the entire point cloud. This transformed representation of attributes is passed through a quantization process 1106 followed by an arithmetic coding process 1110.
[0142] In some embodiments, not only the DC transform coefficient {w_0}_b of each leaf node b is inputted to the hierarchical transform, but also the other (AC) transform coefficients {w_i, i ∈ [1, N_b - 1]}_b. This second stage enables further compaction of the encoded attributes. The second stage involves relatively little computation because there are a limited number of resolution levels in the upper octree representation; hence, there are a limited number of nodes and associated elementary octree node transforms.
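A hedged sketch of the second stage, reusing the raht_merge step sketched earlier and simplifying sibling grouping to adjacent pairs (a real octree transform groups up to 8 spatial siblings per node):

```python
def encode_leaf_dc_values(leaf_dc, leaf_weights, merge):
    """Second-stage sketch: repeatedly merge sibling DC values bottom-up.
    leaf_dc / leaf_weights: per-leaf DC coefficients and point counts;
    merge: e.g., the raht_merge function sketched above."""
    acs = []
    values, weights = list(leaf_dc), list(leaf_weights)
    while len(values) > 1:
        next_v, next_w = [], []
        for i in range(0, len(values) - 1, 2):
            low, high, w = merge(values[i], weights[i], values[i + 1], weights[i + 1])
            next_v.append(low); next_w.append(w); acs.append(high)
        if len(values) % 2:                 # odd node passes through unmerged
            next_v.append(values[-1]); next_w.append(weights[-1])
        values, weights = next_v, next_w
    return values[0], acs                   # root DC value and AC coefficients
```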
Overall Encoding and Decoding
[0143] FIG. 12 is a process diagram illustrating an example parallelized encoding according to some embodiments. By processing the leaf nodes independently, the encoding and decoding processes may be parallelized, as illustrated in FIGs. 12 and 13. FIGs. 12 and 13 suggest the feasibility of a real-time encoding-decoding chain.
[0144] On the encoder side shown in the process 1200 of FIG. 12, the sequences of geometry encoding 1206, 1214, 1222, geometry reconstruction 1208, 1216, 1224, attribute transfer 1210, 1218, 1226 onto the reconstructed geometry, and attribute encoding 1212, 1220, 1228 of the transferred attributes may be performed in parallel and independently on each leaf node. Each of these processes may be performed as shown in FIG. 4 for some embodiments. For some embodiments, a geometry encode 1202 for octree levels 0 to N-T-1 is performed before the geometry encoding 1206, 1214, 1222 and the attribute encoding 1204. The hierarchical encoding 1204 of the DC values of the transformed attribute coefficients may be performed in a second step. The right side of FIG. 12 shows the outputting of DC and AC coefficients. For some embodiments, the calculation of the DC and AC coefficients shown in FIG. 11 may be applied to FIG. 12. Although the RAHT transform is shown in FIG. 12, other hierarchical transforms may be used in accordance with some embodiments. The attribute bitstream includes the concatenated AC coefficients of each leaf node, plus the hierarchically encoded DC values.
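The per-leaf parallelism of FIG. 12 may be sketched with a process pool; encode_leaf is a hypothetical callable standing for the geometry-encode, reconstruct, attribute-transfer, and attribute-encode chain applied to one leaf:

```python
from concurrent.futures import ProcessPoolExecutor

def encode_leaves_in_parallel(leaf_blocks, encode_leaf):
    """Run the per-leaf pipeline independently on each leaf node."""
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(encode_leaf, leaf_blocks))
    # Each result would carry the leaf's geometry sub-bitstream, its AC
    # coefficients, and its DC value; the DC values then feed the
    # second-stage hierarchical (e.g., RAHT) encoding.
    return results
```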
[0145] FIG. 13 is a process diagram illustrating an example parallelized decoding according to some embodiments. On the decoder side shown in the process 1300 of FIG. 13, the sequences of geometry decoding 1306, 1314, 1322, geometry reconstruction 1308, 1316, 1324, and AC attribute decoding 1310, 1318, 1326 may be performed independently on each leaf node. For a given leaf node, the corresponding geometry sub-bitstream is first decoded, and the corresponding geometry of this sub-part of the point cloud is reconstructed. Then, the attribute sub-bitstream corresponding to this same node is decoded, thereby building on the reconstructed geometry and yielding the high-frequency components of the attributes of the reconstructed points. The last step is to decode the RAHT-encoded leaf-DC sub-bitstream, yielding the average direct current (DC) attribute value of each leaf node. For each leaf node, the recovered DC value is finally added to the high-frequency attribute components of all belonging points. Geometry decoding 1306, 1314, 1322 and geometry reconstruction 1308, 1316, 1324 may be performed as shown in FIG. 4 for some embodiments. For some embodiments, a geometry decode 1302 for octree levels 0 to N-T-1 is performed before the geometry decoding 1306, 1314, 1322. The decoding 1304 of DC coefficients also may be performed in parallel. The recovered DC values are added 1312, 1320, 1328 to the attribute values of each leaf node on the right side of FIG. 13; the DC coefficient corresponds to the average of the attribute values of the leaf node.
Extension to 3D+T
[0146] For some embodiments, the attribute encoding is performed jointly per group of point cloud frames (GOF). For example, the attribute encoding may be performed per 8 consecutive frames. For each position of a sub-block b of size 2^T x 2^T x 2^T for which there is at least one occupied voxel within a given frame of the GOF, a 3D+T 4-dimensional point cloud is created by considering together the 8 blocks at the same position in the frames of the GOF. A transform t_F( ) may then be applied globally to the attributes of the points contained in such a sub-volume. For some embodiments, a 4-dimensional GFT transform may be performed, building on a graph connecting spatially and temporally neighboring points. For some embodiments, the point cloud frames of the GOF are spatially aligned before the transform, and the 3D motion information used to register the point cloud frames is transmitted in parallel.
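A minimal sketch of the 3D+T grouping, appending a frame index as the fourth coordinate before the joint transform (the motion alignment mentioned above is omitted here):

```python
import numpy as np

def build_4d_block(gof_blocks):
    """Stack co-located 2^T-cube blocks of a group of frames (GOF) into one
    3D+T point set: each point becomes (x, y, z, t)."""
    pts4d, attrs = [], []
    for t, (points, attributes) in enumerate(gof_blocks):  # e.g., 8 frames
        if points is not None and len(points):
            pts4d.append(np.column_stack([points, np.full(len(points), t)]))
            attrs.append(attributes)
    return np.vstack(pts4d), np.vstack(attrs)   # input to the 4D transform t_F
```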
[0147] The processes described herein may enable increased point cloud attribute compression performance together with decreased encoding and processing time, owing to a highly parallelizable encoding/decoding scheme. The processes described herein may be adopted into the G-PCC standard.
[0148] FIG. 14 is a flowchart illustrating an example process for encoding a point cloud according to some embodiments. For some embodiments, an example process 1400 may include obtaining 1402 a point cloud, the point cloud including: a first set of information describing a geometry of the point cloud, and a second set of information describing attributes of the point cloud. For some embodiments, the example process 1400 may further include performing 1404 a geometry encoding of the first set of information to generate a geometry bitstream. For some embodiments, the example process 1400 may further include performing 1406 a two-stage attribute compression process to generate an attribute bitstream, wherein a first stage of the two-stage attribute compression process includes performing a block transform on each of a set of nodes of the point cloud, and wherein a second stage of the two-stage attribute compression process includes performing a hierarchical encoding over the set of nodes of the point cloud. For some embodiments, the example process 1400 may further include outputting 1408 an output bitstream including the geometry bitstream and the attribute bitstream.
[0149] FIG. 15 is a flowchart illustrating an example process for decoding a point cloud according to some embodiments. For some embodiments, an example process 1500 may include obtaining 1502 an input bitstream, wherein the input bitstream includes a geometry bitstream and an attribute bitstream. For some embodiments, the example process 1500 may further include performing 1504 a geometry decode for one or more non-leaf, octree nodes within the geometry bitstream to generate one or more sets of decoded non-leaf node geometry data. For some embodiments, the example process 1500 may further include performing 1506 a geometry decode for one or more leaf nodes within the geometry bitstream to generate one or more sets of decoded leaf node geometry data. For some embodiments, the example process 1500 may further include performing 1508 a geometry reconstruction on the one or more sets of decoded leaf node geometry data. For some embodiments, the example process 1500 may further include performing 1510 an attribute hierarchical decode for a first set of nodes within the attribute bitstream to generate a first set of decoded attribute data. For some embodiments, the example process 1500 may further include performing 1512 an attribute leaf decode for a second set of nodes within the attribute bitstream to generate a second set of decoded attribute data. For some embodiments, the example process 1500 may further include performing 1514 an arithmetic average using the first set of decoded attribute data and the second set of decoded attribute data to generate an output set of attribute data.
[0150] While the methods and systems in accordance with some embodiments are generally discussed in the context of extended reality (XR), some embodiments may be applied to any XR context such as, e.g., virtual reality (VR), mixed reality (MR), and augmented reality (AR) contexts. Also, although the term "head mounted display (HMD)" is used herein in accordance with some embodiments, some embodiments may be applied to a wearable device (which may or may not be attached to the head) capable of, e.g., XR, VR, AR, and/or MR for some embodiments.
[0151] A first example method in accordance with some embodiments may include: obtaining a point cloud, the point cloud including: a first set of information describing a geometry of the point cloud, and a second set of information describing attributes of the point cloud; performing a geometry encoding of the first set of information to generate a geometry bitstream; performing a two-stage attribute compression process to generate an attribute bitstream, wherein a first stage of the two-stage attribute compression process includes performing a block transform on each of a set of nodes of the point cloud, and wherein a second stage of the two-stage attribute compression process includes performing a hierarchical encoding over the set of nodes of the point cloud, and outputting an output bitstream including the geometry bitstream and the attribute bitstream.
[0152] Some embodiments of the first example method may further include: quantizing an output of the first stage of the two-stage attribute compression process to generate a first set of quantized bits; quantizing an output of the second stage of the two-stage attribute compression process to generate a second set of quantized bits; and arithmetic coding the first and second sets of quantized bits to generate the attribute bitstream.
[0153] For some embodiments of the first example method, the block transform is a 3-dimensional Graph Fourier Transform (GFT).
[0154] For some embodiments of the first example method, the block transform is a 3-dimensional Shape- Adaptive Discrete Cosine Transform (SA-DCT).
[0155] For some embodiments of the first example method, the block transform is a 3-dimensional Karhunen-Loève Transform (KLT).
[0156] For some embodiments of the first example method, wherein the block transform is a 4-dimensional Graph Fourier Transform (GFT), and wherein one of the 4 dimensions is time.
[0157] For some embodiments of the first example method, the set of nodes include one or more leaf nodes.
[0158] For some embodiments of the first example method, the set of nodes include a level 1 node below a root level 0 node.
[0159] For some embodiments of the first example method, performing the hierarchical encoding over the set of nodes of the point cloud may include: obtaining, for each of the set of nodes, one or more respective
transform coefficients; determining, for each of the set of nodes, an average transform coefficient of the one or more respective transform coefficients; and performing a hierarchical transform on at least the average transform coefficient.
[0160] Some embodiments of the first example method may further include performing the hierarchical transform on at least two transform coefficients.
[0161] For some embodiments of the first example method, the two-stage attribute compression process is performed on top of a reconstructed geometry of the point cloud.
[0162] For some embodiments of the first example method, the two-stage attribute compression process is performed on a group of point cloud frames.
[0163] For some embodiments of the first example method, the group of point cloud frames are consecutive frames.
[0164] Some embodiments of the first example method may further include performing a geometry encode of one or more leaf nodes of the point cloud.
[0165] For some embodiments of the first example method, the geometry encode of a first leaf node occurs in parallel with the geometry encode of a second leaf node.
[0166] For some embodiments of the first example method, the geometry encoding is performed in parallel with at least a portion of the two-stage attribute compression process.
[0167] For some embodiments of the first example method, the hierarchical transform is a Region-Adaptive Hierarchical Transform (RAHT).
[0168] A first example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions operative, when executed by the processor, to cause the apparatus to perform any one of the methods listed above.
[0169] A second example method in accordance with some embodiments may include: obtaining an input bitstream, wherein the input bitstream includes a geometry bitstream and an attribute bitstream; performing a geometry decode for one or more non-leaf, octree nodes within the geometry bitstream to generate one or more sets of decoded non-leaf node geometry data; performing a geometry decode for one or more leaf nodes within the geometry bitstream to generate one or more sets of decoded leaf node geometry data; performing a geometry reconstruction on the one or more sets of decoded leaf node geometry data; performing an attribute hierarchical decode for a set of nodes within the attribute bitstream to generate a first set of decoded attribute data; performing an attribute leaf decode for the set of nodes within the attribute
bitstream to generate a second set of decoded attribute data; and performing an arithmetic average using the first set of decoded attribute data and the second set of decoded attribute data to generate an output set of attribute data.
[0170] A second example apparatus in accordance with some embodiments may include: a processor; and a non-transitory computer-readable medium storing instructions operative, when executed by the processor, to cause the apparatus to perform any one of the methods listed above.
[0171] A third example apparatus in accordance with some embodiments may include at least one processor configured to perform any one of the methods listed above.
[0172] A fourth example apparatus in accordance with some embodiments may include a computer-readable medium storing instructions for causing one or more processors to perform any one of the methods listed above.
[0173] A fifth example apparatus in accordance with some embodiments may include at least one processor and at least one non-transitory computer-readable medium storing instructions for causing the at least one processor to perform any one of the methods listed above.
[0174] An example signal in accordance with some embodiments may include a bitstream generated according to any one of the methods listed above.
[0175] This disclosure describes a variety of aspects, including tools, features, embodiments, models, approaches, etc. Many of these aspects are described with specificity and, at least to show the individual characteristics, are often described in a manner that may sound limiting. However, this is for purposes of clarity in description, and does not limit the disclosure or scope of those aspects. Indeed, all of the different aspects can be combined and interchanged to provide further aspects. Moreover, the aspects can be combined and interchanged with aspects described in earlier filings as well.
[0176] The aspects described and contemplated in this disclosure can be implemented in many different forms. While some embodiments are illustrated specifically, other embodiments are contemplated, and the discussion of particular embodiments does not limit the breadth of the implementations. At least one of the aspects generally relates to video encoding and decoding, and at least one other aspect generally relates to transmitting a bitstream generated or encoded. These and other aspects can be implemented as a method, an apparatus, a computer readable storage medium having stored thereon instructions for encoding or decoding video data according to any of the methods described, and/or a computer readable storage medium having stored thereon a bitstream generated according to any of the methods described.
[0177] In the present disclosure, the terms "reconstructed” and "decoded” may be used interchangeably, the terms "pixel” and "sample” may be used interchangeably, the terms "image,” "picture” and "frame” may be used interchangeably. Usually, but not necessarily, the term "reconstructed” is used at the encoder side while "decoded” is used at the decoder side.
[0178] The terms HDR (high dynamic range) and SDR (standard dynamic range) often convey specific values of dynamic range to those of ordinary skill in the art. However, additional embodiments are also intended in which a reference to HDR is understood to mean "higher dynamic range” and a reference to SDR is understood to mean "lower dynamic range.” Such additional embodiments are not constrained by any specific values of dynamic range that might often be associated with the terms "high dynamic range” and "standard dynamic range.”
[0179] Various methods are described herein, and each of the methods includes one or more steps or actions for achieving the described method. Unless a specific order of steps or actions is required for proper operation of the method, the order and/or use of specific steps and/or actions may be modified or combined. Additionally, terms such as "first”, "second”, etc. may be used in various embodiments to modify an element, component, step, operation, etc., such as, for example, a "first decoding” and a "second decoding”. Use of such terms does not imply an ordering to the modified operations unless specifically required. So, in this example, the first decoding need not be performed before the second decoding, and may occur, for example, before, during, or in an overlapping time period with the second decoding.
[0180] Various numeric values may be used in the present disclosure, for example. The specific values are for example purposes and the aspects described are not limited to these specific values.
[0181] Embodiments described herein may be carried out by computer software implemented by a processor or other hardware, or by a combination of hardware and software. As a non-limiting example, the embodiments can be implemented by one or more integrated circuits. The processor can be of any type appropriate to the technical environment and can encompass one or more of microprocessors, general purpose computers, special purpose computers, and processors based on a multi-core architecture, as nonlimiting examples.
[0182] Various implementations involve decoding. "Decoding”, as used in this disclosure, can encompass all or part of the processes performed, for example, on a received encoded sequence in order to produce a final output suitable for display. In various embodiments, such processes include one or more of the processes typically performed by a decoder, for example, entropy decoding, inverse quantization, inverse transformation, and differential decoding. In various embodiments, such processes also, or alternatively, include processes performed by a decoder of various implementations described in this disclosure, for
example, extracting a picture from a tiled (packed) picture, determining an upsampling filter to use and then upsampling a picture, and flipping a picture back to its intended orientation.
[0183] As further examples, in one embodiment "decoding” refers only to entropy decoding, in another embodiment "decoding” refers only to differential decoding, and in another embodiment "decoding” refers to a combination of entropy decoding and differential decoding. Whether the phrase "decoding process” is intended to refer specifically to a subset of operations or generally to the broader decoding process will be clear based on the context of the specific descriptions.
[0184] Various implementations involve encoding. In an analogous way to the above discussion about "decoding”, "encoding” as used in this disclosure can encompass all or part of the processes performed, for example, on an input video sequence in order to produce an encoded bitstream. In various embodiments, such processes include one or more of the processes typically performed by an encoder, for example, partitioning, differential encoding, transformation, quantization, and entropy encoding. In various embodiments, such processes also, or alternatively, include processes performed by an encoder of various implementations described in this disclosure.
[0185] As further examples, in one embodiment "encoding” refers only to entropy encoding, in another embodiment "encoding” refers only to differential encoding, and in another embodiment "encoding” refers to a combination of differential encoding and entropy encoding. Whether the phrase "encoding process” is intended to refer specifically to a subset of operations or generally to the broader encoding process will be clear based on the context of the specific descriptions.
[0186] When a figure is presented as a flow diagram, it should be understood that it also provides a block diagram of a corresponding apparatus. Similarly, when a figure is presented as a block diagram, it should be understood that it also provides a flow diagram of a corresponding method/process.
[0187] Various embodiments refer to rate distortion optimization. In particular, during the encoding process, the balance or trade-off between the rate and distortion is usually considered, often given the constraints of computational complexity. The rate distortion optimization is usually formulated as minimizing a rate distortion function, which is a weighted sum of the rate and of the distortion. There are different approaches to solve the rate distortion optimization problem. For example, the approaches may be based on an extensive testing of all encoding options, including all considered modes or coding parameters values, with a complete evaluation of their coding cost and related distortion of the reconstructed signal after coding and decoding. Faster approaches may also be used, to save encoding complexity, in particular with computation of an approximated distortion based on the prediction or the prediction residual signal, not the reconstructed one. A mix of these two approaches can also be used, such as by using an approximated
distortion for only some of the possible encoding options, and a complete distortion for other encoding options. Other approaches only evaluate a subset of the possible encoding options. More generally, many approaches employ any of a variety of techniques to perform the optimization, but the optimization is not necessarily a complete evaluation of both the coding cost and related distortion.
[0188] The implementations and aspects described herein can be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed can also be implemented in other forms (for example, an apparatus or program). An apparatus can be implemented in, for example, appropriate hardware, software, and firmware. The methods can be implemented in, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs”), and other devices that facilitate communication of information between end-users.
[0189] Reference to "one embodiment” or "an embodiment” or "one implementation” or "an implementation”, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrase "in one embodiment” or "in an embodiment” or "in one implementation” or "in an implementation”, as well any other variations, appearing in various places throughout this disclosure are not necessarily all referring to the same embodiment.
[0190] Additionally, this disclosure may refer to "determining” various pieces of information. Determining the information can include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory.
[0191] Further, this disclosure may refer to "accessing” various pieces of information. Accessing the information can include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, moving the information, copying the information, calculating the information, determining the information, predicting the information, or estimating the information.
[0192] Additionally, this disclosure may refer to "receiving” various pieces of information. Receiving is, as with "accessing”, intended to be a broad term. Receiving the information can include one or more of, for example, accessing the information, or retrieving the information (for example, from memory). Further, "receiving” is typically involved, in one way or another, during operations such as, for example, storing the information, processing the information, transmitting the information, moving the information, copying the
information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
[0193] It is to be appreciated that the use of any of the following “/”, "and/or”, and "at least one of, for example, in the cases of “A/B”, "A and/or B” and "at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of "A, B, and/or C” and "at least one of A, B, and C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended for as many items as are listed.
[0194] Also, as used herein, the word "signal” refers to, among other things, indicating something to a corresponding decoder. For example, in certain embodiments the encoder signals a particular one of a plurality of parameters for region-based filter parameter selection for de-artifact filtering. In this way, in an embodiment the same parameter is used at both the encoder side and the decoder side. Thus, for example, an encoder can transmit (explicit signaling) a particular parameter to the decoder so that the decoder can use the same particular parameter. Conversely, if the decoder already has the particular parameter as well as others, then signaling can be used without transmitting (implicit signaling) to simply allow the decoder to know and select the particular parameter. By avoiding transmission of any actual functions, a bit savings is realized in various embodiments. It is to be appreciated that signaling can be accomplished in a variety of ways. For example, one or more syntax elements, flags, and so forth are used to signal information to a corresponding decoder in various embodiments. While the preceding relates to the verb form of the word "signal”, the word "signal” can also be used herein as a noun.
[0195] Implementations can produce a variety of signals formatted to carry information that can be, for example, stored or transmitted. The information can include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal can be formatted to carry the bitstream of a described embodiment. Such a signal can be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting can include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries can be, for example, analog or digital information. The signal can be transmitted over a variety of different wired or wireless links, as is known. The signal can be stored on a processor-readable medium.
[0196] We describe a number of embodiments. Features of these embodiments can be provided alone or in any combination, across various claim categories and types. Further, embodiments can include one or more of the following features, devices, or aspects, alone or in any combination, across various claim categories and types:
• Adapting residues at an encoder according to any of the embodiments discussed.
• A bitstream or signal that includes one or more of the described syntax elements, or variations thereof.
• A bitstream or signal that includes syntax conveying information generated according to any of the embodiments described.
• Inserting in the signaling syntax elements that enable the decoder to adapt residues in a manner corresponding to that used by an encoder.
• Creating and/or transmitting and/or receiving and/or decoding a bitstream or signal that includes one or more of the described syntax elements, or variations thereof.
• Creating and/or transmitting and/or receiving and/or decoding according to any of the embodiments described.
• A method, process, apparatus, medium storing instructions, medium storing data, or signal according to any of the embodiments described.
• A TV, set-top box, cell phone, tablet, or other electronic device that performs adaptation of filter parameters according to any of the embodiments described.
• A TV, set-top box, cell phone, tablet, or other electronic device that performs adaptation of filter parameters according to any of the embodiments described, and that displays (e.g. using a monitor, screen, or other type of display) a resulting image.
• A TV, set-top box, cell phone, tablet, or other electronic device that selects (e.g. using a tuner) a channel to receive a signal including an encoded image, and performs adaptation of filter parameters according to any of the embodiments described.
• A TV, set-top box, cell phone, tablet, or other electronic device that receives (e.g. using an antenna) a signal over the air that includes an encoded image, and performs adaptation of filter parameters according to any of the embodiments described.
[0197] Note that various hardware elements of one or more of the described embodiments are referred to as "modules” that carry out (i.e., perform, execute, and the like) various functions that are described herein in connection with the respective modules. As used herein, a module includes hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or
more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more memory devices) deemed suitable by those of skill in the relevant art for a given implementation. Each described module may also include instructions executable for carrying out the one or more functions described as being carried out by the respective module, and it is noted that those instructions could take the form of or include hardware (i.e., hardwired) instructions, firmware instructions, software instructions, and/or the like, and may be stored in any suitable non-transitory computer-readable medium or media, such as commonly referred to as RAM, ROM, etc.
[0198] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
Claims
1. A method comprising: obtaining a point cloud, the point cloud comprising: a first set of information describing a geometry of the point cloud, and a second set of information describing attributes of the point cloud; performing a geometry encoding of the first set of information to generate a geometry bitstream; performing a two-stage attribute compression process to generate an attribute bitstream, wherein a first stage of the two-stage attribute compression process comprises performing a block transform on each of a set of nodes of the point cloud, and wherein a second stage of the two-stage attribute compression process comprises performing a hierarchical encoding over the set of nodes of the point cloud, and outputting an output bitstream comprising the geometry bitstream and the attribute bitstream.
2. The method of claim 1, further comprising: quantizing an output of the first stage of the two-stage attribute compression process to generate a first set of quantized bits; quantizing an output of the second stage of the two-stage attribute compression process to generate a second set of quantized bits; and arithmetic coding the first and second sets of quantized bits to generate the attribute bitstream.
3. The method of any one of claims 1-2, wherein the block transform is a 3-dimensional Graph Fourier
Transform (GFT).
4. The method of any one of claims 1-2, wherein the block transform is a 3-dimensional Shape-Adaptive
Discrete Cosine Transform (SA-DCT).
5. The method of any one of claims 1-2, wherein the block transform is a 3-dimensional Karhunen-Loève Transform (KLT).
6. The method of any one of claims 1-2, wherein the block transform is a 4-dimensional Graph Fourier Transform (GFT), and wherein one of the 4 dimensions is time.
7. The method of any one of claims 1-6, wherein the set of nodes comprise one or more leaf nodes.
8. The method of any one of claims 1-7, wherein the set of nodes comprise a level 1 node below a root level
0 node.
9. The method of any one of claims 1-8, wherein performing the hierarchical encoding over the set of nodes of the point cloud comprises: obtaining, for each of the set of nodes, one or more respective transform coefficients; determining, for each of the set of nodes, an average transform coefficient of the one or more respective transform coefficients; and performing a hierarchical transform on at least the average transform coefficient.
10. The method of claim 9, further comprising: performing the hierarchical transform on at least two transform coefficients.
11. The method of any one of claims 1-10, wherein the two-stage attribute compression process is performed on top of a reconstructed geometry of the point cloud.
12. The method of any one of claims 1-11, wherein the two-stage attribute compression process is performed on a group of point cloud frames.
13. The method of claim 12, wherein the group of point cloud frames are consecutive frames.
14. The method of any one of claims 1-13, further comprising performing a geometry encode of one or more leaf nodes of the point cloud.
15. The method of claim 14, wherein the geometry encode of a first leaf node occurs in parallel with the geometry encode of a second leaf node.
16. The method of any one of claims 1-15, wherein the geometry encoding is performed in parallel with at least a portion of the two-stage attribute compression process.
17. The method of any one of claims 9-16, wherein the hierarchical transform is a Region-Adaptive Hierarchical Transform (RAHT).
18. An apparatus comprising: a processor; and a non-transitory computer-readable medium storing instructions operative, when executed by the processor, to cause the apparatus to perform the method of any one of claims 1 through 17.
19. A method comprising: obtaining an input bitstream, wherein the input bitstream comprises a geometry bitstream and an attribute bitstream; performing a geometry decode for one or more non-leaf octree nodes within the geometry bitstream to generate one or more sets of decoded non-leaf node geometry data; performing a geometry decode for one or more leaf nodes within the geometry bitstream to generate one or more sets of decoded leaf node geometry data; performing a geometry reconstruction on the one or more sets of decoded leaf node geometry data; performing an attribute hierarchical decode for a set of nodes within the attribute bitstream to generate a first set of decoded attribute data; performing an attribute leaf decode for the set of nodes within the attribute bitstream to generate a second set of decoded attribute data; and performing an arithmetic average using the first set of decoded attribute data and the second set of decoded attribute data to generate an output set of attribute data.
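The decoder of claim 19 can be outlined as follows. The decode_stub helper is a placeholder invented for the sketch (a real decoder would invert the entropy coding and the transforms), but the final arithmetic average of the two attribute reconstructions mirrors the claim's last step.

```python
# Hypothetical decoder flow for claim 19 with stub decoders.
import numpy as np

def decode_stub(name, shape):
    seed = sum(map(ord, name))                # deterministic toy seed per stream
    return np.random.default_rng(seed).random(shape)

geom_nonleaf = decode_stub("geo/nonleaf", (32, 3))   # non-leaf octree geometry
geom_leaf = decode_stub("geo/leaf", (32, 3))         # leaf geometry, then recon
attrs_hier = decode_stub("attr/hier", (32, 3))       # hierarchical attribute path
attrs_leaf = decode_stub("attr/leaf", (32, 3))       # leaf (block) attribute path
attrs_out = (attrs_hier + attrs_leaf) / 2.0          # claim 19's arithmetic average
```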
20. An apparatus comprising: a processor; and a non-transitory computer-readable medium storing instructions operative, when executed by the processor, to cause the apparatus to perform the method of claim 19.
21. An apparatus comprising at least one processor configured to perform the method of any one of claims 1-17 and 19.
22. An apparatus comprising a computer-readable medium storing instructions for causing one or more processors to perform the method of any one of claims 1-17 and 19.
23. An apparatus comprising at least one processor and at least one non-transitory computer-readable medium storing instructions for causing the at least one processor to perform the method of any one of claims 1-17 and 19.
24. A signal including a bitstream generated according to any one of claims 1-17 and 19.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP23306739.6 | 2023-10-09 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025078201A1 true WO2025078201A1 (en) | 2025-04-17 |
Family
ID=88600425
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2024/077544 (WO2025078201A1, pending) | Two-stage point cloud attribute encoding scheme with nested local and global transforms | 2023-10-09 | 2024-10-01 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025078201A1 (en) |
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2022257968A1 (en) * | 2021-06-11 | 2022-12-15 | Vivo Mobile Communication Co., Ltd. | Point cloud coding method, point cloud decoding method, and terminal |
Non-Patent Citations (7)
| Title |
|---|
| "G-PCC codec description", no. n19331, 25 June 2020 (2020-06-25), XP030289576, Retrieved from the Internet <URL:http://phenix.int-evry.fr/mpeg/doc_end_user/documents/130_Alpbach/wg11/w19331.zip w19331.docx> [retrieved on 20200625] * |
| DONY, D.: "The Transform and Data Compression Handbook", 2001, CRC PRESS LLC |
| FREITAS DAVI R ET AL: "Geometry-Based Compression of Plenoptic Point Clouds", 2022 IEEE 24TH INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), IEEE, 26 September 2022 (2022-09-26), pages 1 - 5, XP034231394, DOI: 10.1109/MMSP55362.2022.9949107 * |
| GRAZIOSI, DANILLO ET AL.: "An Overview of Ongoing Point Cloud Compression Standardization Activities: Video-Based (V-PCC) and Geometry-Based (G-PCC)", APSIPA TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING, 2020, pages 1 - 15 |
| SIKORA, T.; MAKAI, B.: "Shape-Adaptive DCT for Generic Coding of Video", IEEE TRANS. ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 1995 |
| YING ZIYU ET AL: "Pushing Point Cloud Compression to the Edge", 2022 55TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE (MICRO), IEEE, 1 October 2022 (2022-10-01), pages 282 - 299, XP034214551, DOI: 10.1109/MICRO56248.2022.00031 * |
| ZHANG, CHA ET AL.: "Point Cloud Attribute Compression with Graph Transform", IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, 2014 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20250056002A1 (en) | Precision refinement for motion compensation with optical flow | |
| WO2024015400A1 (en) | Deep distribution-aware point feature extractor for ai-based point cloud compression | |
| EP4588242A1 (en) | Context-aware voxel-based upsampling for point cloud processing | |
| WO2024220568A1 (en) | Generative-based predictive coding for point cloud compression | |
| KR20250108615A (en) | Heterogeneous mesh autoencoder | |
| WO2025049125A1 (en) | An enhanced feature processing for image compression based on feature distribution learning | |
| JP2025536907A (en) | Point-based attribute transfer for textured meshes | |
| WO2025078201A1 (en) | Two-stage point cloud attribute encoding scheme with nested local and global transforms | |
| WO2025078267A1 (en) | Hybrid point cloud encoding method with local surface representation | |
| CN115136602A (en) | 3D point cloud enhancement with multiple measurements | |
| WO2025080447A1 (en) | Implicit predictive coding for point cloud compression | |
| EP4636611A1 (en) | Raht learning based prediction | |
| WO2025149464A1 (en) | Alternative prediction methods for the lifting wavelet transform in subdivision mesh surfaces | |
| WO2025080594A1 (en) | Octree feature for deep-feature based point cloud compression | |
| US20250365427A1 (en) | Multi-resolution motion feature for dynamic pcc | |
| US20250337952A1 (en) | Providing segmentation information for immersive video | |
| WO2025080446A1 (en) | Explicit predictive coding for point cloud compression | |
| US20250343920A1 (en) | Rate control for point cloud coding with a hyperprior model | |
| EP4633166A1 (en) | Inter block multi-layer intra prediction for region-adaptive hierarchical transform | |
| WO2025080438A1 (en) | Intra frame dynamics for lidar point cloud compression | |
| WO2025078337A1 (en) | Avatar media representation for transmission | |
| WO2025153193A1 (en) | Geometry avatar media codec for transmission | |
| WO2025049126A1 (en) | An enhanced feature processing for point cloud compression based on feature distribution learning | |
| WO2025014553A1 (en) | Generative-based predictive coding for lidar point cloud compression | |
| WO2025168559A1 (en) | Micro patch convolutions for point cloud attribute compression |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24783244; Country of ref document: EP; Kind code of ref document: A1 |