Systems and methods for signaling missing or corrupted video data (HK40001868B)
Description
Technical Field
This application relates to signaling lost or corrupted video data. The signaling information may be part of a media file associated with one or more media file formats, such as the ISO base media file format (ISOBMFF) or another suitable file format, and/or with a file format associated with a streaming application, such as dynamic adaptive streaming over HTTP (DASH), HTTP Live Streaming (HLS), the Common Media Application Format (CMAF), and/or another suitable streaming application.
Background
Video coding standards include ITU-T H.261; ISO/IEC MPEG-1 Visual; ITU-T H.262 or ISO/IEC MPEG-2 Visual; ITU-T H.263; ISO/IEC MPEG-4 Visual; ITU-T H.264 or ISO/IEC MPEG-4 AVC, including its scalable video coding extension known as Scalable Video Coding (SVC) and its multiview video coding extension known as Multiview Video Coding (MVC); and High Efficiency Video Coding (HEVC), also known as ITU-T H.265 and ISO/IEC 23008-2, including its scalable coding extension (Scalable High Efficiency Video Coding, SHVC) and its multiview extension (Multiview High Efficiency Video Coding, MV-HEVC).
Disclosure of Invention
In some embodiments, techniques are described for indicating that media content includes lost and/or corrupted video data. For example, an indication may be added to a file to indicate that the media content in the file includes lost and/or corrupted media data. In another example, lost and/or corrupted media data may be indicated by not allowing such media data to be included in a file and/or bitstream. For instance, transmitter-side constraints may be defined that require an encoder or other transmitter-side device not to include corrupted media frames in a file (during encapsulation) and/or in a segment (during segmentation). The missing or corrupted video data may comprise video data for a video frame (referred to as a missing or corrupted video frame), video data for a video segment (referred to as a missing or corrupted video segment), or other missing video data. By signaling the missing and/or corrupted video data, a video player device can appropriately render or otherwise handle the missing and/or corrupted video frames as the media content is processed.
According to at least one example, a method of processing video data is provided. The method includes obtaining a plurality of frames of video data, determining that at least one frame of the plurality of frames is corrupted, generating an indication that the at least one frame is corrupted, and generating a media file containing the indication.
In another example, an apparatus for processing video data is provided. The apparatus may include a memory configured to store video data and a processor (e.g., processing circuitry). The processor is configured to obtain a plurality of frames of video data, determine that at least one frame of the plurality of frames is corrupted, generate an indication that the at least one frame is corrupted, and generate a media file including the indication.
In another example, a non-transitory computer-readable medium is provided having instructions stored thereon which, when executed by one or more processors, cause the one or more processors to: obtain a plurality of frames of video data; determine that at least one frame of the plurality of frames is corrupted; generate an indication that the at least one frame is corrupted; and generate a media file comprising the indication.
In another example, an apparatus for processing video data is provided. The apparatus comprises: means for obtaining a plurality of frames of video data; means for determining that at least one frame of the plurality of frames is corrupted; means for generating an indication that the at least one frame is corrupted; and means for generating a media file including the indication.
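A minimal, non-normative sketch of this writer-side flow is shown below; the helper names (e.g., `is_frame_corrupted`, `build_media_file`) and the corruption check are illustrative assumptions for this sketch, not part of any file format specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    index: int
    data: bytes          # encoded payload (possibly partial or empty)
    expected_size: int   # size announced for the frame by the encoder/transmitter

@dataclass
class MediaFile:
    frames: List[Frame] = field(default_factory=list)
    corrupted_indices: List[int] = field(default_factory=list)  # the "indication"

def is_frame_corrupted(frame: Frame) -> bool:
    # Illustrative check: payload missing or shorter than announced.
    return len(frame.data) < frame.expected_size

def build_media_file(frames: List[Frame]) -> MediaFile:
    media = MediaFile(frames=frames)
    for frame in frames:
        if is_frame_corrupted(frame):
            # Generate an indication that this frame is corrupted and carry
            # it inside the generated media file.
            media.corrupted_indices.append(frame.index)
    return media
```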
In some aspects, the video data includes first data corresponding to at least one frame of the plurality of frames. In such cases, the first data is insufficient to correctly decode the at least one frame.
In some aspects, the at least one frame is part of an inter prediction chain, and the video data includes first data corresponding to the inter prediction chain. In such cases, the first data is insufficient to correctly decode the at least one frame.
In some aspects, the video data may include a plurality of video samples. Each of the plurality of video samples includes one or more frames of the plurality of frames. The plurality of video samples includes a first video sample including at least one frame that is corrupted. The first video sample is associated with a type identifier that identifies a type of content included in the first video sample. The indication may include a type identifier.
In some aspects, the type identifier may indicate that the first video sample includes the at least one corrupted frame. The type identifier may also indicate the media type and the type of decoder used to process the media file. In some aspects, the type identifier includes a sample entry type.
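For instance, in an ISOBMFF-style container a sample entry is itself a box identified by a four-character code. The sketch below builds such a box using a hypothetical code ('crpt') standing in for a sample entry type that flags corrupted content; the code and its empty payload are assumptions for illustration, not codes defined by ISOBMFF.

```python
import struct

def make_box(box_type: bytes, payload: bytes) -> bytes:
    # ISOBMFF box layout: 32-bit big-endian size (header included),
    # a four-character type code, then the payload.
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

# Hypothetical sample entry type meaning "this sample's content is corrupted".
corrupted_sample_entry = make_box(b"crpt", b"")
print(corrupted_sample_entry.hex())  # 00000008 followed by 'crpt' (63727074)
```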
In some aspects, the media file is based on the International Organization for Standardization (ISO) base media file format (ISOBMFF).
In some aspects, a media file may include a list representation of a plurality of video data segments. The plurality of segments may include a first segment and a second segment. Each of the first and second segments may include one or more frames of the plurality of frames. The second segment may further include one or more missing frames of the plurality of frames. The indication may be a first indication. In some aspects, the methods, apparatus, and computer-readable media may further comprise: determining that the second segment includes the one or more missing frames; generating a second indication of the one or more missing frames; and including the second indication in the media file.
In some aspects, the media file is based on a Media Presentation Description (MPD) format. The list representation may include one or more adaptation sets. Each of the one or more adaptation sets includes one or more representations and/or one or more sub-representations that include the one or more missing frames. Each of the one or more representations or one or more sub-representations is associated with one or more segments. The second indication includes one or more elements associated with the one or more missing frames included in the one or more representations or the one or more sub-representations. The one or more elements are associated with a set of attributes that includes a timestamp and a duration of the second segment.
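A sketch of how such an element might be attached to a representation in an MPD-like XML document follows; the `MissingContent` element and its `t`/`d` attributes are illustrative names assumed for this sketch, not elements defined by the DASH specification.

```python
import xml.etree.ElementTree as ET

mpd = ET.Element("MPD")
period = ET.SubElement(mpd, "Period")
adaptation_set = ET.SubElement(period, "AdaptationSet", mimeType="video/mp4")
representation = ET.SubElement(adaptation_set, "Representation", id="video-1")

# Hypothetical element carrying the second indication: the presentation
# timestamp and duration of the missing frames in the second segment.
ET.SubElement(representation, "MissingContent", t="900000", d="180000")

print(ET.tostring(mpd, encoding="unicode"))
```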
In some aspects, the list representation includes information for retrieving the first segment but not the second segment. The second indication is the omission of the information for retrieving the second segment.
In some aspects, the list representation includes a text indicator associated with the second segment. The text indicator may indicate that the second segment includes one or more missing frames. The second indication may include a textual indicator.
In some aspects, the media file is based on an HTTP Live Streaming (HLS) playlist format. Each of the plurality of segments is associated with a Transport Stream (TS) file. The list representation contains a set of tags, and the text indicator is a tag in the set of tags that is associated with the second segment.
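The sketch below writes an HLS-style media playlist in which one segment carries such a textual indicator; the tag name used here is an assumption for illustration and is not a tag defined by the HLS specification.

```python
segments = [
    {"uri": "seg1.ts", "duration": 6.0, "missing": False},
    {"uri": "seg2.ts", "duration": 6.0, "missing": True},   # one or more missing frames
    {"uri": "seg3.ts", "duration": 6.0, "missing": False},
]

lines = ["#EXTM3U", "#EXT-X-VERSION:3", "#EXT-X-TARGETDURATION:6"]
for seg in segments:
    if seg["missing"]:
        # Illustrative text indicator (tag) associated with the second segment.
        lines.append("#EXT-X-MISSING-FRAMES")
    lines.append(f"#EXTINF:{seg['duration']:.1f},")
    lines.append(seg["uri"])
lines.append("#EXT-X-ENDLIST")

print("\n".join(lines))
```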
In some aspects, the media file is based on the Common Media Application Format (CMAF) and includes a playlist. Each of the plurality of fragments is associated with an ISOBMFF file. The list representation may include a set of tags, and the text indicator is a tag in the set of tags associated with the second segment.
In some aspects, an apparatus includes a mobile device having a camera for capturing pictures.
According to at least one other example, a method of processing a media file is provided. The method includes obtaining a media file containing media content, the media content containing a plurality of frames of video data. The method further includes determining that the plurality of frames contains at least one corrupted frame based on the indication in the media file. The method further includes processing at least one corrupted frame based on the indication.
According to another example, an apparatus for processing a media file is provided. An apparatus may include a memory configured to store a media file and a processor. The processor is configured to obtain a media file including media content. The media content comprises a plurality of frames of video data. The processor is further configured to determine, based on the indication in the media file, that the plurality of frames includes at least one corrupted frame. The processor is further configured to process at least one corrupted frame based on the indication.
In another example, a non-transitory computer-readable medium is provided having instructions stored thereon, which when executed by one or more processors, cause the one or more processors to: obtaining a media file containing media content, the media content comprising a plurality of frames of video data; determining that the plurality of frames contains at least one corrupted frame based on the indication in the media file; and processing at least one corrupted frame based on the indication.
In another example, an apparatus for processing video data is provided. The apparatus comprises: means for obtaining a media file comprising media content, the media content comprising a plurality of frames of video data; means for determining that the plurality of frames includes at least one corrupted frame based on an indication in a media file; and means for processing the at least one corrupted frame based on the indication.
In some aspects, the video data includes first data corresponding to at least one frame of the plurality of frames. In such aspects, the first data is insufficient to correctly decode the at least one frame.
In some aspects, the at least one frame is part of an inter prediction chain, and the video data includes first data corresponding to the inter prediction chain. In such cases, the first data is insufficient to correctly decode the at least one frame.
In some aspects, the media content includes a plurality of video samples, wherein each of the plurality of video samples includes one or more frames of the plurality of frames. The plurality of video samples includes a first video sample including at least one frame that is corrupted. The first video sample is associated with a type identifier that identifies a type of content included in the first video sample. In such aspects, the indication includes a type identifier.
In some aspects, the type identifier indicates that the first video sample includes the at least one corrupted frame. In some cases, the type identifier indicates the media type and the type of decoder used to process the media file. In some aspects, the type identifier includes a sample entry type.
In some aspects, the media file is based on the International Organization for Standardization (ISO) base media file format (ISOBMFF).
In some aspects, processing the at least one corrupted frame based on the indication comprises: identifying, based on the indication, a portion of the media content corresponding to the at least one corrupted frame; and skipping processing of the portion of the media content.
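A minimal sketch of this receiver-side behavior, assuming the indication has already been parsed into a set of corrupted sample indices (the names `decode_frame` and `process_media_content` are illustrative):

```python
def decode_frame(payload: bytes) -> bytes:
    # Stand-in for a real decoder call (e.g., an AVC/HEVC decoder binding).
    return payload

def process_media_content(samples, corrupted_indices):
    """Decode samples, skipping the portions flagged as corrupted."""
    flagged = set(corrupted_indices)
    decoded = []
    for index, payload in enumerate(samples):
        if index in flagged:
            # Identify the portion of the media content corresponding to the
            # corrupted frame(s) and skip it rather than handing it to the decoder.
            continue
        decoded.append(decode_frame(payload))
    return decoded

print(len(process_media_content([b"f0", b"f1", b"f2"], corrupted_indices=[1])))  # 2
```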
In some aspects, the media file contains a list representation of a plurality of video data segments. The plurality of segments includes a first segment and a second segment. Each of the first and second segments includes one or more frames of the plurality of frames. The second segment further includes one or more missing frames of the plurality of frames. The indication is a first indication, and the media file further includes a second indication to indicate that the second segment includes the one or more missing frames of the plurality of frames.
In some aspects, the media file is based on a Media Presentation Description (MPD) format, and the list representation includes one or more adaptation sets. Each of the one or more adaptation sets includes one or more representations and/or one or more sub-representations that include the one or more missing frames. Each of the one or more representations or one or more sub-representations is associated with one or more segments. The second indication includes one or more elements associated with the one or more missing frames included in the one or more representations or sub-representations associated with the second segment. The one or more elements are associated with a set of attributes that includes a timestamp and a duration of the second segment.
In some aspects, the list representation includes information for retrieving the first segment but not the second segment. In such aspects, the second indication is the omission of the information for retrieving the second segment.
In some aspects, the list representation includes a text indicator associated with the second segment. The text indicator indicates that the second segment includes one or more missing frames. In such aspects, the second indication includes a textual indicator.
In some aspects, the media file is based on an HTTP Live Streaming (HLS) playlist format, and each of the plurality of segments is associated with a Transport Stream (TS) file. In such aspects, the list representation includes a set of tags, and the text indicator is a tag of the set of tags that is associated with the second segment.
In some aspects, the media file is based on the Common Media Application Format (CMAF) and includes a playlist. Each fragment of the plurality of fragments is associated with an ISOBMFF file. In such aspects, the list representation includes a set of tags, and the text indicator is a tag of the set of tags that is associated with the second segment.
In some aspects, processing the at least one corrupted frame based on the indication comprises transmitting a request to a streaming server to request a third segment to replace the second segment.
In some aspects, the apparatus further comprises a display for displaying one or more of the plurality of frames of video data.
In some aspects, an apparatus includes a mobile device having a camera for capturing pictures.
This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used in isolation to determine the scope of the claimed subject matter. The subject matter should be understood by reference to the appropriate portions of the entire specification of this patent, any or all of the drawings, and each claim.
The foregoing, along with other features and embodiments, will become more apparent when referring to the following description, claims, and accompanying drawings.
Drawings
Illustrative embodiments of the invention are described in detail below with reference to the following drawings:
FIG. 1 is a block diagram illustrating an example of a system including an encoding device and a decoding device.
Figs. 2 and 3 illustrate examples of ISO base media files containing data and metadata for a video presentation, formatted according to ISOBMFF.
Fig. 4 illustrates an example system for media streaming.
FIG. 5 provides a graphical representation of an example of a media presentation description.
Fig. 6 provides a graphical representation of an example of a playlist.
Figs. 7A and 7B illustrate an example of signaling corrupted frames in an ISOBMFF file.
Figs. 8A and 8B illustrate an example of signaling lost frames in an ISOBMFF file.
Fig. 9 illustrates an example of signaling missing frames in an ISOBMFF file.
Fig. 10 illustrates an example of providing unified signaling of lost or corrupted video frames in an ISOBMFF file.
Figs. 11 and 12 illustrate examples of signaling missing file segments for media streaming.
Fig. 13 illustrates an example of a process for processing video data.
FIG. 14 illustrates an example of a process for processing a media file.
Fig. 15 is a block diagram illustrating an example encoding device that may implement one or more of the techniques described in this disclosure.
FIG. 16 is a block diagram illustrating an example decoding device.
Detailed Description
Certain aspects and embodiments of the disclosure are provided below. Some of these aspects and embodiments are applicable independently and some of them may be applied in combinations that will be apparent to those skilled in the art. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, that various embodiments may be practiced without these specific details. The drawings and description are not intended to be limiting.
The following description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It is understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth in the appended claims.
Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without such specific details. For example, circuits, systems, networks, processes, and other components may be shown in block diagram form as components in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. Additionally, the order of the operations may be rearranged. A process terminates when its operations are completed, but may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a procedure corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
The term "computer-readable medium" includes, but is not limited to portable or non-portable storage devices, optical storage devices, and various other media capable of storing, containing, or carrying instruction(s) and/or data. Computer-readable media may include non-transitory media in which data may be stored and does not include carrier waves and/or transitory electronic signals that propagate wirelessly or via a wired connection. Examples of non-transitory media may include, but are not limited to, magnetic disks or tapes, optical storage media such as Compact Discs (CDs) or Digital Versatile Discs (DVDs), flash memory, or memory devices. A computer-readable medium may have code and/or machine-executable instructions stored thereon that may represent programs, functions, subroutines, programs, routines, subroutines, modules, suites, classes or instructions, data structures, or any combination of program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, or the like.
Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments (e.g., computer program products) to perform the necessary tasks may be stored in a computer-readable or machine-readable medium. One or more processors may perform the necessary tasks.
Video frames may be encoded and/or compressed for storage and/or transmission. The encoding and/or compression may be accomplished using a video codec (e.g., an h.265/HEVC compliant codec, an h.264/AVC compliant codec, or other suitable codec) and a compressed video bitstream or group of bitstreams is generated. Encoding video data using a video codec is described in further detail below.
One or more encoded video bitstreams may be stored and/or encapsulated in a media format or a file format. The one or more stored bitstreams may be transmitted, e.g., over a network, to a receiver device, which may decode and render the video for display. Such a receiver device may be referred to herein as a video display device. For example, a streaming server may generate an encapsulated file from encoded video data (e.g., using an International Standards Organization (ISO) base media file format, and/or other file formats optimized for streaming). For example, a video codec may encode video data and an encapsulation engine may generate a media file by encapsulating the video data in one or more ISO format media files. Alternatively or additionally, the one or more stored bitstreams may be provided directly from the storage medium to the receiver device.
The receiver device may also implement a codec to decode and/or decompress the encoded video bitstream. The receiver device may support the media or file format used to encapsulate the video bitstream into a file (or multiple files), and may extract the video (and possibly also audio) data to generate the encoded video data. For example, the receiver device parses the media file containing the encapsulated video data to generate the encoded video data, and a codec in the receiver device may decode the encoded video data. The receiver device may then send the decoded video signal to a rendering device (e.g., a video display device). The rendering device may render the video at the same frame rate at which the video was captured, or at a different frame rate.
File format standards may define formats for encapsulating and decapsulating video (and possibly audio) data into one or more files. File format standards include the International Organization for Standardization (ISO) base media file format (ISOBMFF, defined in ISO/IEC 14496-12) and other file formats derived from ISOBMFF, including the Moving Picture Experts Group (MPEG) MPEG-4 file format (defined in ISO/IEC 14496-14), the Third Generation Partnership Project (3GPP) file format (defined in 3GPP TS 26.244), and the Advanced Video Coding (AVC) and High Efficiency Video Coding (HEVC) file formats (both defined in ISO/IEC 14496-15). Recent draft texts of ISO/IEC 14496-12 and 14496-15 are available at http://phenix.int-evry.fr/mpeg/doc_end_user/documents/111_Geneva/wg11/w15177-v6-w15177.zip and http://phenix.int-evry.fr/mpeg/doc_end_user/documents/112_Warsaw/wg11/w15479-v2-w15479.zip, respectively.
ISOBMFF serves as a basis for many codec encapsulation formats (e.g., AVC file format or any other suitable codec encapsulation format), as well as for many multimedia container formats (e.g., MPEG-4 file format, 3GPP file format (3GP), DVB file format, or any other suitable multimedia container format). The ISOBMFF base file format may be used for continuous media, which is also referred to as streaming media.
In addition to continuous media (e.g., audio and video), static media (e.g., images) and metadata may be stored in files that comply with ISOBMFF. Files constructed according to ISOBMFF may be used for many purposes (including local media file playback, progressive downloading of remote files), as segments of a media streaming scheme, such as dynamic adaptive streaming over HTTP (DASH), a media streaming scheme using Common Media Application Format (CMAF), etc., as containers for content to be streamed (in this case, containers containing packetization instructions), for recording of received real-time media bitstreams, or other uses.
A media file or media bitstream may include damaged or missing video frames in the encoded data. A lost frame may occur when the encoded data for that lost frame is lost altogether. Corrupted frames may occur in different ways. For example, a frame may become corrupted when portions of the encoded data for that frame are lost. As another example, a frame may become corrupted when the frame is part of an inter-prediction chain, and some other encoded data of the inter-prediction chain is lost so that the frame is not correctly decodable.
Corrupted or missing video frames in the encoded data may be attributed to a variety of reasons. For example, data loss may occur during transmission of a media bitstream (a compressed, encapsulated bitstream). As a result, a media file may be only partially received and recorded, and there may therefore be missing or corrupted video frames in the recorded file. As mentioned previously, a lost frame is a frame whose coded data is lost entirely, and a corrupted frame is a frame whose coded data is lost partially, or for which some coded data of frames in the inter-prediction chain is lost so that the corrupted frame cannot be decoded correctly. As another example, encoded media data may become corrupted (e.g., due to media file corruption) or even lost before being encapsulated for transmission at a server. As another example, an encoder (or transcoder) may crash or fail while encoding the media data. Encoder failure may result in some frames not being encoded (and not included) in the encoded data, such that the encoded data includes a missing frame. Encoder failure may also result in partially encoded frames and the inclusion of partial data in the encoded data. The encoded data may also include corrupted frames if the partial data is not sufficient to correctly decode the frame.
Data loss or absence may also occur before the media encoder handles the video data. In some cases, frames may be skipped by the encoder during encoding. In such cases, the encoder may encode the bitstream without the lost or skipped frames, and the bitstream may have a non-constant frame rate. In effect, for video, the frame immediately preceding a missing or skipped frame has a longer play duration, and for audio, the missing or skipped frame is treated as a silent frame. In some cases, for each lost or skipped video frame, the encoder may instead encode a virtual video frame using a minimum number of bits, in which case the decoding result of the virtual video frame is exactly the same as the previous frame in output order. For speech/audio, silent frames are encoded, thus keeping the bitstream at a constant frame rate. In either case, the coded media bitstream is considered to have no missing or corrupted frames, and the media encapsulation and/or streaming format in the file may remain the same as if there were no such data loss/absence or frame skipping.
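A sketch of the first option (no virtual frames are inserted; the frame before a gap simply plays longer), assuming per-frame capture timestamps in milliseconds are available:

```python
def sample_durations(timestamps_ms, track_duration_ms):
    """Per-frame play durations for a variable-frame-rate track: each frame
    plays until the next encoded frame (or until the end of the track), so a
    frame immediately preceding skipped frames gets a longer duration."""
    durations = []
    for i, ts in enumerate(timestamps_ms):
        next_ts = timestamps_ms[i + 1] if i + 1 < len(timestamps_ms) else track_duration_ms
        durations.append(next_ts - ts)
    return durations

# ~30 fps capture with the frame near 100 ms skipped:
print(sample_durations([0, 33, 66, 133, 166], track_duration_ms=200))
# [33, 33, 67, 33, 34] -- the frame at 66 ms plays for 67 ms
```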
As mentioned previously, data loss or absence may occur during encoding and/or transcoding. In such cases, there may also be lost and/or corrupted media frames, depending on when a crash or failure occurs. For media processing functionality after the media encoder (encapsulation and/or segmentation), the handling may be the same as for the case of data loss that occurs after the encoder.
ISOBMFF and its derived file formats (e.g., the AVC file format or other derived file formats) are widely used for the storage and encapsulation of media content (e.g., including video, audio, and timed text) in many multimedia applications. However, ISOBMFF and the file formats derived from ISOBMFF do not include specifications for signaling corrupted video frames. In addition, there is a lack of a mechanism for signaling missing or corrupted video frames in media streaming schemes.
The lack of a signaling scheme may result in undesirable behavior in the receiver device when processing encoded data with missing or corrupted video frames. For example, a receiver device may attempt to decode a frame that cannot be decoded because the encoded data for the frame is lost or corrupted. As a result, the decoder may crash or shut down. Further, during a media streaming session, the receiver device may attempt to retrieve and play media segment files that are missing or that contain missing frames. When the receiver device fails to retrieve and play a media segment file, the media streaming session may be interrupted. Both conditions disturb media playback and the streaming process, resulting in a poor user experience.
In various implementations, modifications and/or additions to ISOBMFF may indicate/signal that a file formatted according to ISOBMFF, or according to a format derived from ISOBMFF, includes corrupted video frames. For example, in some implementations, a media file may include an indicator to indicate that one or more video frames associated with a particular play timestamp and play duration are corrupted. The indicator may also be configured as a unified indicator associated with both corrupted video frames and missing video frames. In some implementations, signaling the presence of corrupted video frames may take the form of omitting those corrupted video frames from the media file. In various implementations, modifications and/or additions to existing media streaming schemes may also indicate to a receiver device that a media segment contains a missing (or otherwise un-decodable) frame before the receiver device requests the media segment.
In these and other implementations, the receiver device may recognize, based on the indication/indicator/signaling/signal, that the media file includes a corrupted video frame. The receiver device may also identify the portions of the encoded data that include the corrupted video frame prior to decoding the data, and take certain measures in handling the corrupted video frame. For example, the receiver device may skip decoding of the corrupted video frame and move on to the next decodable video frame, to avoid crashing or shutting down the decoder as described above. Further, during a media streaming session, the receiver device may also recognize, based on the indication/indicator/signaling/signal, that a media segment includes a missing (or otherwise un-decodable) frame prior to requesting the media segment, and take certain measures in handling the media segment. For example, the receiver device may obtain another media segment (e.g., a media segment having the same content but from a different source, a media segment having the same timestamp and duration but a different resolution/bit rate, etc.) to maintain continuity of the streaming session.
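A sketch of such a streaming client decision, assuming the manifest has been parsed into per-segment flags and that an alternative source carries the same segment names (the URLs and field names are illustrative):

```python
import urllib.request

def fetch_segment(base_url: str, name: str) -> bytes:
    # Plain HTTP fetch of one media segment (illustrative).
    with urllib.request.urlopen(f"{base_url}/{name}") as resp:
        return resp.read()

def fetch_with_fallback(segment: dict, primary_base: str, fallback_base: str) -> bytes:
    """Use the manifest indication to avoid requesting an unplayable segment."""
    if not segment["missing"]:
        return fetch_segment(primary_base, segment["name"])
    # The indication says this segment cannot be played from the primary
    # source; request it from another source (or another representation)
    # to keep the streaming session continuous.
    return fetch_segment(fallback_base, segment["name"])
```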
Fig. 1 is a block diagram illustrating an example of a system 100 including an encoding device 104 and a decoding device 112. Encoding device 104 may be part of a source device and decoding device 112 may be part of a receiver device. The source device and/or the receiver device may comprise an electronic device, such as a mobile or stationary telephone handset (e.g., a smartphone, a cellular telephone, or the like), a desktop computer, a laptop or notebook computer, a tablet computer, a set-top box, a television, a video camera, a display device, a digital media player, a video game console, a video streaming device, or any other suitable electronic device. In some examples, the source device and the sink device may include one or more wireless transceivers for wireless communication. The coding techniques described herein are applicable to video coding in various multimedia applications, including streaming video transmissions (e.g., over the internet), television broadcasts or transmissions, encoding digital video for storage on a data storage medium, decoding digital video stored on a data storage medium, or other applications. In some examples, system 100 may support one-way or two-way video transmission to support applications such as video conferencing, video streaming, video playback, video broadcasting, gaming, and/or video telephony.
The encoding device 104 (or encoder) may be used to encode video data, including virtual reality video data, by using a video coding standard or protocol to generate an encoded video bitstream. Video coding standards include ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual, and ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), including its scalable video coding and multiview video coding extensions (referred to as SVC and MVC, respectively). A more recent video coding standard, High Efficiency Video Coding (HEVC), has been completed by the Joint Collaborative Team on Video Coding (JCT-VC) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). Various extensions to HEVC handle multi-layer video coding and are also being developed by the JCT-VC, including the multiview extension to HEVC (referred to as MV-HEVC) and the scalable extension to HEVC (referred to as SHVC), or any other suitable coding protocol.
Implementations described herein describe examples using the HEVC standard or extensions thereof. However, the techniques and systems described herein may also be applicable to other coding standards, such as AVC, MPEG, extensions thereof, or other suitable coding standards that are already available or not yet available or developed. Accordingly, while the techniques and systems described herein may be described with reference to a particular video coding standard, one of ordinary skill in the art will appreciate that the description should not be construed as applying only to that particular standard.
The video source 102 may provide video data to the encoding device 104. Video source 102 may be part of a source device or may be part of a device other than a source device. The video source 102 may include a video capture device (e.g., a video camera, a camera phone, a video phone, or the like), a video archive containing stored video, a video server or content provider that provides video data, a video feed interface that receives video from the video server or content provider, a computer graphics system for generating computer graphics video data, a combination of such sources, or any other suitable video source. One example of the video source 102 may include an internet protocol camera (IP camera). The IP camera is a type of digital camera that may be used for surveillance, home security, or other suitable applications. Unlike analog Closed Circuit Television (CCTV) cameras, IP cameras can send and receive data over computer networks and the internet.
The video data from video source 102 may include one or more input pictures or frames. A picture or frame is a still image that is part of a video. The encoder engine 106 (or encoder) of the encoding device 104 encodes the video data to generate an encoded video bitstream. In some examples, an encoded video bitstream (or "video bitstream" or "bitstream") is a series of one or more coded video sequences. A Coded Video Sequence (CVS) includes a series of Access Units (AUs) that start with an AU that has a random access point picture in the base layer and that has certain properties, up to and not including a next AU that has a random access point picture in the base layer and that has certain properties. For example, certain properties of a random access point picture that starts a CVS may include a RASL flag (e.g., NoRaslOutputFlag) equal to 1. Otherwise, the random access point picture (with RASL flag equal to 0) does not start CVS. An Access Unit (AU) includes one or more coded pictures and control information corresponding to the coded pictures sharing the same output time. Coded slices of a picture are encapsulated at the bitstream level into data units called Network Abstraction Layer (NAL) units. For example, an HEVC video bitstream may include one or more CVSs that include NAL units. Two classes of NAL units exist in the HEVC standard, including Video Coding Layer (VCL) NAL units and non-VCL NAL units. VCL NAL units include one slice or slice segment of coded picture data (described below), and non-VCL NAL units include control information regarding one or more coded pictures.
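For reference, the two-byte HEVC NAL unit header can be decoded as sketched below (VCL NAL unit types occupy the range 0-31, non-VCL types 32-63); the example value shown is a VPS header.

```python
def parse_hevc_nal_header(header: bytes) -> dict:
    """Decode the 16-bit HEVC NAL unit header: forbidden_zero_bit (1 bit),
    nal_unit_type (6 bits), nuh_layer_id (6 bits), nuh_temporal_id_plus1 (3 bits)."""
    b0, b1 = header[0], header[1]
    nal_unit_type = (b0 >> 1) & 0x3F
    return {
        "nal_unit_type": nal_unit_type,
        "nuh_layer_id": ((b0 & 0x01) << 5) | (b1 >> 3),
        "temporal_id_plus1": b1 & 0x07,
        "is_vcl": nal_unit_type < 32,
    }

print(parse_hevc_nal_header(bytes([0x40, 0x01])))  # nal_unit_type 32 (VPS), non-VCL
```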
A NAL unit may contain a sequence of bits (e.g., an encoded video bitstream, a CVS of the bitstream, or the like) that forms a coded representation of video data (e.g., a coded representation of a picture in video). Encoder engine 106 generates a coded representation of the pictures by partitioning each picture into a plurality of slices. The slice is then partitioned into Coding Tree Blocks (CTBs) of luma samples and chroma samples. The CTB of a luma sample and one or more CTBs of chroma samples along with the syntax of the samples are referred to as Coding Tree Units (CTUs). The CTU is the basic processing unit for HEVC coding. A CTU may be split into multiple Coding Units (CUs) of different sizes. A CU contains an array of luma and chroma samples called Coding Blocks (CBs).
Luma and chroma CBs may be further split into Prediction Blocks (PBs). A PB is a block of samples of the luma or a chroma component that uses the same motion parameters for inter prediction. The luma PB and the one or more chroma PBs, together with associated syntax, form a Prediction Unit (PU). A set of motion parameters is signaled in the bitstream for each PU and is used for inter prediction of the luma PB and the one or more chroma PBs. A CB may also be partitioned into one or more Transform Blocks (TBs). A TB represents a square block of samples of a color component, to which the same two-dimensional transform is applied for coding the prediction residual signal. A Transform Unit (TU) represents the TBs of luma and chroma samples and the corresponding syntax elements.
The size of a CU corresponds to the size of a coding node and may be square in shape. For example, the size of a CU may be 8 × 8 samples, 16 × 16 samples, 32 × 32 samples, 64 × 64 samples, or any other suitable size up to the size of the respective CTU. The phrase "N × N" is used herein to refer to the pixel dimensions of a video block in terms of vertical and horizontal dimensions (e.g., 8 pixels by 8 pixels). The pixels in a block may be arranged in rows and columns. In some embodiments, a block may not have the same number of pixels in the horizontal direction as in the vertical direction. Syntax data associated with a CU may describe, for example, partitioning of the CU into one or more PUs. Partitioning modes may differ depending on whether the CU is intra-prediction mode encoded or inter-prediction mode encoded. PUs may be partitioned to be non-square in shape. Syntax data associated with a CU may also describe partitioning of the CU into one or more TUs, e.g., according to a CTU. A TU may be square or non-square in shape.
According to the HEVC standard, a transform may be performed using Transform Units (TUs). TU may vary for different CUs. A TU may be sized based on the size of a PU within a given CU. The TU may be the same size as the PU or smaller than the PU. In some examples, residual samples corresponding to a CU may be subdivided into smaller units using a quadtree structure referred to as a Residual Quadtree (RQT). The leaf nodes of the RQT may correspond to TUs. The pixel difference values associated with the TUs may be transformed to produce transform coefficients. The transform coefficients may then be quantized by the encoder engine 106.
Once a picture of video data is partitioned into CUs, encoder engine 106 predicts each PU using a prediction mode. The prediction is then subtracted from the original video data to get the residual (described below). For each CU, the prediction mode may be signaled inside the bitstream using syntax data. The prediction mode may include intra prediction (or intra-picture prediction) or inter prediction (or inter-picture prediction). When intra prediction is used, each PU is predicted from neighboring image data in the same picture, using, for example, DC prediction to find an average value for the PU, planar prediction to fit a planar surface to the PU, directional prediction to extrapolate from neighboring data, or any other suitable type of prediction. When inter prediction is used, each PU is predicted from image data in one or more reference pictures (either before or after the current picture in output order) using motion compensated prediction. A decision to code a picture region using inter-picture prediction or intra-picture prediction may be made, for example, at the CU level. In some examples, one or more slices of a picture are assigned a slice type. Slice types include an I slice, a P slice, and a B slice. An I slice (intra, independently decodable) is a slice of a picture that is coded only by intra prediction, and is therefore independently decodable, since the I slice requires only data within the frame to predict any block of the slice. A P slice (uni-directional predicted frame) is a slice of a picture that may be coded by intra prediction and by uni-directional inter prediction. Each block within a P slice is coded by either intra prediction or inter prediction. When inter prediction applies, a block is predicted by only one reference picture, and thus reference samples are only from one reference region of one frame. A B slice (bi-predictive frame) is a slice of a picture that may be coded by intra prediction and by inter prediction. A block of a B slice may be bi-predicted from two reference pictures, where each picture contributes one reference region and the sample sets of the two reference regions are weighted (e.g., with equal weights) to generate the prediction signal of the bi-predicted block. As explained above, the slices of one picture are coded independently. In some cases, a picture may be coded as only one slice.
The PU may include data related to a prediction process. For example, when a PU is encoded using intra prediction, the PU may include data describing an intra prediction mode for the PU. As another example, when the PU is encoded using inter mode, the PU may include data defining a motion vector for the PU. The data defining the motion vector for a PU may describe, for example, a horizontal component of the motion vector, a vertical component of the motion vector, a resolution of the motion vector (e.g., one-quarter pixel precision or one-eighth pixel precision), a reference picture to which the motion vector points, and/or a reference picture list (e.g., list 0, list 1, or list C) of the motion vector.
The encoding device 104 may then perform transformation and quantization. For example, after prediction, encoder engine 106 may calculate residual values corresponding to the PUs. The residual values may comprise pixel difference values. Any residual data remaining after prediction is performed is transformed using a block transform, which may be based on a discrete cosine transform, a discrete sine transform, an integer transform, a wavelet transform, or another suitable transform function. In some cases, one or more block transforms (e.g., of size 32 × 32, 16 × 16, 8 × 8, 4 × 4, or the like) may be applied to the residual data in each CU. In some embodiments, a TU may be used for the transform and quantization processes implemented by encoder engine 106. A given CU having one or more PUs may also include one or more TUs. As described in further detail below, the residual values may be transformed into transform coefficients using the block transforms, and may then be quantized and scanned using TUs to generate serialized transform coefficients for entropy coding.
In some embodiments, after intra-predictive or inter-predictive coding using PUs of the CU, encoder engine 106 may calculate residual data for the TUs of the CU. A PU may include pixel data in the spatial domain (or pixel domain). After applying the block transform, the TU may include coefficients in the transform domain. As mentioned previously, the residual data may correspond to pixel difference values between pixels of the unencoded picture and prediction values corresponding to the PU. Encoder engine 106 may form TUs that include residual data for the CU, and may then transform the TUs to generate transform coefficients for the CU.
The encoder engine 106 may perform quantization of the transform coefficients. Quantization provides further compression by quantizing the transform coefficients to reduce the amount of data used to represent the coefficients. For example, quantization may reduce the bit depth associated with some or all of the coefficients. In one example, coefficients having n-bit values may be reduced during quantization to m-bit values, where n is greater than m.
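As a simplified numeric illustration of this bit-depth reduction (a plain scalar quantizer, not the exact HEVC quantization formula):

```python
def quantize(coeff: int, qstep: int) -> int:
    # Larger quantization steps discard more precision, so an n-bit
    # coefficient is represented by a smaller m-bit level.
    return round(coeff / qstep)

def dequantize(level: int, qstep: int) -> int:
    return level * qstep

coeff = 1005                       # needs ~10 bits
level = quantize(coeff, qstep=16)  # 63, fits in ~6 bits
print(level, dequantize(level, qstep=16))  # 63 1008 -- lossy reconstruction
```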
After performing quantization, the coded video bitstream includes quantized transform coefficients, prediction information (e.g., prediction modes, motion vectors, or the like), partitioning information, and any other suitable data, such as other syntax data. Different elements of the coded video bitstream may then be entropy encoded by encoder engine 106. In some examples, encoder engine 106 may utilize a predefined scan order to scan the quantized transform coefficients to generate a serialized vector that may be entropy encoded. In some examples, the encoder engine 106 may perform adaptive scanning. After scanning the quantized transform coefficients to form a vector (e.g., a one-dimensional vector), encoder engine 106 may entropy encode the vector. For example, the encoder engine 106 may use context adaptive variable length coding, context adaptive binary arithmetic coding, syntax-based context adaptive binary arithmetic coding, probability interval partitioning entropy coding, or another suitable entropy encoding technique.
An output 110 of the encoding device 104 may send the NAL units that make up the encoded video bitstream data to a decoding device 112 of the receiver device via a communication link 120. An input 114 of the decoding device 112 may receive the NAL units. The communication link 120 may include a channel provided by a wireless network, a wired network, or a combination of wired and wireless networks. The wireless network may include any wireless interface or combination of wireless interfaces, and may include any suitable wireless network (e.g., the Internet or other wide area network, a packet-based network, WiFi™, radio frequency (RF), UWB, WiFi-Direct, cellular, Long Term Evolution (LTE), WiMax™, or the like). The wired network may include any wired interface (e.g., fiber optic, Ethernet, powerline Ethernet, Ethernet over coaxial cable, Digital Subscriber Line (DSL), or the like). Wired and/or wireless networks may be implemented using various equipment, such as base stations, routers, access points, bridges, gateways, switches, or the like. The encoded video bitstream data may be modulated according to a communication standard, such as a wireless communication protocol, and transmitted to the receiver device.
In some examples, encoding device 104 may store the encoded video bitstream data in memory 108. The output 110 may retrieve the encoded video bitstream data from the encoder engine 106 or from the memory 108. Memory 108 may comprise any of a variety of distributed or locally accessed data storage media. As an example, the memory 108 may be an internal storage device that is part of the encoding device 104. As another example, memory 108 may also be associated with other devices or equipment coupled to communication link 120. In all such examples, memory 108 may include a hard disk, a storage disk, flash memory, volatile or non-volatile memory, or any other suitable digital storage medium for storing encoded video data.
The input 114 of the decoding device 112 receives the encoded video bitstream data and may provide the video bitstream data to the decoder engine 116, or to the memory 118 for later use by the decoder engine 116. The decoder engine 116 may decode the encoded video bitstream data by entropy decoding (e.g., using an entropy decoder) and extracting elements of one or more coded video sequences that make up the encoded video data. The decoder engine 116 may then rescale the encoded video bitstream data and perform an inverse transform on the encoded video bitstream data. The residual data is then passed to the prediction stage of the decoder engine 116. The decoder engine 116 then predicts a block of pixels (e.g., PU). In some examples, the prediction is added to the output of the inverse transform (residual data).
Decoding device 112 may output the decoded video to video destination device 122, which may include a display or other output device for displaying the decoded video data to a consumer of the content. In some aspects, video destination device 122 may be part of the receiver device that includes decoding device 112. In some aspects, video destination device 122 may be part of a device separate from the receiver device.
Supplemental Enhancement Information (SEI) messages may be included in a video bitstream. For example, SEI messages may be used to carry information (e.g., metadata) not necessary for decoding of the bitstream by decoding device 112. This information is used to improve the display or processing of the decoded output (e.g., such information may be used by decoder-side entities to improve the visibility of the content).
In some embodiments, video encoding device 104 and/or video decoding device 112 may be integrated with an audio encoding device and an audio decoding device, respectively. The video encoding device 104 and/or the video decoding device 112 may also include other hardware or software necessary to implement the coding techniques described above, such as one or more microprocessors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof. Video encoding device 104 and video decoding device 112 may be integrated as part of a combined encoder/decoder (codec) in the respective devices.
Extensions to the HEVC standard include the multiview video coding extension (referred to as MV-HEVC) and the scalable video coding extension (referred to as SHVC). The MV-HEVC and SHVC extensions share the concept of layered coding, with different layers included in the encoded video bitstream. Each layer in a coded video sequence is addressed by a unique layer identifier (ID). The layer ID may be present in the header of a NAL unit to identify the layer with which the NAL unit is associated. In MV-HEVC, different layers may represent different views of the same scene in the video bitstream. In SHVC, different scalable layers are provided that represent the video bitstream at different spatial resolutions (or picture resolutions) or at different reconstruction fidelities. The scalable layers may include a base layer (having a layer ID of 0) and one or more enhancement layers (having layer IDs of 1, 2, … n). The base layer may conform to a profile of the first version of HEVC and represents the lowest available layer in the bitstream. The enhancement layers have increased spatial resolution, temporal resolution or frame rate, and/or reconstruction fidelity (or quality) compared to the base layer. The enhancement layers are hierarchically organized and may (or may not) depend on lower layers. In some examples, different layers may be coded using a single-standard codec (e.g., all layers are encoded using HEVC, SHVC, or another coding standard). In some examples, different layers may be coded using a multi-standard codec. For example, a base layer may be coded using AVC, while one or more enhancement layers may be coded using the SHVC and/or MV-HEVC extensions to the HEVC standard. In general, a layer includes a set of VCL NAL units and a corresponding set of non-VCL NAL units. The NAL units are assigned a particular layer ID value. Layers may be hierarchical in the sense that a layer may depend on lower layers.
As previously described, an HEVC bitstream includes a group of NAL units, including VCL NAL units and non-VCL NAL units. Non-VCL NAL units may contain, among other information, parameter sets with high-level information relating to the encoded video bitstream. For example, parameter sets may include a Video Parameter Set (VPS), a Sequence Parameter Set (SPS), and a Picture Parameter Set (PPS). Examples of goals of the parameter sets include bit rate efficiency, error resiliency, and providing system layer interfaces. Each slice references a single active PPS, SPS, and VPS to access information that the decoding device 112 may use to decode the slice. An identifier (ID) may be coded for each parameter set, including a VPS ID, an SPS ID, and a PPS ID. An SPS includes an SPS ID and a VPS ID. A PPS includes a PPS ID and an SPS ID. Each slice header includes a PPS ID. Using the IDs, the active parameter sets for a given slice can be identified.
VCL NAL units include the coded picture data that forms the coded video bitstream. Various types of VCL NAL units are defined in the HEVC standard. In a single-layer bitstream, as defined in the first HEVC standard, the VCL NAL units contained in an AU have the same NAL unit type value, where the NAL unit type value defines the type of the AU and the type of coded picture within the AU. For example, the VCL NAL units of a particular AU may include Instantaneous Decoding Refresh (IDR) NAL units (value 19), making the AU an IDR AU and the coded picture of the AU an IDR picture. A given type of VCL NAL unit relates to the picture, or portion thereof, contained in the VCL NAL unit (e.g., a slice or slice segment of a picture in the VCL NAL unit). Three classes of pictures are defined in the HEVC standard, including leading pictures, trailing pictures, and intra random access point (IRAP) pictures (also referred to as "random access pictures"). In a multi-layer bitstream, the VCL NAL units of a picture within an AU have the same NAL unit type value and the same type of coded picture. For example, a picture containing VCL NAL units of type IDR is said to be an IDR picture in the AU. In another example, when an AU contains a picture that is an IRAP picture at the base layer (with layer ID equal to 0), the AU is an IRAP AU.
The video bitstream encoded as discussed above may be written or encapsulated into one or more files in order to transfer the bitstream from the encoding device 104 to the decoding device 112. For example, the output 110 may include a file writing engine configured to generate one or more files containing the bitstream. The output 110 may transmit the one or more files to the decoding device 112 via the communication link 120. Alternatively or additionally, the one or more files may be stored on a storage medium (e.g., a tape, a disk, a hard drive, or some other medium) for later transmission to the decoding device 112.
The decoding device 112 may include a file parsing engine, for example, in the input 114. The file parsing engine may read files received via the communication link 120 or from a storage medium. The file parsing engine may further extract samples from the file and reconstruct the bitstream for decoding by the decoder engine 116. In some cases, the reconstructed bitstream may be the same as the bitstream generated by encoder engine 106. In some cases, the encoder engine 106 may have generated the bitstream with several possible options for decoding it, in which case the reconstructed bitstream may include only one of, or fewer than all of, the possible options.
As discussed above, the media file and/or media bitstream may include damaged and/or missing video frames. In fig. 1, corrupted or missing video frames may occur due to corruption of a data file, for example, including encoded video bitstream data stored in memory 108 and/or due to data loss during transmission of the data file over communication link 120. A video frame may become lost when all of the encoded data (e.g., video coding layers, sets of motion parameters, control information, transform information, etc.) for the entire frame is lost. Video frames may become corrupted for a variety of reasons. For example, some (but not all) of the encoded data (e.g., video coding layers, sets of motion parameters, control information, etc.) for the particular frame may be corrupted or otherwise not retrievable from the data file. As another example, encoded data for a reference frame in an inter-prediction chain of video frames may become lost or corrupted such that the video frame may not be correctly decoded.
Fig. 2 illustrates an example of an ISO base media file 200 containing data and metadata for a video presentation formatted according to ISOBMFF. ISOBMFF is designed to contain timed media information in a flexible and extensible format that facilitates the interchange, management, editing, and presentation of media. The presentation of the media may be "local" to the system containing the presentation, or the presentation may be via a network or other streaming delivery mechanism.
"rendered" as defined by the ISOBMFF specification is a sequence of pictures that are related, often by having been captured sequentially by a video capture device, or related for some other reason. A presentation may also be referred to herein as a movie or video presentation. The presentation may include audio. One of ordinary skill in the art will appreciate that the presentation may include any other type of media content, such as game television programming, streaming video files, or the like. A single presentation may be contained in one or more files, where a file (or files) contains metadata for the entire presentation. Metadata includes information such as timing and framing data, descriptors, metrics, parameters, and other information describing the presentation. The metadata itself does not contain video and/or audio data. Files other than those containing metadata need not be formatted according to ISOBMFF, and need only be formatted so that such files can be referenced by metadata.
The file structure of an ISO base media file is object-oriented, and the structure of an individual object in the file can be inferred directly from the object's type. The ISOBMFF specification refers to the objects in an ISO base media file as "boxes." An ISO base media file is structured as a sequence of boxes, which may contain other boxes. A box generally includes a header that provides the size and type of the box. The size describes the entire size of the box, including the header, the fields, and all boxes contained within the box. Boxes of types that are not recognized by a player device are typically ignored and skipped.
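In simplified form, the generic box header defined in ISO/IEC 14496-12 may be sketched in that specification's syntax description language as follows (details such as the 64-bit largesize and the "uuid" user-extension type are summarized only in comments):

    aligned(8) class Box (unsigned int(32) boxtype) {
        unsigned int(32) size;           // entire size of the box, including header, fields, and contained boxes
        unsigned int(32) type = boxtype; // four-character code identifying the box
        // if size == 1, a 64-bit largesize field follows; if size == 0, the box extends to the end of the file
        // if type == 'uuid', a 16-byte extended user type follows
    }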
As illustrated by the example of fig. 2, at the top level of the file, the ISO base media file 200 may include a file type box 210, a movie box 220, and one or more movie fragment boxes 230a, 230b through 230n. Other boxes that may be included at this level, but that are not represented in this example, include free space boxes, metadata boxes, and media data boxes, among others.
An ISO base media file may contain a file type box 210, identified by the box type "ftyp". The file type box 210 identifies the ISOBMFF specification that is the most suitable for parsing the file. "Most" in this case means that the ISO base media file 200 may have been formatted according to a particular ISOBMFF specification, but is likely to be compatible with other iterations of the specification. This most suitable specification is referred to as the major brand. A player device may use the major brand to determine whether the device is capable of decoding and displaying the contents of the file. The file type box 210 may also include a minor version number, which may be used to indicate a version of the ISOBMFF specification. The file type box 210 may also contain a list of compatible brands, which is a list of other brands with which the file is compatible. An ISO base media file can be compatible with more than one major brand.
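For reference, the file type box defined in ISO/IEC 14496-12 may be sketched roughly as follows (a simplified sketch of the standard definition):

    aligned(8) class FileTypeBox extends Box('ftyp') {
        unsigned int(32) major_brand;          // brand best suited for parsing the file
        unsigned int(32) minor_version;        // informative version of the major brand
        unsigned int(32) compatible_brands[];  // list of compatible brands, to the end of the box
    }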
When the ISO base media file 200 contains a file type box 210, there is only one file type box. The ISO base media file 200 may omit the file type box 210 in order to be compatible with older player devices. When the ISO base media file 200 does not include a file type box 210, a player device may assume a default major brand (e.g., "mp41"), minor version (e.g., "0"), and compatible brand (e.g., "mp41"). The file type box 210 is typically placed as early as possible in the ISO base media file 200.
The ISO base media file may further include a movie box 220, which may contain the metadata for the presentation. The movie box 220 is identified by the box type "moov". ISO/IEC 14496-12 specifies that a presentation, whether contained in one file or multiple files, may include only one movie box 220. Frequently, the movie box 220 is near the beginning of the ISO base media file. The movie box 220 includes a movie header box 222, and may include one or more track boxes 224, as well as other boxes.
The movie header box 222, identified by the box type "mvhd", may contain information that is media-independent and relevant to the presentation as a whole. For example, the movie header box 222 may include information such as a creation time, a modification time, a time scale, and/or a duration for the presentation, among other things. The movie header box 222 may also include an identifier that identifies the next track in the presentation. In the illustrated example, the identifier may point to the track box 224 contained by the movie box 220.
The track box 224, identified by the box type "trak", may contain information for a track of the presentation. A presentation may include one or more tracks, where each track is independent of the other tracks in the presentation. Each track may include temporal and spatial information specific to the content in the track, and each track may be associated with a media box. The data in a track may be media data, in which case the track is a media track; or the data may be packetization information for a streaming protocol, in which case the track is a hint track. Media data includes, for example, video and audio data. In the illustrated example, the example track box 224 includes a track header box 224a and a media box 224b. A track box may include other boxes, such as a track reference box, a track group box, an edit box, a user data box, a metadata box, and others.
The track header box 224a, identified by the box type "tkhd", may specify the characteristics of the track contained in the track box 224. For example, the track header box 224a may include a creation time, a modification time, a duration, a track identifier, a layer identifier, a group identifier, a volume, a width, and/or a height of the track, among other things. For a media track, the track header box 224a may further identify whether the track is enabled, whether the track should be played as part of the presentation, or whether the track can be used to preview the presentation, among other things. The presentation of a track is generally assumed to begin at the start of the presentation. The track box 224 may include an edit list box, not illustrated here, which may include an explicit timeline map. The timeline map may specify, among other things, an offset time for the track, where the offset indicates the start time of the track after the beginning of the presentation.
In the illustrated example, the track box 224 also includes a media box 224b, identified by the box type "mdia". The media box 224b may contain objects and information about the media data in the track. For example, the media box 224b may contain a handler reference box that identifies the media type of the track and the process by which the media in the track is presented. As another example, the media box 224b may contain a media information box that specifies the characteristics of the media in the track. The media information box may further include a table of samples, where each sample describes a chunk of media data (e.g., video or audio data) and includes, for example, the location of the data for the sample. The data for a sample is stored in a media data box, discussed further below. As with most other boxes, the media box 224b may also include a media header box. The metadata for each track includes a list of sample description entries, each providing the coding format and/or encapsulation format used in the track and the initialization data needed to process that format. Each sample is associated with one of the sample description entries of the track. ISOBMFF specifies sample-specific metadata through various mechanisms. Specific boxes within the sample table box ("stbl") have been standardized to respond to common needs. For example, a sync sample box ("stss") is used to list the random access samples of the track. The sample grouping mechanism enables the mapping of samples, according to a four-character grouping type, into groups of samples sharing the same property, specified as a sample group description entry in the file. Several grouping types have been specified in ISOBMFF.
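As a point of reference, the sync sample box mentioned above has roughly the following form in the ISO/IEC 14496-12 syntax description language (simplified sketch):

    aligned(8) class SyncSampleBox extends FullBox('stss', 0, 0) {
        unsigned int(32) entry_count;
        for (i = 0; i < entry_count; i++) {
            unsigned int(32) sample_number;  // 1-based number of a sync (random access) sample in the track
        }
    }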
In the illustrated example, the example ISO base media file 200 also includes a plurality of segments 230a, 230b through 230n of the presentation. The segments 230a, 230b through 230n are not ISOBMFF boxes, but rather describe a combination of boxes that includes an optional segment type box 231, a movie fragment box 232, and one or more media data boxes 238 referenced by the movie fragment box 232. The segment type box 231, the movie fragment box 232, and the media data boxes 238 are top-level boxes, but are grouped here to indicate the relationship between a movie fragment box 232 and a media data box 238.
The segment type box 231, the movie fragment box 232, and the media data box 238 define an ISOBMFF segment. The segment type box 231 is identified by the box type "styp" and defines the brand of the segment. The brand of the segment needs to be one of the compatible brands listed in the file type box 210. The segment type box 231 is followed by a movie fragment box 232, identified by the box type "moof", which may extend the presentation by including additional information that would otherwise be stored in the movie box 220. Using movie fragment boxes 232, a presentation can be built up incrementally. The movie fragment box 232 may include a movie fragment header box 234 and a track fragment box 236, as well as other boxes not illustrated here.
The movie fragment header box 234, identified by the box type "mfhd", may contain a sequence number. A player device may use the sequence number to verify that the segment 230a contains the next segment of data for the presentation. In some cases, the contents of a file, or the files for a presentation, may be provided to the player device out of order. For example, network packets may frequently arrive in an order other than the order in which the packets were originally transmitted. In such cases, the sequence numbers can assist the player device in determining the correct order of the segments.
The movie fragment box 232 may also include one or more track fragment boxes 236, identified by the box type "traf". A movie fragment box 232 may contain a set of track fragments (zero or more per track). A track fragment may contain zero or more track runs, each of which describes a contiguous run of samples for that track. In addition to adding samples to the track, a track fragment may be used to add empty time to the track.
The media data box 238, identified by the box type "mdat", contains media data. In a video track, the media data box 238 would contain video frames. The media data box may alternatively or additionally contain audio data. A presentation may include zero or more media data boxes, contained in one or more individual files. The media data is described by metadata. In the illustrated example, the media data in the media data box 238 may be described by the metadata included in the track fragment box 236. In other examples, the media data in a media data box may be described by metadata in the movie box 220. The metadata may refer to particular media data by an absolute offset within the file 200, such that a media data header and/or free space within the media data box 238 can be skipped.
Other segments 230b through 230n in the ISO base media file 200 may contain boxes similar to those described for segment 230a, and/or may contain other boxes.
Fig. 3 illustrates an example of a media box 340 that may be included in an ISO base media file. As discussed above, a media box may be included in a track box and may contain objects and information describing the media data in the track. In the illustrated example, the media box 340 includes a media information box 342. The media box 340 may also include other boxes not illustrated here.
Media information box 342 may contain objects that describe property information about the media in the track. For example, media information box 342 may include a data information box that describes the location of the media information in the track. As another example, when the track includes video data, media information box 342 may include a video media header. Video media headers may contain general presentation information independent of the coding of the video media. When the track includes audio data, media information box 342 may also include an audio media header.
The media information box 342 may also include a sample table box 344, as provided in the illustrated example. The sample table box 344, identified by the box type "stbl", may provide the locations of the media samples in the track (e.g., locations within a file), as well as timing information for the samples. Using the information provided by the sample table box 344, a player device can locate samples in the correct time order, determine the type of a sample, and/or determine the size of a sample, its container, and the offset of the sample within that container, among other things.
The sample table box 344 may include a sample description box 346, identified by the box type "stsd". The sample description box 346 may provide detailed information regarding, for example, the coding type used for a sample, and any initialization information needed for that coding type. The information stored in the sample description box may be specific to the type of track that includes the samples. For example, one format may be used for the sample description when the track is a video track, and a different format may be used when the track is a hint track. As a further example, the format of the sample description may also vary depending on the format of the hint track.
The sample description box 346 may include sample entry boxes 348a through 348n. The sample entry is an abstract class, and thus the sample description box typically includes a specific sample entry, such as a visual sample entry for video data or an audio sample entry for audio samples, among other examples. A visual sample entry for video data may include one or more video frames. A sample entry box may store parameters for a particular sample. For example, for a video sample, the sample entry box may include a width, a height, a horizontal resolution, a vertical resolution, a frame count, and/or a depth for the video sample, among other things. As another example, for an audio sample, the sample entry may include a channel count, a channel layout, and/or a sampling rate, among other things.
In the illustrated example, the first sample entry 348a includes a sample size box 350, identified by the box type "stsz". The sample size box may indicate the number of bytes in a sample. For example, for a visual sample, the sample size may indicate the number of bytes of data included in the one or more video frames of the visual sample. The first sample entry 348a also includes a scheme type box 352, identified by the box type "schm", which may define a sample entry type to indicate the type of data in the sample. The sample entry type information can assist a decoder in determining how to handle the sample data.
In addition to the sample entry boxes, the sample description box 346 may further include a sample group description box 360 (identified by the box type "sgpd") and a sample-to-group box 362 (identified by the box type "sbgp"). The sample group description box 360 and the sample-to-group box 362 may both be part of a sample grouping mechanism that groups a set of sample entries based on a predetermined characteristic associated with the sample group description box 360. For example, the sample group description box 360 may include an entry of a predetermined grouping type. Sample entries associated with the predetermined grouping type (based on certain common characteristics shared by the sample entries) may be mapped to that grouping type entry in the sample-to-group box 362.
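For context, the sample-to-group box used by this mechanism is defined in ISO/IEC 14496-12 roughly as follows (a simplified sketch); each entry maps a run of consecutive samples to an entry in the sample group description box of the same grouping type:

    aligned(8) class SampleToGroupBox extends FullBox('sbgp', version, 0) {
        unsigned int(32) grouping_type;                // matches a SampleGroupDescriptionBox of the same type
        if (version == 1) { unsigned int(32) grouping_type_parameter; }
        unsigned int(32) entry_count;
        for (i = 1; i <= entry_count; i++) {
            unsigned int(32) sample_count;             // number of consecutive samples in this run
            unsigned int(32) group_description_index;  // 1-based index into the group description box; 0 = no group
        }
    }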
In addition to supporting local playback of media, ISOBMFF includes support for streaming media data over a network. One or more files comprising one movie presentation may include an additional track, referred to as a hint track, containing instructions that may assist the streaming server in forming and transmitting the one or more files as packets. For example, such instructions may include data (e.g., header information) or references to segments of media data for the server to send. The file may contain separate hint tracks for different streaming protocols. The hint track can also be added to a file without the need to reformat the file.
Referring now to fig. 4, an example system 400 for streaming is illustrated. The system 400 includes a server 402 and a client device 404 communicatively coupled to each other based on a network connection protocol via a network 406. For example, server 402 may comprise a conventional HTTP web server, while client device 404 may comprise a conventional HTTP client. An HTTP communication channel may be established and may be used by client device 404 to transmit HTTP requests to server 402 to request certain network resources. The HTTP communication channel may be used by the server 402 to transmit an HTTP response including the requested network resource back to the client device 404. One network resource hosted by the server 402 may be media content, which may be divided into media segments. Client device 404 may include a streaming application 408 to establish a streaming session with server 402 via network 406. During the streaming session, the streaming application 408 may transmit a request for one or more media segments to the request processor 410 of the server 402 via the network 406. The streaming application 408 may receive the requested one or more media segments and may render some or all of the received media segments on the client device 404 prior to transmitting a subsequent request for a subsequent media segment. With this arrangement, the streaming application 408 need not wait until the download of the entire media content is complete before rendering the media content at the client device 404, which can facilitate utilization of network resources and improve the user experience.
To enable high-quality streaming of media content using conventional HTTP web servers, adaptive bitrate streaming may be used. With adaptive bitrate streaming, for each media segment, the client device 404 may have information about sets of alternative segment files 420 and 440. Here, a media segment may refer to a portion of the media bitstream associated with a particular playback time stamp and duration. Each set of alternative segment files 420 and 440 may correspond to a particular representation of a media segment (e.g., associated with a particular playback time stamp and duration). A representation may refer to a particular result of encoding certain media content (e.g., at a particular bit rate, frame rate, screen size, and/or other suitable media characteristics). Here, different representations of a media segment refer to different results of encoding the media content of that media segment. A representation may include one or more sub-representations. A sub-representation may include, for example, information specifying encoding results (e.g., codecs, languages, embedded lower-quality video layers, and/or other media characteristics) that can be used to decode and/or extract media content from the segment files of the representation. In each set of alternative segment files, each media segment file may be associated with a set of properties including, for example, a particular bit rate, frame rate, resolution, audio language, and/or other suitable media characteristics specified in the sub-representation. Based on local information (e.g., the bandwidth of the network 406, the decoding/display capabilities of the client device 404, user preferences, etc.), the streaming application 408 may select, for each media segment, a particular media segment file from the set of alternatives. As an illustrative example, the client device 404 may transmit a request for a media segment file associated with a first resolution from the set of segment files 420. Subsequently, due to a change in the bandwidth of the network 406, the client device 404 may transmit another request for a media segment file associated with a second resolution.
The information about the sets of alternative segment files 420 and 440 may be part of a description file 460 maintained by the server 402. The client device 404 may obtain the description file 460 from the server 402 and may transmit requests for media segment files based on the description file 460. The description file 460 may include, for example, a list of the alternative media segment files for each representation of the media content, together with the set of properties (e.g., bit rate, frame rate, resolution, audio language, and/or other suitable media characteristics) associated with each alternative media segment file. The list also includes location identifiers (e.g., Uniform Resource Locators (URLs), Uniform Resource Identifiers (URIs), and/or other suitable identifiers) associated with the storage locations of the alternative segment files.
Various protocols exist for adaptive bitrate streaming. One example is Dynamic Adaptive Streaming over HTTP (DASH), defined in ISO/IEC 23009-1:2014 and also known as MPEG-DASH. With DASH, the description file 460 may be a Media Presentation Description (MPD) file.
Fig. 5 provides an example of an MPD 500. As shown in fig. 5, the MPD 500 includes one or more adaptation sets (e.g., adaptation set 510) provided in a list representation. Adaptation set 510 may be associated with a start timestamp and a playback duration, and may include a set of representations 512a and 512b. Each of the representations 512a and 512b may include a set of media segments. The media segments of representation 512a and the media segments of representation 512b may be encoded from the same content source and may be associated with different bit rates, resolutions, frame rates, and/or other suitable media characteristics. For example, representation 512a includes media segments 516a and 518a, while representation 512b includes media segments 516b and 518b. Media segments 516a and 518a may be associated with media properties (e.g., resolution, bit rate, or the like) that are different from the media properties of media segments 516b and 518b.
Moreover, each representation may also include one or more sub-representations. For example, representation 512a may include sub-representation 520a, while representation 512b may include sub-representation 520b. As discussed above, a sub-representation may include, for example, information specifying encoding results (e.g., codecs, languages, embedded lower-quality video layers, and/or other media characteristics) that can be used to decode and/or extract media content from the segment files of the representation that contains the sub-representation.
In addition, each representation may also include attribute information that signals the media characteristics of the media segments included in the representation. For example, representation 512a includes representation attributes 514a, while representation 512b includes representation attributes 514b. Each of the representation attributes 514a and 514b may include information such as, for example, bandwidth, frame width, frame height, combinations thereof, and/or other attribute information.
The MPD may be represented in extensible markup language (XML). An MPD file in XML format may provide a list representation of adaptation sets and include a set of elements to define each adaptation set. Each of the set of elements may be associated with a set of attributes that define properties of, for example, an adaptation set, a representation, and the like. The following is an example of a portion of MPD 500 of fig. 5:
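As an illustrative sketch only (the URLs, the exact nesting, and the attribute values shown here are examples chosen to match the discussion that follows, not a reproduction of the original listing), such a portion of the MPD 500 might appear as:

    <Period duration="PT30S">
      <AdaptationSet mimeType="video/mp2t">
        <Representation id="video-720p" bandwidth="3200000" width="1280" height="720">
          <SubRepresentation contentComponent="audio" bandwidth="128000" codecs="mp4a.40.2"/>
          <SegmentList>
            <SegmentURL media="segment-1.ts"/>
            <SegmentURL media="segment-2.ts"/>
          </SegmentList>
        </Representation>
      </AdaptationSet>
    </Period>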
In the above example, text such as "Period", "AdaptationSet", "Representation", "SubRepresentation", and "SegmentURL" are elements, while "duration", "mimeType", "id", "bandwidth", "width", "height", and "media" are attributes. In this example, an adaptation set (e.g., adaptation set 510) may be associated with, for example, an mp2t video stream (based on the "mimeType" attribute) having a 30-second duration (based on the "duration" attribute). Further, the adaptation set may include a representation (e.g., representation 512a) associated with a bandwidth of 3.2M and having a frame width of 1280 and a frame height of 720. The bandwidth, frame width, and frame height information may be included in the representation attributes 514a. The representation may include a sub-representation (e.g., sub-representation 520a) that specifies the codec and bandwidth of an audio component. The representation may also contain multiple segments, each of which is identified by a URL provided by a "SegmentURL" element. Segments may be associated with the representation or grouped according to different sub-representations.
Another example of adaptive bitrate streaming is HTTP real-time streaming (HLS), which provides streaming of file segments associated with the Transport Stream (TS) format. The transport stream specifies a container format that encapsulates packetized elementary streams (PES). Each PES comprises an encapsulation of sequential data bytes from a video or audio decoder into PES packets. With HLS, the server may provide a set of playlist files, each of which contains links to a sequence of file segments in TS format and is associated with a particular bit rate. A playlist file may be in the m3u8 format and includes a set of tags and attributes that provide a list representation of the media segment files. A variant playlist file may refer to a set of playlist files, each of which may be associated with a set of media segment files for the same presentation (e.g., the same sequence of video frames), and each of which may be associated with a different bit rate. A receiver device may obtain the variant playlist file and select a playlist file associated with a particular bandwidth, bit rate, frame rate, etc., based on local conditions (e.g., network bandwidth). The receiver device may then use the information in the selected playlist file to obtain the media segment files for streaming.
The following is an example of an HLS variant playlist:
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=300000
prog_200kbs.m3u
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=600000
prog_400kbs.m3u
Here, the text "#EXT-X-STREAM-INF" is a tag that provides some information about the structure of the playlist. For example, "#EXT-X-STREAM-INF" indicates that the URL that follows (e.g., "prog_200kbs.m3u") is a playlist file. Tags may also be associated with attributes. For example, "#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=300000" indicates that the playlist that follows is associated with a stream having an upper bandwidth limit of 300000 bits per second and a program identifier of 1.
An example of a playlist file referenced by the above HLS variant playlist may be as follows:
#EXTINF:10.0,
http://example.com/movie1/fileSequenceA.ts
#EXTINF:10.0,
http://example.com/movie1/fileSequenceB.ts
...
Here, the text "#EXTINF" is also a tag that provides some information about the structure of the playlist. For example, "#EXTINF" may be a record marker describing the media file identified by the URL that follows it. Tags may also be associated with attributes. For example, "#EXTINF:10.0" indicates that the media segment file that follows has a duration of 10 seconds. Fig. 6 provides a graphical representation of an example of a variant playlist file and a set of playlist files referenced by the variant playlist file. A playlist file may include information for each media segment, such as a URL (e.g., "abc.ts") and an associated duration (e.g., 10 seconds).
Fig. 7A illustrates an example of signaling a corrupted frame in an ISO base media file format (ISOBMFF) file. The media box 740 shown in fig. 7A is an example of a media box that may be included in an ISOBMFF file. The ISOBMFF file may be generated or updated, for example, by a streaming server, by an intermediate network device between a hosting server and a receiver device, by a receiver device, or by any other device that encapsulates encoded data into a media file. In the illustrated example, the media box 740 includes a media information box 742, which includes a sample table 744. The sample table 744 includes a sample description 746 ("stsd"), which in turn may include sample entries 748a through 748n, and so on. The sample entry 748a may include a sample size box 750 ("stsz") and a scheme type box 752 ("schm"). Optionally, the sample description 746 may also include a sample group description box 760 ("sgpd") and a sample-to-group box 762 ("sbgp"). Unless otherwise specified, the properties of these boxes may be the same as those of the corresponding boxes in fig. 3, and their description is not repeated here.
In the example of fig. 7A, sample entry 748a may include a video sample that includes one or more corrupted video frames. As mentioned previously, a corrupted media frame is a media frame that cannot be decoded correctly because the video data of the corrupted frame was only partially received, because data associated with or in a media frame of the inter-prediction chain is missing, or because of other factors that cause the video frame to become non-decodable. If a decoder attempts to decode a corrupted media frame, which is unexpected behavior in the decoding process, the result may be a decoder crash, errors, or other negative outcomes. To signal the presence of a corrupted video frame in sample entry 748a, the scheme type box 752 of sample entry 748a may carry a sample entry type associated with corrupted video frames. The sample entry type may be identified by a four-character code (in the example of fig. 7A, the four-character code may be defined as "crpt"). If, on the other hand, sample entry 748a includes a video sample with missing video frames, sample entry 748a may include another code identifier, such as "lost", "empty", or another suitable code identifier, to indicate the occurrence of a missing video frame in that particular sample entry.
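As an illustration only, the scheme type box that would carry such a code has the following general form in ISO/IEC 14496-12; the use of "crpt" as the scheme_type value here follows the example above and is an assumption rather than a normative definition:

    aligned(8) class SchemeTypeBox extends FullBox('schm', 0, flags) {
        unsigned int(32) scheme_type;      // four-character code; e.g., 'crpt' to signal corrupted frames (assumed code)
        unsigned int(32) scheme_version;   // version of the scheme used to create the content
        if (flags & 0x000001) {
            unsigned int(8) scheme_uri[];  // browser-readable URI describing the scheme
        }
    }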
Fig. 7B illustrates another example of signaling a corrupted frame in an ISOBMFF file. For example, the media box 740 may be included in an ISOBMFF file. Assuming that all of the video frames of sample entry 748a are corrupted (i.e., none of the video frames of sample entry 748a can be decoded), the video sample corresponding to sample entry 748a may be omitted from the ISOBMFF file by the application that generates the ISOBMFF file by encapsulating the encoded media bitstream. That is, in the example of fig. 7B, sample entry 748a is omitted from (and not present in) the sample table 744. The omission of sample entry 748a may be detected by the receiver device based on, for example, a gap in the position and timing information for the sequence of samples listed in the sample table box 744.
In the examples of fig. 7A and 7B, the receiver device can detect a corrupted frame when decapsulating the ISOBMFF file. Based on the sample entry type information, or on the omission of a particular sample entry, the receiver device can directly obtain the exact file location of the corrupted video frame (e.g., which track and which sample include the corrupted video frame), as well as timing information for the corrupted and non-decodable video frame. The receiver device can then perform a predetermined file handling procedure to handle the corrupted video frame in an efficient manner. For example, based on the indication of the corrupted media frames in the file, the receiver device does not need to perform additional computational steps, such as translating byte positions into track and timing information, to find the exact file location of the corrupted video frame. Moreover, because of the indication of the corrupted frame, the receiver device can also be prevented from attempting to decode the corrupted video frame, which can prevent a decoder crash, error messages, and/or other undesirable results that could harm the decoding process. All of this can facilitate correct handling of corrupted video frames and improve the user experience.
Figs. 8A and 8B illustrate an example of signaling a lost frame in an ISOBMFF file. Fig. 8A illustrates an example of the top level of an ISO base media file 800. The ISO base media file may be generated or updated by, for example, a streaming server, an intermediate network device between a hosting server and a receiver device, or another device that encapsulates encoded data into a media file. In the example of fig. 8A, the media file 800 may include a file type box 810, a movie box 820, and one or more movie fragment boxes 830a, 830b through 830n. Other boxes that may be included at this level, but that are not represented in this example, include free space boxes, metadata boxes, and media data boxes, among others. The movie box 820 includes a movie header box 822 and may include one or more track boxes 824, among other boxes. The track box 824 includes a track header box 824a and a media box 824b. The segment 830a includes a segment type box 831 and a movie fragment box 832. Unless otherwise specified, the properties of these boxes may be the same as those of the corresponding boxes in fig. 2, and their description is not repeated here.
Assuming that segment 830a contains an empty segment, the segment type box 831 can carry a brand identifier associated with the empty segment. The brand identifier may be one of the compatible brands listed in the file type box 810. In the example of fig. 8A, this brand identifier may be the four-character code "empt". In addition, the media data box associated with the empty segment may be omitted from the segment 830a.
In addition, the media box 824b may also include an indicator of the empty segment. Referring now to fig. 8B, an example of the media box 824b of fig. 8A is illustrated. As shown in fig. 8B, the media box 824b includes a media information box 842, which includes a sample table 844. The sample table 844 includes a sample description 846, which in turn may include sample entries 848a through 848n, and so on. Sample entry 848a may include a sample size box 850 and a scheme type box 852. Optionally, the sample description 846 may also include a sample group description box 860 and a sample-to-group box 862. Unless otherwise specified, the properties of these boxes may be the same as those of the corresponding boxes in fig. 3, and their description is not repeated here.
Here, assuming that sample entry 848a is part of the empty segment defined by the segment type box 831 and the movie fragment box 832, the sample size box 850 of sample entry 848a may carry a value of zero to indicate that the sample has zero size.
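For reference, the sample size box has roughly the following form in ISO/IEC 14496-12 (simplified sketch); a zero value, whether as the default sample_size or as an individual entry_size, is what would convey the zero-size sample described above:

    aligned(8) class SampleSizeBox extends FullBox('stsz', 0, 0) {
        unsigned int(32) sample_size;          // default size for all samples; 0 means per-sample sizes follow
        unsigned int(32) sample_count;
        if (sample_size == 0) {
            for (i = 1; i <= sample_count; i++) {
                unsigned int(32) entry_size;   // an entry of 0 indicates a zero-size (empty) sample
            }
        }
    }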
Fig. 9 illustrates an example of signaling a missing frame in an ISOBMFF file. Fig. 9 illustrates an example of the top level of an ISO base media file 900. The ISO base media file may be generated or updated by, for example, a streaming server, an intermediate network device between a hosting server and a receiver device, or another device that encapsulates encoded data into a media file. In the example of fig. 9, the media file 900 may include a file type box 910, a movie box 920, and one or more movie fragment boxes 930a, 930b through 930n. Other boxes that may be included at this level, but that are not represented in this example, include free space boxes, metadata boxes, and media data boxes, among others. Unless otherwise specified, the properties of these boxes may be the same as those of the corresponding boxes in fig. 2, and their description is not repeated here.
In the example of fig. 9, segment 930a includes a segment type box 931, which carries the brand "empt" as in fig. 8A. The segment 930a also contains an empty segment information box 932. The empty segment information box 932 may include data representing the box type code "esif" and may be identified by the "esif" box type code. Whenever a segment contains an empty segment associated with the segment type box 931, an empty segment information box 932 may be included in the segment. The definition, syntax, and semantics (some of which are illustrated in fig. 9) of the empty segment information box 932 ("EmptySegmentInfoBox") may be as follows:
Definition
Box type: "esif"
Container: File
Mandatory: Yes, for empty media segments
Quantity: One
Syntax
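A plausible form of the syntax, reconstructed from the field semantics below and modeled on similar ISOBMFF full boxes (the version-dependent field widths, in particular for segment_duration, are assumptions), is:

    aligned(8) class EmptySegmentInfoBox extends FullBox('esif', version, 0) {
        unsigned int(32) reference_ID;
        unsigned int(32) timescale;
        if (version == 0) {
            unsigned int(32) earliest_presentation_time;
            unsigned int(32) segment_duration;
        } else {
            unsigned int(64) earliest_presentation_time;
            unsigned int(64) segment_duration;
        }
    }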
Semantics
The reference_ID field may hold an unsigned 32-bit integer and may provide the stream ID of the reference stream, where the stream is a track and the stream ID is the track ID of that track. The reference_ID can be used to determine in which track the empty segment is located.
The timescale field may hold an unsigned 32-bit integer and provides the time scale, in ticks per second, for the earliest_presentation_time and segment_duration fields (discussed below) within this box. In one embodiment, the time scale defined in the empty segment information box 932 may match the time scale of the reference stream or track, such as the timescale field of the media header box of the track (e.g., movie header 222 of fig. 2).
The earliest_presentation_time field may hold an unsigned 32-bit or 64-bit integer (depending on the version). The earliest_presentation_time may provide the earliest presentation time of the empty media segment containing this box, in the time scale indicated by the timescale field.
The segment_duration field may hold the difference between the earliest presentation time of the next segment of the reference stream (or the last presentation time of the reference stream, if this is the last segment of the reference stream) and the earliest presentation time of this empty segment. The duration is in the same units as the value held in the earliest_presentation_time field.
In the examples of figs. 8A-8B and 9, the receiver device can also detect an empty segment (with missing video frames) by recognizing the segment type brand (e.g., the four-character code "empt") associated with the empty segment when decapsulating the ISOBMFF file. Furthermore, the receiver device can also directly obtain the exact location (e.g., which track) and timing information of the missing segment based on, for example, the sample size information (of fig. 8B), the empty segment information box (of fig. 9), and so on. This enables the receiver device to perform a predetermined file handling procedure for the missing frames in an efficient manner. For example, the receiver device does not need to perform additional computational steps to find the missing video frames. The receiver device can also be prevented from attempting to decode the missing video frames. Preventing such operations by the receiver device can facilitate correct handling of lost video frames and improve the user experience.
Fig. 10 illustrates an example of providing unified signaling of lost or corrupted video frames in an ISOBMFF file. With unified signaling, a single indicator may be associated with both lost video frames and corrupted video frames. The receiver device, upon detecting the single indicator, may determine that one or more video frames are missing or corrupted, and may perform a predetermined handling procedure to handle (or process) the missing or corrupted video frames (e.g., by not decoding those video frames). Alternatively, the receiver device may also combine the single indicator with other information (e.g., a zero sample size for a lost frame and/or an empty segment) to distinguish lost frames from corrupted frames.
As shown in fig. 10, an example of a media box 1040 of an ISO base media file is provided. The ISO base media file may be generated or updated by, for example, a streaming server, an intermediate network device between a hosting server and a receiver device, or another device that encapsulates encoded data into a media file. As shown in fig. 10, the media box 1040 includes a media information box 1042, which includes a sample table 1044. The sample table 1044 includes a sample description 1046, which in turn may include sample entries 1048a through 1048n, and so on. Sample entry 1048a may include a sample size box 1050 and a scheme type box 1052. The sample table 1044 further includes a sample group description box 1060 and a sample-to-group box 1062.
In the example of fig. 10, sample entry 1048a may include a video sample that includes one or more corrupted video frames, one or more missing video frames, or any combination thereof. To signal the presence of a missing or corrupted video frame in sample entry 1048a, the scheme type box 1052 of sample entry 1048a may carry a sample entry type associated with both corrupted video frames and missing video frames. The sample entry type may be identified by a four-character code (in the example of fig. 10, the four-character code is defined as "mcpt"). By identifying the four-character code that signals the presence of a lost or corrupted video frame, the receiver device can directly obtain the exact file location and timing information of the corrupted or missing (and non-decodable) video frame (e.g., which sample of which track contains the corrupted video frame), and can perform the predetermined file handling procedure in an efficient manner.
In addition, a new type of sample grouping may also be defined to indicate that a group includes samples associated with the sample entry type "mcpt". For example, as shown in fig. 10, the sample group description box 1060 may store a sample group type entry 1061. The sample group type entry ("MissingAndCorruptFramesSampleEntry") may be associated with a four-character code identifier ("mptf" in the example of fig. 10), and may be associated with the following definition and syntax:
Definition
Group type: "mptf"
Container: sample group description box ("sgpd")
Mandatory: No
Quantity: Zero or more
Syntax
class MissingAndCorruptFramesSampleEntry() extends VisualSampleGroupEntry('mptf')
{ unsigned int(2) mpt_frame_type; }
The sample group type entry may comprise an unsigned two-bit integer, mpt_frame_type. The value of mpt_frame_type may indicate the condition of the media frames in the samples of the sample group associated with this sample group description entry. For example, a value of 0 may indicate that the sample group contains neither missing video frames nor corrupted video frames. A value of 1 may indicate that the sample group includes missing video frames. A value of 2 may indicate that the sample group includes corrupted video frames. A value of 3 may indicate that the condition of the media frames is unknown.
Further, the sample-to-group box 1062 may include a mapping between a sample group type entry included in the sample group description box 1060 and an index representing a sample entry. In the example of fig. 10, the sample-to-group box 1062 may include a mapping 1063 that maps the sample group type entry 1061 to an index associated with sample entry 1048a, to indicate that the sample entry is part of a group of samples that share a common video frame condition, as indicated by the mpt_frame_type value of the sample group.
In the example of fig. 10, a unified signaling mechanism may be used to indicate whether a sample group includes missing video frames, includes corrupted video frames, or contains only fully decodable video frames. The unified signaling mechanism provides a compact and efficient representation of the condition of the video frames in a media file. This may simplify the design of the receiver device for interpreting and handling the signals.
Figs. 11 and 12 illustrate examples of signaling a missing file segment for media streaming. Fig. 11 illustrates an example of an MPD for DASH streaming, and fig. 12 illustrates an example of a playlist file or CMAF file for HLS streaming. The MPD, playlist files, and CMAF files may be generated or updated by a streaming server (e.g., server 402 of fig. 4) that maintains the files. Alternatively, such files may also be generated or updated by a client device (e.g., client device 404 of fig. 4). For example, the client device may have received a description file (an MPD, playlist file, CMAF file, or any other suitable file) from the streaming server that lists a set of media segment files to be downloaded by the client device. During a streaming session, the client device may download the set of media segment files listed in the description file, and may determine that one or more of the downloaded media segment files contain missing or corrupted video frames (e.g., due to a failure in decoding the downloaded media segment files). In that case, the client device may update the MPD, playlist file, CMAF file, etc., based on the techniques disclosed in figs. 11 and 12, and use the updated file to request media segment files in the next streaming session so as to avoid receiving media segment files with missing or corrupted video frames.
Referring now to fig. 11, the structure of an example MPD 1100 that signals a missing media segment is illustrated. The MPD 1100 may include an adaptation set 1102, which includes a representation 1104 and a representation 1106. In the example of fig. 11, the representation 1106 includes representation attributes 1110, a media segment 1112, a media segment 1114, and so on. Each of the media segments 1112 and 1114 may be associated with a start time and a duration.
In the example of fig. 11, the media segment 1112 may be associated with a missing media segment file. The MPD 1100 may include signaling information to indicate that the media segment 1112 is associated with a missing media segment file. The missing media segment files may be associated with a representation or a sub-representation. For example, the representation attributes 1110 of the representation 1106 (which includes the media segment 1112) may include an element "SegmentMissingDurations" that specifies that there are missing segments and that the durations of those segments are signaled in the MPD. The representation attributes 1110 may also contain "MissingDurations" elements that specify the durations of the missing segments. Each duration may be represented as a set of attributes associated with a "MissingDurations" element. In the example of fig. 11, the missing media segments have a start time of 0:01 with a duration of 1 second, and a start time of 0:06 with a duration of 0.2 seconds. In some embodiments, the "SegmentMissingDurations" and "MissingDurations" elements may also be part of a sub-representation, to indicate missing segments of the sub-representation. The semantics and syntax of "SegmentMissingDurations" and "MissingDurations" may be as follows:
Semantics
XML syntax
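A plausible sketch of the XML syntax, based on the element names and example values discussed above (the attribute names "startTime" and "duration", and the exact nesting, are assumptions), is:

    <SegmentMissingDurations>
      <MissingDurations startTime="PT1S" duration="PT1S"/>
      <MissingDurations startTime="PT6S" duration="PT0.2S"/>
    </SegmentMissingDurations>

In this sketch, each "MissingDurations" element carries one missing interval of the representation (or sub-representation) in which the "SegmentMissingDurations" element appears.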
Referring now to fig. 12, the structure of an example playlist file 1200 that signals a missing media segment is illustrated. The playlist file 1200 may also be a CMAF file (e.g., with links to ISOBMFF files instead of Transport Stream (TS) files). In the example of fig. 12, the media segment 1202 contains a missing segment file. The playlist file 1200 may indicate this with a particular tag, "missing_segment", indicating that the media segment 1202 contains a missing segment file.
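As an illustration only (the tag name follows the description above; its exact placement relative to the missing segment's entry is an assumption), a playlist marking a missing segment might look like:

    #EXTINF:10.0,
    http://example.com/movie1/fileSequenceA.ts
    #missing_segment
    #EXTINF:10.0,
    http://example.com/movie1/fileSequenceB.ts

Here the tag would mark the entry that follows it (fileSequenceB.ts in this sketch) as corresponding to a missing segment file.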
Using the examples of figs. 11 and 12, a receiver device can determine that a file is missing before transmitting a request for the media segment file. For example, based on the "SegmentMissingDurations" and "MissingDurations" elements of fig. 11 and their associated attributes, the receiver device can identify the start time and duration information associated with segments whose files are missing. The receiver device can compare the start time and duration information of the missing segments against the start time and duration information of each of the media segments 1112 and 1114 to determine that the media segment 1112 is associated with a missing media segment file. Likewise, based on the "missing_segment" tag of fig. 12, the receiver device can also determine that the media segment 1202 is associated with a missing media segment file. In both cases, the receiver can then perform a predetermined procedure, including, for example, obtaining a corresponding media segment from another representation (e.g., representation 1104) in place of media segment 1112, or obtaining an alternative presentation of media segment 1202 from another playlist.
Fig. 13 illustrates an example of a process 1300 for processing video data. The process may be performed, for example, by a streaming server (e.g., server 402 of fig. 4), an intermediate network device between a hosting server and a receiver device, or other suitable device that encapsulates encoded data in a media file (e.g., an ISOBMFF file). The process may also be performed by a client device (e.g., client device 404 of fig. 4) streaming video data from a streaming server using a description file (e.g., an MPD, a playlist file, a CMAF file, or any suitable file).
At block 1302, process 1300 includes obtaining a plurality of frames of video data. The plurality of frames obtained by the process may be the result of encoding and/or compressing video data using a video codec. The video data may include a plurality of video samples, in which case each of the plurality of video samples includes one or more frames of a plurality of frames. In some embodiments, each of the video samples may be associated with a type identifier that identifies a type of content included in each of the video samples. The plurality of frames of video data as received by the process may be in one or more ISO format media files (e.g., ISOBMFF). The plurality of frames of video data may be in one or more media fragment files obtained based on the aforementioned description file.
At block 1304, process 1300 includes determining that at least one frame of the plurality of frames is corrupted. The video data may include first data corresponding to the at least one frame of the plurality of frames, in which case the first data is insufficient to correctly decode the at least one frame. As discussed above, corrupted frames can occur in different ways. In some cases, a frame may become corrupted when part of the encoded data for the frame is lost. In some cases, a frame may become corrupted when the frame is part of an inter-prediction chain and some other encoded data of the inter-prediction chain is lost, such that the frame cannot be correctly decoded. For example, the at least one frame may be part of an inter-prediction chain, and the video data may include first data corresponding to the inter-prediction chain. In such cases, the first data is insufficient to correctly decode the at least one frame. In some cases, the encoded media data may become corrupted (e.g., due to media file corruption) or even lost before being encapsulated for transmission at the server. In some cases, the encoder (or transcoder) may crash or fail while generating the encoded media data. An encoder failure may result in some frames not being encoded (and not included) in the encoded data, such that the encoded data includes missing frames. An encoder failure may also result in partially encoded frames and the inclusion of partial data in the encoded data. The encoded data may also include corrupted frames if the partial data is insufficient to correctly decode a frame.
There are different ways in which a system (e.g., a streaming server) may determine that at least one frame of the plurality of frames is corrupted. For example, a streaming server may receive the video data in the form of network packets from another video source (e.g., a content server), and the streaming server may determine that data loss occurred during transmission of the video data based on, for example, some network packets having been lost in transmission, errors having been introduced into the payloads of the network packets (e.g., as detected based on error correction codes) and the errors being uncorrectable, and the like. Based on the size of the lost data and the location of the lost data within the video data, the streaming server may further determine whether the video data includes a corrupted frame (e.g., when the data received from the network packets is insufficient to properly decode the frame), and identify the corrupted frame. As another example, a streaming server may employ an encoder to generate a video file by encoding data representing a collection of images, and the encoder may crash when attempting to encode a frame that includes corrupted data. The encoder may provide to the streaming server an indication of the frame that caused the crash. The streaming server may then determine that there is a corrupted frame based on the indication from the encoder, and identify the corrupted frame.
As another example, during a streaming session, a client device may download a set of media segment files from a description file, and may determine that one or more of the downloaded media segment files contain corrupted video frames (e.g., due to a failure in decoding the media segment files).
At block 1306, process 1300 includes generating an indication that the at least one frame is corrupted. The indication may be in a form according to any of the examples described herein, including, for example, the examples of figs. 7A-12.
In some embodiments, the indication may be part of an ISO format file. In one example, the indication may be provided by a type identifier associated with the video sample that includes the corrupted at least one frame (e.g., a code identifier associated with the sample entry type), as discussed with respect to fig. 7A. In another example, the indication may be provided by the omission of one or more sample entries corresponding to the corrupted video frames, as discussed with respect to fig. 7B. In yet another example, as discussed with respect to fig. 10, a unified sample entry type identifier associated with both lost video frames and corrupted video frames may be used to provide the indication. Additionally, a unified sample group type identifier may be used to indicate that a video sample group (associated with the sample group type identifier) includes corrupted and/or missing video frames. A sample-to-group box that maps the unified sample group type identifier to the video samples of the corrupted/missing sample group may also be included, to indicate which of the video samples include corrupted or missing video frames.
In some embodiments, the indication may also be part of the aforementioned description file for a streaming application. For example, the streaming server may generate a description file to indicate that one or more media segments included in the description file are missing (and/or corrupted). In another example, the client device may generate an updated description file from the original description file obtained at block 1302 to indicate which of the media segments included in the original description file are missing (and/or corrupted). In one example, the description file may be an MPD file and may include predetermined elements and attributes to specify that media segments are missing and the durations of those segments. In another example, the description file may be a playlist file and may include a predetermined tag to indicate a missing segment file or a segment file with corrupted data.
At block 1308, process 1300 includes generating a media file that includes the indication determined at block 1306. The media file may be an ISOBMFF file, a description file for a streaming application (e.g., MPD, playlist, CMAF, etc.), and so on.
As mentioned above, the video data may include a plurality of video samples, wherein each of the plurality of video samples includes one or more frames of a plurality of frames. The plurality of video samples may include a first video sample including the corrupted at least one frame. The first video sample is associated with a type identifier that identifies a type of content included in the first video sample. In this case, the indication may include a type identifier. The type identifier may indicate that the first video sample includes at least one of the corrupted frames. The type identifier may also indicate the media type and the type of decoder used to process the media file. In some cases, the type identifier includes a sample entry type.
In some examples, a media file may include a list representation of a plurality of segments of the video data. In one example, the plurality of segments may include a first segment and a second segment. The first and second segments may include one or more frames of the plurality of frames. The second segment may also include one or more missing frames of the plurality of frames. For example, a missing frame is a frame that is meant to be part of the second segment but is not provided in the file. The above-mentioned indication may be referred to as a first indication. In such examples, process 1300 may further include determining that the second segment includes the one or more missing frames, and generating a second indication of the one or more missing frames. Process 1300 may add (or include) the second indication in the media file.
In some examples, the media file comprises a Media Presentation Description (MPD) format file. The list representation mentioned above may include one or more adaptation sets, where each of the one or more adaptation sets includes one or more representations and/or one or more sub-representations containing video data having one or more missing frames. Each of the one or more representations or one or more sub-representations is associated with one or more segments. The second indication may include one or more elements associated with the one or more missing frames for the one or more representations or one or more sub-representations. The one or more elements are associated with a set of attributes that includes a timestamp and a duration of the second segment.
In some implementations, the list representation includes information for retrieving the first segment but not the second segment. In such cases, the second indication includes an omission of information for retrieving the second segment. In some implementations, the list representation includes a text indicator associated with the second segment. The text indicator may indicate that the second segment includes the one or more missing frames. In such implementations, the second indication may include the text indicator.
In some examples, the media file is based on the HTTP real-time streaming (HLS) playlist format. In such cases, each of the plurality of segments is associated with a Transport Stream (TS) file, and the list representation may include a set of tags. In such cases, the text indicator is a tag in the set of tags associated with the second segment.
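A minimal, hypothetical sketch of such playlist-based signaling follows; the tag name #EXT-X-MISSING-SEGMENT is an assumption made for illustration and is not asserted to be a standardized HLS tag.

```python
# Illustrative sketch: emitting a playlist-style description in which a segment
# known to be missing or corrupted is flagged with a hypothetical tag.
def build_playlist(segments):
    lines = ["#EXTM3U", "#EXT-X-TARGETDURATION:10"]
    for seg in segments:
        lines.append(f"#EXTINF:{seg['duration']:.1f},")
        if seg.get("missing") or seg.get("corrupted"):
            lines.append("#EXT-X-MISSING-SEGMENT")   # hypothetical indicator tag
        lines.append(seg["uri"])
    return "\n".join(lines)

segments = [
    {"uri": "seg1.ts", "duration": 10.0},
    {"uri": "seg2.ts", "duration": 10.0, "missing": True},
    {"uri": "seg3.ts", "duration": 10.0},
]
print(build_playlist(segments))
```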
In some examples, the media file is based on the Common Media Application Format (CMAF) and includes a playlist. Each of the plurality of segments is associated with an ISOBMFF file. The list representation may include a set of tags, and the text indicator is a tag in the set of tags associated with the second segment.
FIG. 14 illustrates an example of a process 1400 for processing media file data. The process may be performed, for example, by a receiver device of the media file data. A receiver device may be any device that receives and decodes encoded video data included in a media file. The receiver device may be, for example, a client device (e.g., client device 404 of fig. 4), an intermediate network device between a hosting server and the client device, or other suitable device.
At block 1402, process 1400 includes obtaining a media file containing media content. The media content comprises a plurality of frames of video data. The media file may be a file that encapsulates the media content (e.g., an ISOBMFF file), or a description file for a streaming application (e.g., an MPD, playlist, CMAF, etc.) that links one or more media content segment files. The plurality of frames in the media file may be the result of encoding and/or compressing video data using a video codec. The video data may include a plurality of video samples, and each of the plurality of video samples includes one or more frames of the plurality of frames. In some embodiments, each of the video samples may be associated with a type identifier that identifies a type of content included in each of the video samples. The plurality of frames of video data as received by the process may be in one or more ISO format media files (e.g., ISOBMFF files). The plurality of frames of video data may also be in one or more media segment files obtained based on the aforementioned description file.
At block 1404, process 1400 includes determining, based on the indication in the media file, that the plurality of frames includes at least one corrupted frame. The media content may include first data corresponding to the at least one frame of the plurality of frames, in which case the first data is insufficient to correctly decode the at least one frame. As discussed above, corrupted frames may arise in different ways. In some cases, a frame may become corrupted when portions of the encoded data for the frame are lost. In some cases, a frame may become corrupted when the frame is part of an inter-prediction chain and some other encoded data of the inter-prediction chain is lost, so that the frame cannot be correctly decoded. For example, the at least one frame may be part of an inter-prediction chain, and the video data may include first data corresponding to the inter-prediction chain. In such cases, the first data is insufficient to correctly decode the at least one frame. In some cases, the encoded media data may become corrupted (e.g., due to media file corruption) or even lost before being encapsulated for transmission at the server. In some cases, the encoder (or transcoder) may crash or fail while encoding the media data. An encoder failure may result in some frames not being encoded (and not included) in the encoded data, such that the encoded data includes missing frames. An encoder failure may also result in partially encoded frames and the inclusion of partial data in the encoded data. The encoded data may also include corrupted frames if the partial data is insufficient to correctly decode the frames.
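The inter-prediction dependency just described can be illustrated with a small sketch: a frame is treated as not correctly decodable if any frame it references, directly or transitively, is missing or corrupted. The frame names and the reference structure below are invented for the example.

```python
# Illustrative sketch: propagating loss along an inter-prediction chain.
def undecodable_frames(references, missing):
    """references: frame -> list of frames it predicts from; missing: set of lost frames."""
    bad = set(missing)
    changed = True
    while changed:                          # propagate loss along the prediction chain
        changed = False
        for frame, refs in references.items():
            if frame not in bad and any(r in bad for r in refs):
                bad.add(frame)
                changed = True
    return bad

refs = {"I0": [], "P1": ["I0"], "P2": ["P1"], "B3": ["P1", "P2"]}
print(undecodable_frames(refs, missing={"P1"}))   # {'P1', 'P2', 'B3'}
```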
The determination in block 1404 may be based on an indication signaling a corrupted or missing video frame. The indication may take any of the forms described herein, including, for example, those discussed with respect to fig. 7A-12. In some embodiments, the indication may be part of an ISO format file. In one example, the indication may be provided by a type identifier associated with the video sample that includes the at least one corrupted frame (e.g., a code identifier associated with the sample entry type), as discussed with respect to fig. 7A. In another example, the indication may be provided by an omission of one or more sample entries corresponding to the corrupted video frame, as discussed with respect to fig. 7B. In yet another example, as discussed with respect to fig. 10, a unified sample type identifier associated with both lost video frames and corrupted video frames may be used to provide the indication. Additionally, a unified sample group type identifier may be used to indicate that a video sample group (associated with the sample group type identifier) includes corrupted video frames. A sample-to-group box that maps video samples to the corrupted sample group may also be included to indicate which of the video samples include a corrupted video frame. Based on the indication, the system may identify, for example, a video sample of the media file that includes the corrupted video frame.
In some embodiments, the indication may also be part of the aforementioned description file for the streaming application, to indicate which of the media segments referenced in the original description file were corrupted (or missing). In one example, the description file may be an MPD file and may include predetermined elements and attributes specifying that certain media segments are corrupted and not available for streaming, as well as the durations of those segments. In another example, the description file may be a playlist file and may include predetermined tags to indicate damaged (and/or missing) segment files. Based on the indication, the system may identify, for example, a presentation or representation/sub-representation that includes a media segment with a corrupted video frame.
At block 1406, the process 1400 includes processing the at least one corrupted frame based on the indication. For example, based on identifying a video sample that includes a corrupted video frame, the system may skip decoding of the video sample. For example, process 1400 may identify, based on the indication, a portion of the media content corresponding to the at least one corrupted frame, and may skip processing of that portion of the media content. As another example, an alternate presentation/representation/sub-representation may be requested based on identifying a presentation/representation/sub-representation that includes a media segment with a corrupted video frame. The alternate presentation/representation/sub-representation may be requested from a different source and may be associated with the same or different media characteristics as the presentation/representation/sub-representation containing the corrupted media segment.
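For illustration, the following sketch shows a receiver that consults the corrupted/missing indication before decoding each sample and skips (or conceals) flagged samples; decode_sample() and conceal() are placeholders rather than real decoder APIs.

```python
# Illustrative sketch: skipping decoding of samples flagged as corrupted.
def process_samples(samples, corrupted_indices):
    output = []
    for i, sample in enumerate(samples):
        if i in corrupted_indices:
            output.append(conceal(previous=output[-1] if output else None))
            continue                      # skip feeding corrupted data to the decoder
        output.append(decode_sample(sample))
    return output

def decode_sample(sample):
    return f"decoded({sample})"           # stand-in for a real video decoder call

def conceal(previous):
    # e.g., repeat the last good frame, or insert a blank frame
    return previous if previous is not None else "blank-frame"

print(process_samples(["s0", "s1", "s2"], corrupted_indices={1}))
```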
As mentioned above, the video data may include a plurality of video samples, where each of the plurality of video samples includes one or more frames of the plurality of frames. The plurality of video samples may include a first video sample that includes the at least one corrupted frame. The first video sample is associated with a type identifier that identifies a type of content included in the first video sample. In this case, the indication may include the type identifier. The type identifier may indicate that the first video sample includes the at least one corrupted frame. The type identifier may also indicate the media type and the type of decoder used to process the media file. In some cases, the type identifier includes a sample entry type.
In some examples, the media file includes a list representation of a plurality of segments of the video data. In one example, the plurality of segments may include a first segment and a second segment. The first segment and the second segment may include one or more frames of the plurality of frames. The second segment may also include one or more missing frames of the plurality of frames. The above-mentioned indication may be referred to as a first indication, in which case the media file may further include a second indication indicating that the second segment includes the one or more missing frames of the plurality of frames.
In some examples, the media file comprises a Media Presentation Description (MPD) format file. The above-mentioned list representation may include one or more adaptation sets, where each of the one or more adaptation sets includes one or more representations and/or one or more sub-representations containing video data having one or more missing frames. Each of the one or more representations or one or more sub-representations is associated with one or more segments. The second indication may include one or more elements associated with the one or more missing frames for the one or more representations or one or more sub-representations. The one or more elements are associated with a set of attributes that includes a timestamp and a duration of the second segment.
In some implementations, the list representation includes information for retrieving the first segment but not the second segment. In such cases, the second indication includes an omission of information for retrieving the second segment. In some implementations, the list representation includes a text indicator associated with the second segment. The text indicator may indicate that the second segment includes the one or more missing frames. In such implementations, the second indication may include the text indicator.
In some examples, the media file is based on the HTTP real-time streaming (HLS) playlist format. In such cases, each of the plurality of segments is associated with a Transport Stream (TS) file, and the list representation may include a set of tags. In such cases, the text indicator is a tag in the set of tags associated with the second segment.
In some examples, the media file is based on the Common Media Application Format (CMAF) and includes a playlist. Each of the plurality of segments is associated with an ISOBMFF file. The list representation may include a set of tags, and the text indicator is a tag in the set of tags associated with the second segment.
In some aspects, process 1400 may process the at least one corrupted frame based on the indication by transmitting a request to the streaming server to request a third segment to replace the second segment.
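A minimal sketch of such a replacement request follows; the URL template and representation identifiers below are assumptions for illustration only.

```python
# Illustrative sketch: on detecting that a segment of the current representation
# is corrupted, a client may form a request for the same time range from an
# alternate representation hosted by the streaming server.
def replacement_request(base_url, representation_id, segment_number):
    return f"{base_url}/{representation_id}/segment-{segment_number}.m4s"

# Segment 7 of "video-1080p" is corrupted; request it from "video-720p" instead.
print(replacement_request("https://example.com/stream", "video-720p", 7))
```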
Processes 1300 and 1400 are illustrated as logical flow diagrams, in which the operations represent sequences of operations that can be implemented in hardware, computer instructions, or a combination thereof. In the context of computer instructions, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes.
Additionally, processes 1300 and 1400 may be performed under control of one or more computer systems configured with executable instructions and may be implemented as program code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or a combination thereof. As mentioned above, the program code may be stored on a computer-readable or machine-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The computer-readable or machine-readable storage medium may be non-transitory. The computer system may include, for example, the video source 102, encoding device 104, decoding device 112, and video destination device 122 of fig. 1, as well as the server 402 and client device 404 of fig. 4.
Specific details of the encoding device 1504 and the decoding device 1612 are shown in fig. 15 and 16, respectively. Fig. 15 is a block diagram illustrating an example encoding device 1504 that may implement one or more of the techniques described in this disclosure. The encoding device 1504 may, for example, generate syntax structures described herein (e.g., syntax structures for VPS, SPS, PPS, or other syntax elements). The encoding device 1504 may perform intra-prediction and inter-prediction coding of video blocks within a video slice. As previously described, intra-coding relies at least in part on spatial prediction to reduce or remove spatial redundancy within a given video frame or picture. Inter-coding relies, at least in part, on temporal prediction to reduce or remove temporal redundancy within adjacent or surrounding frames of a video sequence. Intra mode (I-mode) may refer to any of several space-based compression modes. An inter mode, such as uni-directional prediction (P-mode) or bi-directional prediction (B-mode), may refer to any of several time-based compression modes.
Encoding device 1504 includes partition unit 35, prediction processing unit 41, filter unit 63, picture memory 64, summer 50, transform processing unit 52, quantization unit 54, and entropy encoding unit 56. Prediction processing unit 41 includes motion estimation unit 42, motion compensation unit 44, and intra-prediction processing unit 46. For video block reconstruction, encoding device 1504 also includes inverse quantization unit 58, inverse transform processing unit 60, and summer 62. Filter unit 63 is intended to represent one or more loop filters, such as deblocking filters, Adaptive Loop Filters (ALF), and Sample Adaptive Offset (SAO) filters. Although filter unit 63 is shown in fig. 15 as an in-loop filter, in other configurations, filter unit 63 may be implemented as a post-loop filter. Post-processing device 57 may perform additional processing on the encoded video data generated by encoding device 1504. The techniques of this disclosure may be implemented by the encoding device 1504 in some cases. In other cases, however, one or more of the techniques of this disclosure may be implemented by the post-processing device 57.
As shown in fig. 15, encoding device 1504 receives video data and partition unit 35 partitions the data into video blocks. Partitioning may also include partitioning into slices, slice segments, tiles, or other larger units, as well as video block partitioning, e.g., according to a quadtree structure of LCUs and CUs. The encoding device 1504 generally illustrates the components that encode video blocks within a video slice to be encoded. A slice may be divided into multiple video blocks (and possibly into sets of video blocks referred to as tiles). Prediction processing unit 41 may select one of a plurality of possible coding modes, such as one of a plurality of intra-prediction coding modes or one of a plurality of inter-prediction coding modes, for the current video block based on error results (e.g., coding rate and distortion level, or the like). Prediction processing unit 41 may provide the resulting intra- or inter-coded block to summer 50 to generate residual block data and to summer 62 to reconstruct the encoded block for use as a reference picture.
Intra-prediction processing unit 46 within prediction processing unit 41 may perform intra-prediction coding of a current video block relative to one or more neighboring blocks in the same frame or slice as the current block to be coded to provide spatial compression. Motion estimation unit 42 and motion compensation unit 44 within prediction processing unit 41 perform inter-predictive coding of the current video block relative to one or more predictive blocks in one or more reference pictures to provide temporal compression.
Motion estimation unit 42 may be configured to determine an inter-prediction mode for a video slice according to a predetermined pattern of a video sequence. The predetermined pattern may designate video slices in the sequence as P slices, B slices, or GPB slices. Motion estimation unit 42 and motion compensation unit 44 may be highly integrated, but are illustrated separately for conceptual purposes. Motion estimation, performed by motion estimation unit 42, is the process of generating motion vectors that estimate the motion of video blocks. A motion vector, for example, may indicate a displacement of a Prediction Unit (PU) of a video block within a current video frame or picture relative to a predictive block within a reference picture.
A predictive block is a block that is found to closely match a PU of a video block to be coded in terms of pixel differences, which may be determined by Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD), or other difference metrics. In some examples, encoding device 1504 may calculate values for sub-integer pixel positions of reference pictures stored in picture memory 64. For example, the encoding device 1504 may interpolate values for quarter pixel positions, eighth pixel positions, or other fractional pixel positions of a reference picture. Thus, motion estimation unit 42 may perform a motion search with respect to full pixel positions and fractional pixel positions and output motion vectors with fractional pixel precision.
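As a worked illustration of the Sum of Absolute Differences metric mentioned above, the following sketch compares a 2x2 block of the current frame against a candidate predictive block; the pixel values are made up for the example.

```python
# Illustrative sketch: Sum of Absolute Differences (SAD) between a block of the
# current frame and a candidate predictive block.
def sad(current_block, candidate_block):
    return sum(abs(a - b)
               for row_c, row_p in zip(current_block, candidate_block)
               for a, b in zip(row_c, row_p))

cur  = [[10, 12], [11, 13]]
cand = [[ 9, 12], [11, 15]]
print(sad(cur, cand))   # |10-9| + |12-12| + |11-11| + |13-15| = 3
```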
Motion estimation unit 42 calculates motion vectors for PUs of video blocks in inter-coded slices by comparing the locations of the PUs to locations of predictive blocks of reference pictures. The reference picture may be selected from a first reference picture list (List 0) or a second reference picture list (List 1), each of which identifies one or more reference pictures stored in reference picture memory 64. Motion estimation unit 42 sends the calculated motion vectors to entropy encoding unit 56 and motion compensation unit 44.
The motion compensation performed by motion compensation unit 44 may involve extracting or generating a predictive block based on a motion vector determined by motion estimation, possibly performing interpolation to sub-pixel precision. Upon receiving the motion vector for the PU of the current video block, motion compensation unit 44 may locate, in a reference picture list, the predictive block to which the motion vector points. Encoding device 1504 forms a residual video block by subtracting the pixel values of the predictive block from the pixel values of the current video block being coded, forming pixel difference values. The pixel difference values form residual data for the block, and may include both luma and chroma difference components. Summer 50 represents one or more components that perform this subtraction operation. Motion compensation unit 44 may also generate syntax elements associated with the video blocks and the video slice for use by decoding device 1612 in decoding the video blocks of the video slice.
As described above, as an alternative to inter-prediction performed by motion estimation unit 42 and motion compensation unit 44, intra-prediction processing unit 46 may intra-predict the current block. In particular, intra-prediction processing unit 46 may determine an intra-prediction mode to use to encode the current block. In some examples, intra-prediction processing unit 46 may encode the current block using various intra-prediction modes, e.g., during separate encoding passes, and intra-prediction processing unit 46 (or a mode selection unit, not shown in fig. 15) may select an appropriate intra-prediction mode from the tested modes for use. For example, intra-prediction processing unit 46 may calculate rate-distortion values using rate-distortion analysis for various tested intra-prediction modes, and may select the intra-prediction mode having the most preferred rate-distortion characteristics among the tested modes. Rate-distortion analysis generally determines the amount of distortion (or error) between an encoded block and an original, unencoded block, which is encoded to produce the encoded block, and the bit rate (i.e., number of bits) used to produce the encoded block. Intra-prediction processing unit 46 may calculate ratios from the distortions and rates of various encoded blocks to determine which intra-prediction mode exhibits the most preferred rate-distortion value for the block.
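A minimal sketch of the rate-distortion comparison described above follows, assuming a simple Lagrangian cost J = D + lambda * R; the candidate modes, distortion values, bit counts, and lambda are invented for the example.

```python
# Illustrative sketch: picking the intra-prediction mode with the lowest
# Lagrangian rate-distortion cost J = D + lambda * R.
def rd_cost(distortion, rate_bits, lam):
    return distortion + lam * rate_bits

candidates = {
    "DC":         {"distortion": 1400.0, "rate_bits": 96},
    "planar":     {"distortion": 1250.0, "rate_bits": 120},
    "angular-26": {"distortion": 1100.0, "rate_bits": 180},
}
lam = 4.0
best = min(candidates, key=lambda m: rd_cost(**candidates[m], lam=lam))
print(best)   # the mode with the lowest rate-distortion cost for this lambda
```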
In any case, upon selecting the intra-prediction mode for the block, intra-prediction processing unit 46 may provide information to entropy encoding unit 56 indicating the selected intra-prediction mode for the block. Entropy encoding unit 56 may encode the information indicating the selected intra-prediction mode. The encoding device 1504 may include, in the transmitted bitstream, configuration data that includes definitions of encoding contexts for the various blocks, as well as indications of a most probable intra-prediction mode, an intra-prediction mode index table, and a modified intra-prediction mode index table to use for each of the contexts. The bitstream configuration data may include a plurality of intra-prediction mode index tables and a plurality of modified intra-prediction mode index tables (also referred to as codeword mapping tables).
After prediction processing unit 41 generates the predictive block for the current video block via inter-prediction or intra-prediction, encoding device 1504 forms a residual video block by subtracting the predictive block from the current video block. The residual video data in the residual block may be included in one or more TUs and applied to transform processing unit 52. Transform processing unit 52 transforms the residual video data into residual transform coefficients using a transform, such as a Discrete Cosine Transform (DCT) or a conceptually similar transform. Transform processing unit 52 may convert the residual video data from the pixel domain to a transform domain (e.g., the frequency domain).
Transform processing unit 52 may send the resulting transform coefficients to quantization unit 54. Quantization unit 54 quantizes the transform coefficients to further reduce the bit rate. The quantization process may reduce the bit depth associated with some or all of the coefficients. The quantization level may be modified by adjusting a quantization parameter. In some examples, quantization unit 54 may then perform a scan of a matrix that includes quantized transform coefficients. Alternatively, entropy encoding unit 56 may perform the scanning.
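For illustration, the following sketch applies uniform scalar quantization and dequantization to a few transform coefficients with a single step size; a real codec derives the step size from the quantization parameter and uses more elaborate scaling and rounding.

```python
# Illustrative sketch: uniform scalar quantization and dequantization of
# transform coefficients with a single step size.
def quantize(coeffs, step):
    return [int(round(c / step)) for c in coeffs]

def dequantize(levels, step):
    return [l * step for l in levels]

coeffs = [52.0, -7.3, 3.1, 0.4]
levels = quantize(coeffs, step=4.0)       # [13, -2, 1, 0]  (bit rate reduced)
print(levels, dequantize(levels, step=4.0))
```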
After quantization, entropy encoding unit 56 entropy encodes the quantized transform coefficients. For example, entropy encoding unit 56 may perform Context Adaptive Variable Length Coding (CAVLC), Context Adaptive Binary Arithmetic Coding (CABAC), syntax-based context adaptive binary arithmetic coding (SBAC), Probability Interval Partition Entropy (PIPE) coding, or another entropy encoding technique. After entropy encoding by entropy encoding unit 56, the encoded bitstream may be transmitted to decoding device 1612 or archived for later transmission or retrieval by decoding device 1612. Entropy encoding unit 56 may also entropy encode the motion vectors and other syntax elements of the current video slice being coded.
Inverse quantization unit 58 and inverse transform processing unit 60 apply inverse quantization and inverse transform, respectively, to reconstruct the residual block in the pixel domain for subsequent use as a reference block of a reference picture. Motion compensation unit 44 may calculate the reference block by adding the residual block to a predictive block of one of the reference pictures within the reference picture list. Motion compensation unit 44 may also apply one or more interpolation filters to the reconstructed residual block to calculate sub-integer pixel values for use in motion estimation. Summer 62 adds the reconstructed residual block to the motion compensated prediction block produced by motion compensation unit 44 to produce a reference block for storage in reference picture memory 64. The reference block may be used by motion estimation unit 42 and motion compensation unit 44 as a reference block to inter-predict a block in a subsequent video frame or picture.
In this way, the encoding device 1504 of fig. 15 represents an example of a video encoder configured to generate syntax of an encoded video bitstream. The encoding device 1504 may, for example, generate VPS, SPS, and PPS parameter sets as described above. The encoding device 1504 may perform any of the techniques described herein, including the processes described above with respect to fig. 13 and 14. The techniques of this disclosure have generally been described with respect to encoding device 1504, but as mentioned above, some of the techniques of this disclosure may also be implemented by post-processing device 57.
FIG. 16 is a block diagram illustrating an example decoding device 1612. Decoding device 1612 includes entropy decoding unit 80, prediction processing unit 81, inverse quantization unit 86, inverse transform processing unit 88, summer 90, filter unit 91, and picture memory 92. Prediction processing unit 81 includes motion compensation unit 82 and intra-prediction processing unit 84. In some examples, decoding device 1612 may perform a decoding pass that is generally reciprocal to the encoding pass described with respect to encoding device 1504 of fig. 15.
During the decoding process, the decoding device 1612 receives an encoded video bitstream representing video blocks and associated syntax elements of an encoded video slice sent by the encoding device 1504. In some embodiments, the decoding device 1612 may receive the encoded video bitstream from the encoding device 1504. In some embodiments, decoding device 1612 may receive the encoded video bitstream from network entity 79, such as a server, a Media Aware Network Element (MANE), a video editor/splicer, or other such device configured to implement one or more of the techniques described above. Network entity 79 may or may not include encoding device 1504. Some of the techniques described in this disclosure may be implemented by network entity 79 before network entity 79 transmits the encoded video bitstream to decoding device 1612. In some video decoding systems, network entity 79 and decoding device 1612 may be part of separate devices, while in other cases, the functionality described with respect to network entity 79 may be performed by the same device that includes decoding device 1612.
Entropy decoding unit 80 of decoding device 1612 entropy decodes the bitstream to generate quantized coefficients, motion vectors, and other syntax elements. Entropy decoding unit 80 forwards the motion vectors and other syntax elements to prediction processing unit 81. The decoding device 1612 may receive syntax elements at a video slice level and/or a video block level. Entropy decoding unit 80 may process and parse both fixed-length syntax elements and variable-length syntax elements in one or more parameter sets, such as VPS, SPS, and PPS.
When coding a video slice as an intra-coded (I) slice, intra-prediction processing unit 84 of prediction processing unit 81 may generate prediction data for a video block of the current video slice based on the signaled intra-prediction mode and data from previously decoded blocks of the current frame or picture. When coding a video slice as an inter-coded (i.e., B, P or GPB) slice, motion compensation unit 82 of prediction processing unit 81 generates predictive blocks for the video blocks of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 80. The predictive block may be generated from one of the reference pictures within the reference picture list. Decoding device 1612 may use default construction techniques to construct reference frame lists (list 0 and list 1) based on reference pictures stored in picture memory 92.
Motion compensation unit 82 determines prediction information for video blocks of the current video slice by parsing the motion vectors and other syntax elements, and uses the prediction information to generate predictive blocks for the current video block being decoded. For example, motion compensation unit 82 may use one or more syntax elements in the parameter set to determine a prediction mode (e.g., intra-prediction or inter-prediction) used to code the video blocks of the video slice, an inter-prediction slice type (e.g., a B slice, a P slice, or a GPB slice), construction information for one or more of the reference picture lists of the slice, a motion vector for each inter-coded video block of the slice, an inter-prediction state for each inter-coded video block of the slice, and other information used to decode video blocks in the current video slice.
Motion compensation unit 82 may also perform interpolation based on interpolation filters. Motion compensation unit 82 may use interpolation filters as used by encoding device 1504 during encoding of video blocks to calculate interpolated values for sub-integer pixels of a reference block. In this case, motion compensation unit 82 may determine the interpolation filter used by encoding device 1504 from the received syntax element and may use the interpolation filter to generate the predictive block.
Inverse quantization unit 86 inverse quantizes, or dequantizes, the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 80. The inverse quantization process may include using a quantization parameter calculated by the encoding device 1504 for each video block in the video slice to determine a degree of quantization and, likewise, a degree of inverse quantization that should be applied. Inverse transform processing unit 88 applies an inverse transform (e.g., an inverse DCT or other suitable inverse transform), an inverse integer transform, or a conceptually similar inverse transform process to the transform coefficients in order to generate residual blocks in the pixel domain.
After motion compensation unit 82 generates the predictive block for the current video block based on the motion vector and other syntax elements, decoding device 1612 forms a decoded video block by summing the residual block from inverse transform processing unit 88 with the corresponding predictive block generated by motion compensation unit 82. Summer 90 represents the component that performs this summation operation. If desired, a loop filter (in or after the coding loop) may also be used to smooth pixel transitions, or otherwise improve video quality. Filter unit 91 is intended to represent one or more loop filters, such as deblocking filters, Adaptive Loop Filters (ALF), and Sample Adaptive Offset (SAO) filters. Although filter unit 91 is shown in fig. 16 as an in-loop filter, in other configurations, filter unit 91 may be implemented as a post-loop filter. The decoded video blocks in a given frame or picture are then stored in picture memory 92, which stores reference pictures used for subsequent motion compensation. Picture memory 92 also stores decoded video for later presentation on a display device, such as video destination device 122 shown in fig. 1.
In the foregoing description, aspects of the application are described with reference to specific embodiments thereof, but those skilled in the art will recognize that the invention is not limited thereto. Accordingly, while illustrative embodiments of the application have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations except as limited by the prior art. Various features and aspects of the above-described invention may be used separately or in combination. In addition, embodiments may be used in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. For purposes of illustration, the methods are described in a particular order. It should be appreciated that in alternative embodiments, the methods may be performed in an order different than that described.
Where a component is described as being "configured to" perform certain operations, such configuration may be achieved, for example, by designing electronic circuitry or other hardware to perform the operations, by programming programmable electronic circuitry (e.g., a microprocessor or other suitable electronic circuitry) to perform the operations, or any combination thereof.
The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Thus, the techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. Such techniques may be implemented in any of a variety of apparatuses such as a general purpose computer, a wireless communication device handset, or an integrated circuit device having multiple uses, including applications in wireless communication device handsets and other devices. Any features described as modules or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable data storage medium comprising program code comprising instructions that, when executed, perform one or more of the methods described above. The computer-readable data storage medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may include memory or data storage media, such as Random Access Memory (RAM), e.g., Synchronous Dynamic Random Access Memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates program code, such as a propagated signal or wave, in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer.
The program code may be executed by a processor, which may include one or more processors, such as one or more Digital Signal Processors (DSPs), general purpose microprocessors, Application Specific Integrated Circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Such a processor may be configured to perform any of the techniques described in this disclosure. A general purpose processor may be a microprocessor; but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Thus, the term "processor," as used herein may refer to any of the foregoing structure, any combination of the foregoing structure, or any other structure or device suitable for implementation of the techniques described herein. Further, in some aspects, the functionality described herein may be provided in dedicated software modules or hardware modules configured for encoding and decoding, or incorporated in a combined video encoder-decoder (codec).
The coding techniques discussed herein may be embodied in example video encoding and decoding systems. The system includes a source device that provides encoded video data to be decoded later by a destination device. In particular, a source device provides video data to a destination device via a computer-readable medium. The source device and destination device may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, so-called "smart" pads, televisions, cameras, display devices, digital media players, video game consoles, video streaming devices, or the like. In some cases, the source device and the destination device may be equipped for wireless communication.
The destination device may receive encoded video data to be decoded via a computer-readable medium. The computer-readable medium may comprise any type of medium or device capable of moving encoded video data from a source device to a destination device. In one example, the computer-readable medium may comprise a communication medium to enable a source device to transmit encoded video data directly to a destination device in real-time. The encoded video data may be modulated according to a communication standard, such as a wireless communication protocol, and transmitted to a destination device. The communication medium may include any wireless or wired communication medium, such as a Radio Frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network, such as a local area network, a wide area network, or a global network, such as the internet. The communication medium may include routers, switches, base stations, or any other equipment that may be used to facilitate communication from a source device to a destination device.
In some examples, the encoded data may be output from the output interface to the storage device. Similarly, encoded data may be accessed from the storage device by the input interface. The storage device may include any of a variety of distributed or locally accessed data storage media such as a hard drive, blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data. In another example, the storage device may correspond to a file server or another intermediate storage device that may store the encoded video generated by the source device. The destination device may access the stored video data from the storage device via streaming or download. The file server may be any type of server capable of storing encoded video data and transmitting the encoded video data to a destination device. Example file servers include web servers (e.g., for a website), FTP servers, Network Attached Storage (NAS) devices, or local disk drives. The destination device may access the encoded video data over any standard data connection, including an internet connection. Such a connection may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., DSL, cable modem, etc.), or a combination of both, suitable for accessing encoded video data stored on a file server. The transmission of encoded video data from the storage device may be a streaming transmission, a download transmission, or a combination thereof.
The techniques of this disclosure are not necessarily limited to wireless applications or settings. The techniques may be applied to video coding to support any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, internet streaming video transmissions (e.g., dynamic adaptive streaming over HTTP (DASH)), digital video encoded onto a data storage medium, decoding of digital video stored on a data storage medium, or other applications. In some examples, the system may be configured to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
In one example, a source device includes a video source, a video encoder, and an output interface. The destination device may include an input interface, a video decoder, and a display device. A video encoder of a source device may be configured to apply the techniques disclosed herein. In other examples, the source device and the destination device may include other components or arrangements. For example, the source device may receive video data from an external video source (e.g., an external camera). Likewise, the destination device may interface with an external display device, rather than including an integrated display device.
The above example system is merely one example. The techniques for processing video data in parallel may be performed by any digital video encoding and/or decoding device. Although the techniques of this disclosure are typically performed by a video encoding device, the techniques may also be performed by a video encoder/decoder (commonly referred to as a "CODEC"). Furthermore, the techniques of this disclosure may also be performed by a video preprocessor. The source device and the destination device are merely examples of such coding devices, in which the source device generates coded video data for transmission to the destination device. In some examples, the source device and the destination device may operate in a substantially symmetric manner such that each of the devices includes video encoding and decoding components. Thus, the example system may support one-way or two-way video transmission between video devices, such as for video streaming, video playback, video broadcasting, or video telephony.
The video source may include a video capture device, such as a video camera, a video archive containing previously captured video, and/or a video feed interface to receive video from a video content provider. As another alternative, the video source may generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video. In some cases, if the video source is a video camera, the source device and destination device may form so-called camera phones or video phones. However, as mentioned above, the techniques described in this disclosure may be applicable to video coding in general, and may be applicable to wireless and/or wired applications. In each case, the captured, pre-captured, or computer-generated video may be encoded by a video encoder. The encoded video information may then be output by an output interface onto a computer-readable medium.
As mentioned, the computer-readable medium may include transitory media such as a wireless broadcast or a wired network transmission; or a storage medium (i.e., a non-transitory storage medium) such as a hard disk, a flash drive, a compact disc, a digital video disc, a blu-ray disc, or other computer-readable medium. In some examples, a network server (not shown) may receive encoded video data from a source device and provide the encoded video data to a destination device, e.g., via network transmission. Similarly, a computing device of a media generation facility (e.g., a disc stamping facility) may receive encoded video data from a source device and produce a disc containing the encoded video data. Thus, in various examples, a computer-readable medium may be understood to include one or more computer-readable media in various forms.
The input interface of the destination device receives information from the computer-readable medium. The information of the computer-readable medium may include syntax information defined by a video encoder, which is also used by a video decoder, including syntax elements that describe characteristics and/or processing of blocks and other coded units, such as groups of pictures (GOPs). The display device displays the decoded video data to a user, and may comprise any of a variety of display devices, such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a plasma display, an Organic Light Emitting Diode (OLED) display, or another type of display device. Various embodiments of the present invention have been described.
Claims (55)
1. A method of processing video data, the method comprising:
obtaining a plurality of frames of video data, wherein the video data includes a plurality of video samples, each of the plurality of video samples including one or more frames of a plurality of frames;
determining that at least one frame of the plurality of frames in a first video sample of the plurality of video samples is corrupted; and
generating a media file comprising a sample entry for each of the plurality of video samples and a sample group description box comprising sample group type entries, a first entry of the sample group type entries comprising a type identifier indicating that a video sample has at least one frame corrupted, wherein generating the media file comprises associating each of the plurality of video samples with a sample group type entry included in the sample group description box, and
wherein in response to determining that the at least one frame in the first video sample is corrupted, associating each video sample in the plurality of video samples with a sample group type entry comprises associating the first video sample with the first entry.
2. The method of claim 1, wherein the video data comprises first data corresponding to the at least one frame of the plurality of frames, and wherein the first data is insufficient to correctly decode the at least one frame.
3. The method of claim 1, wherein the at least one frame is part of an inter-prediction chain, wherein the video data comprises first data corresponding to the inter-prediction chain, and wherein the first data is insufficient to correctly decode the at least one frame.
4. The method of claim 1, wherein the type identifier includes a sample entry type.
5. The method of claim 1, wherein the media file is based on the international organization for standardization (ISO) base media file format (ISOBMFF).
6. The method of claim 1, further comprising:
generating a second media file comprising a list representation of a plurality of segments of the video data, the plurality of segments including a first segment and a second segment, wherein each of the first segment and the second segment comprises one or more frames of the video data, the second segment further comprising one or more missing frames of the video data;
determining that the second segment contains the one or more missing frames;
generating a second indication of the one or more missing frames; and
including the second indication in the second media file.
7. The method of claim 6, wherein the second media file is based on a Media Presentation Description (MPD) format, wherein the list representation includes one or more adaptation sets, each of the one or more adaptation sets including at least one or more of one or more representations or one or more sub-representations comprising the one or more missing frames, wherein the one or more representations or each of the one or more sub-representations is associated with one or more segments, and wherein the second indication includes one or more elements associated with the one or more missing frames included in the one or more representations or the one or more sub-representations, the one or more elements associated with a set of attributes including a timestamp and duration of the second segment.
8. The method of claim 6, wherein the list representation includes information for retrieving the first segment but not the second segment, and wherein the second indication comprises an omission of information for retrieving the second segment.
9. The method of claim 6, wherein the list representation includes a text indicator associated with the second segment, the text indicator indicating that the second segment includes the one or more missing frames, and wherein the second indication includes the text indicator.
10. The method of claim 9, wherein the second media file is based on an HTTP real-time streaming (HLS) playlist format, wherein each segment of the plurality of segments is associated with a Transport Stream (TS) file, wherein the list representation includes a set of tags, and wherein the text indicator is a tag of the set of tags associated with the second segment.
11. The method of claim 9, wherein the second media file is based on a Common Media Application Format (CMAF) and includes a playlist, wherein each segment of the plurality of segments is associated with an ISOBMFF, wherein the list representation includes a set of tags, and wherein the textual indicator is a tag of the set of tags associated with the second segment.
12. A method of processing a media file, the method comprising:
obtaining a media file comprising media content, wherein the media content comprises:
a plurality of video samples, each of the plurality of video samples including one or more frames of a plurality of frames;
a sample entry for each of a plurality of video samples; and
a sample group description box including sample group type entries, a first entry of the sample group type entries including a type identifier indicating that a video sample has at least one frame corrupted, wherein each video sample of the plurality of video samples is associated with a sample group type entry included in the sample group description box;
determining that a first video sample of the plurality of video samples is associated with the first entry;
determining that the plurality of frames in the first video sample includes at least one corrupted frame based on the association of the first video sample with the first entry; and
processing the at least one corrupted frame.
13. The method of claim 12, wherein the video sample includes first data corresponding to the at least one frame of the plurality of frames, and wherein the first data is insufficient to correctly decode the at least one frame.
14. The method of claim 12, wherein the at least one frame is part of an inter-prediction chain, wherein the video sample includes first data corresponding to the inter-prediction chain, and wherein the first data is insufficient to correctly decode the at least one frame.
15. The method of claim 12, wherein the type identifier includes a sample entry type.
16. The method of claim 12, wherein the media file is based on the international organization for standardization (ISO) base media file format (ISOBMFF).
17. The method of claim 12, wherein processing the at least one corrupted frame comprises:
identifying a portion of the media content corresponding to the at least one corrupted frame based on the type identifier; and
skipping processing of the portion of the media content.
18. The method of claim 12, further comprising:
receiving a second media file comprising a list representation of a plurality of segments of the video sample, the plurality of segments including a first segment and a second segment, each of the first segment and the second segment including one or more frames of the video sample, wherein the second segment also includes one or more missing frames of the video sample, and wherein the second media file also includes a second indication indicating that the second segment includes the one or more missing frames of the video sample; and
processing the second media file based on the second indication.
19. The method of claim 18, wherein the second media file is based on a Media Presentation Description (MPD) format, wherein the list representation includes one or more adaptation sets, each of the one or more adaptation sets including at least one or more of one or more representations or one or more sub-representations comprising the one or more missing frames, wherein the one or more representations or each of the one or more sub-representations is associated with one or more segments, and wherein the second indication includes one or more elements associated with the one or more missing frames included in the one or more representations or the one or more sub-representations associated with the second segment, the one or more elements associated with a set of attributes including a timestamp and duration of the second segment.
20. The method of claim 18, wherein the list representation includes information for retrieving the first segment but not the second segment, and wherein the second indication comprises an omission of information for retrieving the second segment.
21. The method of claim 18, wherein the list representation includes a text indicator associated with the second segment, the text indicator indicating that the second segment includes the one or more missing frames, and wherein the second indication includes the text indicator.
22. The method of claim 21, wherein the second media file is based on an HTTP real-time streaming (HLS) playlist format, wherein each segment of the plurality of segments is associated with a Transport Stream (TS) file, wherein the list representation includes a set of tags, and wherein the text indicator is a tag of the set of tags associated with the second segment.
23. The method of claim 21, wherein the second media file is based on a Common Media Application Format (CMAF) and includes a playlist, wherein each segment of the plurality of segments is associated with an ISOBMFF, wherein the list representation includes a set of tags, and wherein the textual indicator is a tag of the set of tags associated with the second segment.
24. The method of claim 18, wherein processing the at least one corrupted frame comprises:
transmitting a request to a streaming server to request a third segment to replace the second segment.
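Claim 24 recites requesting a third segment to replace the one flagged as missing. A rough client-side sketch follows, assuming plain HTTP and a hypothetical `alt/` path under which the streaming server exposes replacement copies; neither the URL scheme nor the fallback behavior is prescribed by the claims.

```python
import urllib.request
from typing import Optional
from urllib.error import URLError

def fetch_replacement_segment(base_url: str, segment_name: str) -> Optional[bytes]:
    """Ask the streaming server for an alternative copy of a segment whose
    playlist entry was flagged as missing.

    The "alt/" path component is a hypothetical convention for illustration;
    a real deployment might instead switch to another representation.
    """
    url = f"{base_url}/alt/{segment_name}"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.read()
    except URLError:
        # No replacement available: fall back to concealment, e.g. repeating
        # the last decoded picture for the duration of the gap.
        return None

# Example (not executed here):
# data = fetch_replacement_segment("https://example.com/stream", "seg-2.ts")
```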
25. An apparatus for processing video data, comprising:
a memory configured to store the video data; and
a processor configured to:
obtain a plurality of frames of the video data, wherein the video data includes a plurality of video samples, each of the plurality of video samples including one or more frames of the plurality of frames;
determine that at least one frame of the plurality of frames in a first video sample of the plurality of video samples is corrupted; and
generate a media file comprising a sample entry for each of the plurality of video samples, and a sample group description box comprising sample group type entries, a first one of the sample group type entries comprising a type identifier indicating that a video sample has at least one corrupted frame,
wherein generating the media file comprises associating each video sample of the plurality of video samples with a sample group type entry contained in the sample group description box, and
wherein, in response to determining that at least one frame of the first video sample is corrupted, associating each video sample of the plurality of video samples with a sample group type entry comprises associating the first video sample with the first entry.
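On the writer side described in claim 25, generating the media file amounts to emitting a sample group description entry whose type identifier marks the corrupted-sample group and associating every sample with an entry index, with corrupted samples mapped to that first entry. The sketch below uses plain in-memory structures and the hypothetical `crpt` grouping type rather than a real muxing library.

```python
from dataclasses import dataclass, field
from typing import List

CORRUPT_GROUPING_TYPE = "crpt"  # hypothetical four-character grouping type

@dataclass
class VideoSample:
    frames: List[bytes]
    has_corrupted_frame: bool

@dataclass
class MediaFileSketch:
    sample_entries: List[str] = field(default_factory=list)  # one sample entry per video sample
    sgpd_entries: List[str] = field(default_factory=list)    # sample group description box entries
    sbgp: List[int] = field(default_factory=list)            # per-sample 1-based entry index; 0 = no group

def write_media_file(samples: List[VideoSample]) -> MediaFileSketch:
    out = MediaFileSketch()
    # First sample group type entry: its type identifier marks samples that
    # contain at least one corrupted frame.
    out.sgpd_entries.append(CORRUPT_GROUPING_TYPE)
    for sample in samples:
        out.sample_entries.append("avc1")  # placeholder sample entry type for illustration
        # Associate every video sample with a sample group type entry; samples
        # with a corrupted frame are associated with the first entry.
        out.sbgp.append(1 if sample.has_corrupted_frame else 0)
    return out
```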
26. The apparatus of claim 25, wherein the video data comprises first data corresponding to the at least one frame of the plurality of frames, and wherein the first data is insufficient to correctly decode the at least one frame.
27. The apparatus of claim 25, wherein the at least one frame is part of an inter-prediction chain, wherein the video data comprises first data corresponding to the inter-prediction chain, and wherein the first data is insufficient to correctly decode the at least one frame.
28. The apparatus of claim 25, wherein the type identifier comprises a sample entry type.
29. The apparatus of claim 25, wherein the media file is based on the International Organization for Standardization (ISO) base media file format (ISOBMFF).
30. The apparatus of claim 25, wherein the processor is further configured to generate a second media file comprising a list representation of a plurality of segments of the video data, the plurality of segments comprising a first segment and a second segment, wherein each of the first segment and the second segment comprises one or more frames of the video data, the second segment further comprising one or more missing frames of the video data;
determine that the second segment contains one or more missing frames;
generate a second indication of the one or more missing frames; and
include the second indication in the second media file.
31. The apparatus of claim 30, wherein the second media file is based on a Media Presentation Description (MPD) format, wherein the list representation includes one or more adaptation sets, each of the one or more adaptation sets including at least one of one or more representations or one or more sub-representations comprising the one or more missing frames, wherein each of the one or more representations or the one or more sub-representations is associated with one or more segments, and wherein the second indication includes one or more elements associated with the one or more missing frames included in the one or more representations or the one or more sub-representations, the one or more elements associated with a set of attributes including a timestamp and a duration of the second segment.
32. The apparatus of claim 30, wherein the list representation comprises information for retrieving the first segment but not the second segment, and wherein the second indication comprises an omission of information for retrieving the second segment.
33. The apparatus of claim 30, wherein the list representation includes a text indicator associated with the second segment, the text indicator indicating that the second segment includes the one or more missing frames, and wherein the second indication includes the text indicator.
34. The apparatus of claim 33, wherein the second media file is based on an HTTP Live Streaming (HLS) playlist format, wherein each segment of the plurality of segments is associated with a Transport Stream (TS) file, wherein the list representation includes a set of tags, and wherein the text indicator is a tag of the set of tags associated with the second segment.
35. The apparatus of claim 33, wherein the second media file is based on a Common Media Application Format (CMAF) and includes a playlist, wherein each segment of the plurality of segments is associated with an ISOBMFF file, wherein the list representation includes a set of tags, and wherein the text indicator is a tag of the set of tags associated with the second segment.
36. The apparatus of claim 25, wherein the apparatus comprises a mobile device having a camera for capturing pictures.
37. An apparatus for processing a media file, comprising:
a memory configured to store the media file; and
a processor configured to:
obtain a media file comprising media content, wherein the media content comprises:
a plurality of video samples, each of the plurality of video samples including one or more frames of a plurality of frames;
a sample entry for each of the plurality of video samples; and
a sample group description box containing sample group type entries, a first one of the sample group type entries containing a type identifier indicating that a video sample has at least one corrupted frame, wherein each video sample of the plurality of video samples is associated with a sample group type entry contained in the sample group description box;
determine that a first video sample of the plurality of video samples is associated with the first entry;
determine that the plurality of frames in the first video sample includes at least one corrupted frame based on the association of the first video sample with the first entry; and
process the at least one corrupted frame.
38. The apparatus of claim 37, wherein the video sample comprises first data corresponding to the at least one frame of the plurality of frames, and wherein the first data is insufficient to correctly decode the at least one frame.
39. The apparatus of claim 37, wherein the at least one frame is part of an inter-prediction chain, wherein the video sample includes first data corresponding to the inter-prediction chain, and wherein the first data is insufficient to correctly decode the at least one frame.
40. The apparatus of claim 37, wherein the type identifier comprises a sample entry type.
41. The apparatus of claim 37, wherein the media file is based on the International Organization for Standardization (ISO) base media file format (ISOBMFF).
42. The apparatus of claim 37, wherein the processor is further configured to:
identify a portion of the media content corresponding to the at least one corrupted frame based on the type identifier; and
skip processing of the portion of the media content.
43. The apparatus of claim 37, wherein the processor is further configured to receive a second media file comprising a list representation of a plurality of segments of the video sample, the plurality of segments including a first segment and a second segment, each of the first segment and the second segment including one or more frames of the video sample, wherein the second segment also includes one or more missing frames of the video sample, wherein the indication is a first indication, and wherein the second media file also includes a second indication to indicate that the second segment includes the one or more missing frames of the video sample; and
process the second media file based on the second indication.
44. The apparatus of claim 43, wherein the second media file is based on a Media Presentation Description (MPD) format, wherein the list representation includes one or more adaptation sets, each of the one or more adaptation sets including at least one of one or more representations or one or more sub-representations comprising the one or more missing frames, wherein each of the one or more representations or the one or more sub-representations is associated with one or more segments, and wherein the second indication includes one or more elements associated with the one or more missing frames included in the one or more representations or the one or more sub-representations associated with the second segment, the one or more elements associated with a set of attributes including a timestamp and a duration of the second segment.
45. The apparatus of claim 43, wherein the list representation includes information for retrieving the first segment but not the second segment, and wherein the second indication includes an omission of information for retrieving the second segment.
46. The apparatus of claim 43, wherein the list representation includes a text indicator associated with the second segment, the text indicator indicating that the second segment includes the one or more missing frames, and wherein the second indication includes the text indicator.
47. The apparatus of claim 46, wherein the second media file is based on an HTTP Live Streaming (HLS) playlist format, wherein each segment of the plurality of segments is associated with a Transport Stream (TS) file, wherein the list representation includes a set of tags, and wherein the text indicator is a tag of the set of tags associated with the second segment.
48. The apparatus of claim 46, wherein the second media file is based on a Common Media Application Format (CMAF) and includes a playlist, wherein each segment of the plurality of segments is associated with an ISOBMFF file, wherein the list representation includes a set of tags, and wherein the text indicator is a tag of the set of tags associated with the second segment.
49. The apparatus of claim 43, wherein the processor is further configured to:
transmit a request to a streaming server to request a third segment to replace the second segment.
50. The apparatus of claim 37, further comprising:
a display for displaying one or more of the plurality of frames.
51. The apparatus of claim 37, wherein the apparatus comprises a mobile device having a camera for capturing pictures.
52. An apparatus for processing video data, comprising:
means for obtaining a plurality of frames of video data, wherein the video data includes a plurality of video samples, each of the plurality of video samples including one or more frames of a plurality of frames;
means for determining that at least one frame of the plurality of frames in a first video sample of the plurality of video samples is corrupted;
and
means for generating a media file comprising a sample entry for each of the plurality of video samples, and a sample group description box comprising sample group type entries, a first one of the sample group type entries comprising a type identifier indicating that a video sample has at least one corrupted frame,
wherein generating the media file comprises associating each video sample of the plurality of video samples with a sample group type entry contained in the sample group description box, and
wherein, in response to determining that at least one frame in the first video sample is corrupted, associating each video sample in the plurality of video samples with a sample group type entry comprises associating the first video sample with the first entry.
53. An apparatus for processing a media file, comprising:
means for obtaining a media file comprising media content, wherein the media content comprises:
a plurality of video samples, each of the plurality of video samples including one or more frames of a plurality of frames;
a sample entry for each of the plurality of video samples; and
a sample group description box including sample group type entries, a first one of the sample group type entries including a type identifier indicating that a video sample has at least one corrupted frame, wherein each video sample of the plurality of video samples is associated with a sample group type entry included in the sample group description box;
means for determining that a first video sample of the plurality of video samples is associated with the first entry;
means for determining, based on the association of the first video sample with the first entry, that the plurality of frames in the first video sample includes at least one corrupted frame; and
means for processing the at least one corrupted frame.
54. A non-transitory computer-readable medium having instructions stored thereon, which when executed by one or more processors, cause the one or more processors to:
obtain a plurality of frames of video data, wherein the video data includes a plurality of video samples, each of the plurality of video samples including one or more frames of a plurality of frames;
determine that at least one frame of the plurality of frames in a first video sample of the plurality of video samples is corrupted;
and
generate a media file comprising a sample entry for each of the plurality of video samples and a sample group description box comprising sample group type entries, a first one of the sample group type entries comprising a type identifier indicating that a video sample has at least one corrupted frame, wherein generating the media file comprises associating each video sample of the plurality of video samples with a sample group type entry included in the sample group description box, and
wherein, in response to determining that at least one frame in the first video sample is corrupted, associating each video sample in the plurality of video samples with a sample group type entry comprises associating the first video sample with the first entry.
55. A non-transitory computer-readable medium having instructions stored thereon, which when executed by one or more processors, cause the one or more processors to:
obtain a media file comprising media content, wherein the media content comprises: a plurality of video samples, each of the plurality of video samples including one or more frames of a plurality of frames;
a sample entry for each of the plurality of video samples; and
a sample group description box including sample group type entries, a first one of the sample group type entries including a type identifier indicating that a video sample has at least one corrupted frame, wherein each video sample of the plurality of video samples is associated with a sample group type entry included in the sample group description box;
determine that a first video sample of the plurality of video samples is associated with the first entry;
determine that the plurality of frames in the first video sample includes at least one corrupted frame based on the association of the first video sample with the first entry; and
process the at least one corrupted frame.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US62/406,349 | 2016-10-10 | | |
| US15/708,914 | 2017-09-19 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK40001868A (en) | 2020-03-13 |
| HK40001868B (en) | 2023-02-24 |