
WO2024226863A1 - Video watermark message scheduler - Google Patents


Info

Publication number
WO2024226863A1
Authority
WO
WIPO (PCT)
Prior art keywords
message
messages
watermark
video
vp1
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/026355
Other languages
English (en)
Inventor
Rade Petrovic
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Verance Corp
Original Assignee
Verance Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Verance Corp filed Critical Verance Corp
Publication of WO2024226863A1 publication Critical patent/WO2024226863A1/fr
Anticipated expiration legal-status Critical
Pending legal-status Critical Current

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835 Generation of protective data, e.g. certificates
    • H04N21/8358 Generation of protective data, e.g. certificates involving watermark
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/2362 Generation or processing of Service Information [SI]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238 Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2389 Multiplex stream processing, e.g. multiplex stream encrypting
    • H04N21/23892 Multiplex stream processing, e.g. multiplex stream encrypting involving embedding information at multiplex stream level, e.g. embedding a watermark at packet level

Definitions

  • the present invention generally relates to the use of watermarks to facilitate recognition and utilization of multimedia content, and more particularly to a video watermark message scheduler.
  • a multimedia content often consists of a series of related images, which, when shown in succession, can impart an impression of motion, together with accompanying sounds, if any.
  • Such a content can be accessed from various sources including local storage such as hard drives or optical disks, remote storage such as Internet sites or cable/ satellite distribution servers, over- the-air broadcast channels, etc.
  • such a multimedia content, or portions thereof may contain only one type of content, including, but not limited to, a still image, a video sequence and an audio clip, while in other scenarios, the multimedia content, or portions thereof, may contain two or more types of content such as audiovisual content and a wide range of metadata.
  • the metadata can, for example include one or more of the following: channel identification, program identification, content and content segment identification, content size, the date at which the content was produced or edited, the owner and producer identification of the content, timecode identification, copyright information, closed captions, and locations such as URLs where advertising content, software applications, interactive services content, and signaling that enables various services, and other relevant data that can be accessed.
  • metadata is the information about the content essence (e.g., audio and/or video content) and associated services (e.g., interactive services, targeted advertising insertion).
  • the metadata can enable content management, annotation, packaging, and search throughout content production and distribution value chain. Since the introduction of digital TVs, metadata has been introduced to enable digital interactive features and services. Various standardization efforts (such as MPEG-7, MPEG-21 , TV- Anytime, DVB-SI, ATSC) strive to produce metadata standards with predefined data structures and transport methods for describing essence to support interoperability and unified services.
  • While metadata may be useful in some applications, especially for enabling broadcast interactive services, it must be interleaved, prepended or appended to a multimedia content, which occupies additional bandwidth and, more importantly, can be lost when content is transformed into a different format (such as digital to analog conversion or transcoding into a different file format), processed (such as transcoding), and/or transmitted through a communication protocol/interface (such as HDMI or adaptive streaming).
  • FIG. 1 illustrates an exemplary architecture for direct watermark signaling that can be used for implementing various disclosed embodiments
  • FIG. 2 is a table showing the Bit Stream Syntax of the Watermark Payload for implementing various disclosed embodiments.
  • FIG. 3 is a table showing the Bit Stream Syntax for the Watermark message Block for implementing various disclosed embodiments.
  • FIG. 4 is a table showing the wtn_message_id Encoding for implementing various disclosed embodiments.
  • FIG. 5 is a table showing the Bit Stream Syntax of the Reassembled Watermark Message for implementing various disclosed embodiments.
  • FIG. 6 is a table showing Bit Stream Syntax for the vp1_message() for implementing various disclosed embodiments.
  • FIG. 7 illustrates the temporal structure of VP1 message groups carrying vp1_message()s for implementing various disclosed embodiments.
  • FIG. 8 is a table showing Bit Stream Syntax for the extended vp1_message() for implementing various disclosed embodiments.
  • FIG. 9 is a table showing Time Offset Sequences for implementing various disclosed embodiments.
  • FIG. 10 is a table showing the Bit Stream Syntax for the Dynamic Event Message for implementing various disclosed embodiments.
  • FIG. 11 illustrates the temporal structure of VP1 Message Groups carrying extended_vp1_message()s for implementing various disclosed embodiments.
  • FIG. 12 illustrates the temporal structure of VP1 Message Groups carrying both vp1_message()s and extended_vp1_message()s for implementing various disclosed embodiments.
  • FIG. 13 is a table showing the Bit Stream Syntax for the Dynamic Event Message for implementing various disclosed embodiments.
  • FIG. 14 is a table showing the delivery_protocol_type field Encoding for implementing various disclosed embodiments.
  • FIG. 15 illustrates message segmentation and interleaving (IX System) for implementing various disclosed embodiments.
  • FIG. 16 is a table illustrating how pod length is dependent on frame rate for implementing various disclosed embodiments.
  • FIG. 17 is a table illustrating the scheduler operation by showing a sequence of nine watermark marks that follow a sequence of events comprising the addition or removal of watermark messages.
  • FIG. 18 illustrates a block diagram of a device that can be used for implementing various disclosed embodiments.
  • the method includes generating a plurality of watermark message groups, each comprising a set of video frames of video content to be embedded with watermark payloads.
  • a fixed number of pods are generated within each watermark message group, each pod comprising at least one VP1 message followed by one or more non-VP1 messages.
  • a plurality of message groups are generated, each including an integer number of pods, wherein all VP1 messages within each message group have the same interval code value.
  • the video content is embedded with the watermark message groups, such that non-VP1 messages are interleaved with VP1 messages.
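As a minimal sketch of this structure (illustrative Python; the class and field names are invented here, not taken from the patent or A/336):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Pod:
    # Each pod leads with a VP1 message, followed by non-VP1 messages.
    vp1_message: str
    other_messages: List[str] = field(default_factory=list)

@dataclass
class MessageGroup:
    # Every VP1 message in a group shares the same interval code.
    interval_code: int
    pods: List[Pod] = field(default_factory=list)

def build_message_group(interval_code: int, pods_per_group: int,
                        extras: Dict[int, List[str]]) -> MessageGroup:
    """Build one message group with a fixed number of pods; 'extras' maps a
    pod index to the non-VP1 messages interleaved into that pod."""
    group = MessageGroup(interval_code=interval_code)
    for i in range(pods_per_group):
        group.pods.append(Pod(vp1_message="vp1",
                              other_messages=list(extras.get(i, []))))
    return group

group = build_message_group(0x12, pods_per_group=5, extras={0: ["DEM fragment 0"]})
assert len(group.pods) == 5
assert all(p.vp1_message == "vp1" for p in group.pods)
assert group.pods[0].other_messages == ["DEM fragment 0"]
```

Each group carries an integer number of pods, and every pod begins with a VP1 message, matching the claim language above.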
  • exemplary is used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word exemplary is intended to present concepts in a concrete manner.
  • Signaling and supplemental content delivered over-the-air may not reach an ATSC 3.0 device receiving audio and video from a set-top box via an HDMI cable.
  • This document describes methods for recovering ATSC 3.0 supplemental content in this or similar scenarios.
  • Content recovery can be achieved by delivering data to the ATSC 3.0 device via the HDMI connection, such as within an uncompressed audio or video signal.
  • data minimally includes a means of identifying the content and may include additional information depending on the capacity of the data delivery method.
  • the ATSC 3.0 receiver may then use this data to connect to a remote server via broadband that interprets the data and delivers the supplemental content to the receiver via broadband.
  • Watermarking
  • One such method of delivering data is via watermarks that can be extracted from uncompressed audio or video in the receiver.
  • the methods of inserting watermarks for ATSC 3.0 content recovery are normatively specified in ATSC A/335 Video Watermark Emission specification and ATSC A/334 Audio Watermark Emission specification.
  • direct watermark signaling can be used when the content contains a full set of ATSC video watermarks. These watermarks contain instructions to access Signaling Servers, which provide data (such as MPDs and AEIs) needed to access and present the supplementary content.
  • An informative description of example receiver behavior when this mode is employed is provided in Annex C of the A/336 Standard.
  • the primary difference between the direct and indirect watermark signaling processes is that the indirect process entails retrieving a Recovery File (as defined in Section 5.4.3 of the A/336 Standard) from a Recovery File Server, while the direct signaling process does not (because the full ATSC video watermark contains signaling data associated with the current service that enables the receiver to access the Signaling Server directly).
  • an operational VP1 DNS authoritative name server must exist.
  • VP1 Watermark Segments are described in the VP1 Section below.
  • the presence of discontinuities in VP1 Watermark Segments can be used by the receiver, in addition to changes in the data contained in watermark payloads or messages, to identify service changes (e.g. tune in, channel change, component change).
  • the emission format for video watermarks shall conform to the ATSC A/335 Video Watermark Emission specification. As described therein, the "1X" emission format delivers 30 bytes of data per video frame, while the "2X" system delivers 60 bytes per frame.
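Given those per-frame capacities, the raw watermark data rate scales with the frame rate; a quick sketch (the frame rates are chosen for illustration):

```python
# Raw capacity of the A/335 emission systems, per the text above:
# 30 bytes per frame for the 1X system, 60 bytes per frame for 2X.
BYTES_PER_FRAME = {"1X": 30, "2X": 60}

def bytes_per_second(system: str, frame_rate: int) -> int:
    """Raw watermark capacity in bytes per second at a given frame rate."""
    return BYTES_PER_FRAME[system] * frame_rate

assert bytes_per_second("1X", 30) == 900     # 1X at 30 fps
assert bytes_per_second("2X", 60) == 3600    # 2X at 60 fps
```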
  • the VP1 Message (described below), when used, is repeated across multiple video frames. In instances where a watermark payload is not recovered from an individual video frame, a receiver may attempt to recover a VP1 Message by combining luma values (e.g. via averaging) from two or more successive video frames.
  • the payload format for the video watermark is the same in both the 1X and 2X systems.
  • the run-in pattern (as specified in Section 5.1 of A/335) is followed by one or more instances of a message block.
  • the watermark payload shall conform to the syntax given in FIG. 2.
  • This 16-bit value is set to 0xEB52 to identify that the video line includes a watermark of the format specified herein.
  • wm_message_block() - A full wm_message() or a fragment of a larger wm_message(), formatted per the Watermark Message Syntax Section below.
  • the re-assembly of a wm_message() from multiple wm_message_block() instances shall be as described in FIG. 5.
  • Watermark message blocks shall follow the syntax given in FIG. 3 and the semantics that follow.
  • wm_message_id - This 8-bit value shall uniquely identify the syntax and semantics of the data bytes carried in the message block, coded according to FIG. 4.
  • the encodings of the watermark message types defined in the present standard may be found in the Content ID Message below and in Section 5.1.11 of A/336. Note that additional watermark message types might exist as indicated in the ATSC Code Point Registry per Section 5.1.1 of A/336.
  • wm_message_block_length - This 8-bit value shall specify the number of remaining bytes in the wm_message_block() that immediately follows this field up to and including the CRC_32 field.
  • wm_message_version - This 4-bit value shall be incremented if and only if anything in the watermark message changes, with wrap-around to 0 after the value reaches 15.
  • the watermark processor in the receiving device is expected to use wm_message_version to discard duplicates.
  • the video signal may include repeated instances of the same watermark message to improve reliability of delivery.
  • fragment_number - This 2-bit or 8-bit value shall specify the number of the current message fragment minus one.
  • When (wm_message_id & 0x80) == 0, i.e. bit 7 is value '0', then fragment_number shall be 2 bits in length.
  • When (wm_message_id & 0x80) != 0, i.e. bit 7 is value '1', then fragment_number shall be 8 bits in length.
  • A value of 0 in fragment_number indicates the wm_message_block() carries the first (or only) fragment of a message.
  • A fragment_number value of 1 indicates the wm_message_block() carries the second fragment of a message.
  • The value of fragment_number shall be less than or equal to the value of last_fragment.
  • last_fragment - This 2-bit or 8-bit value shall specify the fragment number of the last fragment used to deliver the complete watermark message.
  • When (wm_message_id & 0x80) == 0, i.e. bit 7 is value '0', then last_fragment shall be 2 bits in length; when bit 7 is value '1', last_fragment shall be 8 bits in length.
  • A value of 1 in last_fragment indicates the wm_message() will be delivered in two parts, a value of 2 indicates the watermark message will be delivered in three parts, and a value of 3 indicates it will be delivered in four parts, etc.
  • The pair of values fragment_number and last_fragment may be considered to signal "part M of N".
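The field-width rule and the "part M of N" reading can be sketched as follows (the helper names are hypothetical; only the bit-7 test and the minus-one encoding come from the text):

```python
def fragment_field_width(wm_message_id: int) -> int:
    # Bit 7 of wm_message_id selects the form: 0 -> short form (2-bit
    # fragment_number/last_fragment fields), 1 -> long form (8-bit fields).
    return 2 if (wm_message_id & 0x80) == 0 else 8

def part_m_of_n(fragment_number: int, last_fragment: int):
    # fragment_number counts from 0, so the pair signals "part M of N".
    assert fragment_number <= last_fragment
    return fragment_number + 1, last_fragment + 1

assert fragment_field_width(0x05) == 2     # bit 7 clear: short form
assert fragment_field_width(0x85) == 8     # bit 7 set: long form
assert part_m_of_n(1, 3) == (2, 4)         # second of four fragments
```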
  • wm_message_bytes() - When the value of last_fragment is 0, wm_message_bytes() shall be a complete instance of the watermark message identified by the value of wm_message_id. When the value of last_fragment is non-zero, wm_message_bytes() shall be a fragment of that watermark message.
  • the concatenation of all instances of wm_message_block() with a given wm_message_id and message version number shall result in the complete watermark message associated with that wm_message_id.
  • message_CRC_32 - When a message is sent in two or more fragments (e.g. last_fragment > 0) a 32-bit CRC covering the complete message (before segmentation) shall be provided in the last fragment of a fragmented message.
  • the message_CRC_32 field shall not be present for non-fragmented messages (e.g. when the value of last_fragment is 0) or in any fragment other than the last (e.g. when fragment_number != last_fragment).
  • the message_CRC_32, when present, shall contain the CRC value that gives a zero output of the registers in the decoder defined in ISO/IEC 13818-1, Annex A after processing the entire reassembled message payload formed by concatenating the wm_message_id and wm_message_bytes(i) as specified in FIG. 5.
  • CRC_32 - This 32-bit field shall contain the CRC value that gives a zero output of the registers in the decoder defined in ISO/IEC 13818-1 Annex A after processing the entire message block.
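A sketch of that CRC (the MPEG-2 systems CRC-32: polynomial 0x04C11DB7, initial register value 0xFFFFFFFF, MSB-first, no reflection, no final XOR). Appending a message's CRC and reprocessing drives the registers to zero, which is exactly the "zero output" check described above; the message bytes below are illustrative.

```python
def crc32_mpeg2(data: bytes) -> int:
    """Bitwise CRC-32 as used by MPEG-2 systems (ISO/IEC 13818-1 Annex A)."""
    reg = 0xFFFFFFFF
    for byte in data:
        reg ^= byte << 24
        for _ in range(8):
            if reg & 0x80000000:
                reg = ((reg << 1) ^ 0x04C11DB7) & 0xFFFFFFFF
            else:
                reg = (reg << 1) & 0xFFFFFFFF
    return reg

# Standard check value for this CRC variant:
assert crc32_mpeg2(b"123456789") == 0x0376E6E7

msg = b"\x35\x01\x02\x03"                  # illustrative message bytes
crc = crc32_mpeg2(msg)
# A decoder that processes the whole block, CRC included, ends at zero:
assert crc32_mpeg2(msg + crc.to_bytes(4, "big")) == 0
```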
  • the wm_message_block() can deliver fragments of watermark messages that are intended to be reassembled before being processed further.
  • the wm_message() data structure specified in FIG. 5 below represents the reassembled fragments.
  • the definitions of wm_message_id, last_fragment, and message_CRC_32 shall be as specified above for wm_message_block().
  • the wm_message_bytes(i) field shall represent the wm_message_bytes() contained in the i-th fragment of the message (counting from zero).
  • short-form message shall mean a watermark message using the encoding format with 2-bit fragment_number and last_fragment fields.
  • long-form message shall mean a watermark message using the encoding format with 8-bit fragment_number and last_fragment fields.
  • each successive wm_message_block() of the same form shall have the same value in wm_message_id until the last fragment is sent (e.g. the value of fragment_number equals the value of last_fragment), with no intervening messages of the same form with other values of wm_message_id.
  • Fragments of any given message shall be sent in order.
  • the fragment_number values for any given message shall start at 0 and increase monotonically until the last fragment is sent.
  • the fragment_number value in the last fragment shall be equal to the value of last_fragment.
  • Any given wm_message_block() (as indicated by the values of wm_message_id plus wm_message_version) may be sent multiple times. Receivers are expected to discard duplicates.
  • instances of message blocks comprising a long-form message may be interleaved with instances of message blocks comprising a short-form message. At most, fragments of only one short-form and one long-form message may be transmitted at any given time.
  • a receiver can thus implement one buffer for short-form messages and a single separate buffer for long-form messages.
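A sketch of the implied receiver behavior: one reassembly buffer per form, with fragments of one short-form and one long-form message free to interleave. The tuple layout and field names here are illustrative, not from A/336.

```python
def reassemble(blocks):
    """blocks: iterable of (wm_message_id, fragment_number, last_fragment,
    payload). Keeps one buffer per form, as the text above suggests."""
    buffers = {"short": [], "long": []}
    completed = []
    for msg_id, frag, last, payload in blocks:
        form = "long" if msg_id & 0x80 else "short"   # bit 7 selects the form
        buf = buffers[form]
        # Fragments of a given message arrive in order, starting at 0.
        assert frag == len(buf), "out-of-order fragment"
        buf.append(payload)
        if frag == last:                              # last fragment: message done
            completed.append((msg_id, b"".join(buf)))
            buffers[form] = []
    return completed

blocks = [
    (0x05, 0, 1, b"AB"),   # short-form, fragment 1 of 2
    (0x85, 0, 2, b"xx"),   # long-form, fragment 1 of 3 (interleaved)
    (0x05, 1, 1, b"CD"),   # short-form, fragment 2 of 2: complete
    (0x85, 1, 2, b"yy"),
    (0x85, 2, 2, b"zz"),   # long-form complete
]
assert reassemble(blocks) == [(0x05, b"ABCD"), (0x85, b"xxyyzz")]
```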
  • FIG. 15 illustrates segmentation and reassembly for four different example messages, each of different lengths. Note: FIG. 15 is a black and white version of Figure 5.1 of the A/336 standard, which may be preferred due to its color coding. Note the presence of the 32-bit message_CRC_32 in the last fragment of the segmented messages.
  • the first three messages in the examples in FIG. 15 are the short-form variety (i.e., they can be delivered in four fragments or fewer).
  • the fourth message has an 88-byte payload and thus uses the long-form syntax (which allows up to 256 fragments). Note that per the rules given in the Multiplexing and Processing Rules Section above, fragments of the long-form message are interleaved with fragments of the short-form messages.
  • the VP1 Message enables the recovery process (specified in Section 5.3 of A/336) to be employed in conjunction with the Video Watermark.
  • a VP1 Video Watermark Segment shall consist of video content carrying a series of successive VP1 Message Groups whose initial video frames are nominally at 1.5 second intervals such that if the initial video frame of the first VP1 Message Group in a VP1 Video Watermark Segment occurs at time T seconds in the presentation, the initial video frame of the nth successive VP1 Message Group in the VP1 Video Watermark Segment occurs within ±0.5 video frames of time T + 1.5n seconds.
  • All VP1 Message Groups in a VP1 Video Watermark Segment shall have the same Server Code and successive VP1 Message Groups in a VP1 Video Watermark Segment shall have sequentially incrementing Interval Codes.
  • the query_flag value in the VP1 payload may change between successive VP1 Message Groups in a VP1 Video Watermark Segment.
  • the VP1 Message Groups of the video watermark shall be time-aligned in the presentation such that the initial video frame in every VP1 Message Group occurs within ±0.5 video frames of the corresponding starting Cell boundary in the VP1 audio watermark.
  • the VP1 Message contains header and parity bits in addition to payload bits and because it is always repeated in multiple video frames, it may be recoverable from content for which the run-in sequence is not recoverable or where there are bit errors that cause the CRC-32 check to fail. Receivers may attempt to recover the VP1 Message in instances where run-in sequence recovery or CRC-32 check is unsuccessful.
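The timing constraint above can be checked numerically; a minimal sketch, taking the tolerance as half a frame period at the given frame rate:

```python
def group_start_ok(t_first: float, n: int, observed: float, frame_rate: int) -> bool:
    """True if the observed start time (seconds) of the nth VP1 Message Group
    lies within +/-0.5 video frames of the nominal time T + 1.5n."""
    nominal = t_first + 1.5 * n
    tolerance = 0.5 / frame_rate        # half a frame period, in seconds
    return abs(observed - nominal) <= tolerance

assert group_start_ok(10.0, 2, 13.0, 30)          # exactly on the nominal time
assert group_start_ok(10.0, 2, 13.01, 30)         # within half a frame at 30 fps
assert not group_start_ok(10.0, 2, 13.1, 30)      # a tenth of a second late: too far
```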
  • bit stream syntax of the VP1 Message shall be as shown in FIG. 6.
  • header - This 32-bit field shall consist of a header element as specified in ATSC A/334 Audio Watermark Emission.
  • the vp1_message() shall be the first (i.e. left-most) wm_message() present in a video frame.
  • vp1_message()s carrying identical data shall be repeated for all successive video frames across at least a 1/6 second duration of content.
  • the value of wm_message_version does not increment between vp1_message()s within a VP1 Message Group.
  • FIG. 7 illustrates the temporal structure of VP1 Message Groups carrying a vp1_message() in a VP1 Video Watermark Segment, with time alignment in the presentation to a VP1 Audio Watermark Segment. Note: FIG. 7 is a black and white version of Figure 5.2 of the A/336 standard, which may be preferred due to its color coding.
  • the section between 1.0000 and 1.1667 on the timeline is the VP1 Message Group, which carries the same VP1 Payload as does the VP1 cell() in the audio signal.
  • the VP1 Message Group spacing is exactly 1.5 seconds and the audio signal VP1 cell() is offset from the initial video frame of each VP1 Message Group by ½ of a video frame period.
  • bit stream syntax of the extended_vp1_message() shall be as shown in FIG. 8 (was Table 5.12).
  • the extended_vp1_message() is sometimes called the eVP1 message. It is also assumed that the eVP1 message is just a variant of the VP1 message, and all statements related to VP1 messages are applicable to eVP1 messages as well.
  • time_offset - This 6-bit field shall convey the time offset of the video frame in which this extended_vp1_message() is carried relative to the first frame in its VP1 Message Group, in units of 1/30 of a second. It shall convey a value in the range 0 through 44, inclusive.
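A sketch of the time_offset encoding, assuming the field value is the frame's offset from the group start rounded to the nearest 1/30-second unit (the rounding rule is an assumption; the range 0..44 is from the text):

```python
def time_offset_units(frame_index: int, frame_rate: int) -> int:
    """time_offset field value for the frame at frame_index within its
    VP1 Message Group, in 1/30-second units (range 0..44 per the text)."""
    seconds = frame_index / frame_rate
    units = round(seconds * 30)
    assert 0 <= units <= 44, "time_offset out of range"
    return units

assert time_offset_units(0, 30) == 0
assert time_offset_units(15, 30) == 15    # at 30 fps, one frame = one unit
assert time_offset_units(30, 60) == 15    # at 60 fps, two frames = one unit
assert time_offset_units(44, 30) == 44    # last unit of the 1.5 s group
```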
  • header - This 32-bit field shall consist of a header element with value 0xAE0AB9E4 as specified in ATSC A/334 Audio Watermark Emission.
  • time_offset_parity_msb - This 2-bit field shall convey the 2 most-significant bits of the Time Offset Parity Sequence associated with the value of the time_offset field, as specified in FIG. 9.
  • time_offset_parity_lsb - This 32-bit field shall convey the 32 least-significant bits of the Time Offset Parity Sequence associated with the value of the time_offset field, as specified in FIG. 9.
  • alternate_packet() - This 127-bit field shall be as given by Table 5.23 of A/336 and the parameter descriptions that follow; however, the alternate_parity_whitening_sequence and alternate_payload_whitening_sequence given in FIG. 10 shall be employed in place of the parity whitening sequence and payload whitening sequence given in Table 5.24 of A/336.
  • the extended_vp1_message() shall be the first (i.e., left-most) wm_message() present in a video frame.
  • a Time Offset Parity Sequence, when present, enables receivers to error-correct the time_offset field.
  • the parity sequences are selected to maximize the Hamming distance among valid emission sequences in the 40 most-significant bits of an extended_vp1_message().
  • the minimum Hamming distance among the Time Offset Parity Sequences is 15. Inclusion of Time Offset Parity bits in any extended_vp1_message() is optional.
  • Receivers can determine whether an extended_vp1_message() instance includes a Time Offset Parity Sequence by comparing the values in its 9th through 40th most-significant bits to the fixed header sequence and the time_offset_parity_lsb sequences.
  • the minimum Hamming distance between the header sequence and any time_offset_parity_lsb sequence is 13.
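That detection test amounts to minimum-Hamming-distance classification over 32-bit words; a sketch. The header value 0xAE0AB9E4 is taken from the text above, but the parity candidate values below are placeholders, since the real sequences are tabulated in FIG. 9.

```python
def hamming32(a: int, b: int) -> int:
    """Hamming distance between two 32-bit words."""
    return bin((a ^ b) & 0xFFFFFFFF).count("1")

HEADER = 0xAE0AB9E4                       # fixed header sequence (from the text)
CANDIDATES = {
    "header":   HEADER,
    "parity_0": 0x12345678,               # placeholder parity sequence
    "parity_1": 0x9ABCDEF0,               # placeholder parity sequence
}

def classify(word: int) -> str:
    """Pick the nearest candidate; large minimum distances between the real
    sequences make this robust to a few bit errors."""
    return min(CANDIDATES, key=lambda k: hamming32(word, CANDIDATES[k]))

assert classify(0xAE0AB9E4) == "header"
assert classify(0xAE0AB9E5) == "header"   # one flipped bit, still nearest
assert classify(0x12345678) == "parity_0"
```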
  • an extended_vp1_message() shall be conveyed in at least those video frames whose sampling instant is within a right half-open time interval starting at a time that is an integer multiple of 0.3 seconds following the initial frame of the VP1 Message Group and ending 1/30 of a second later, for integers 0 through 4, unless a vp1_message() is also present in the VP1 Message Group, in which case for integers 1 through 4.
  • both message types shall be used in every VP1 Message Group of the VP1 Video Watermark Segment.
  • VP1 and eVP1 messages carry information about media time in two fields: the Interval Code (IC) field and the Time Offset (TO) field.
  • This information enables receivers to monitor progress of media time during rendering of video content in order to detect changes in rendering speed, jumps within the content, or state changes in the content stream such as a channel change, new content selection, or end of segment being reached.
  • a Watermark Segment End event is only output based on a failure to detect a continuous watermark code; it is not output when a discontinuous watermark code is detected (in this case, a Watermark Segment Start event is output).
  • a Watermark Segment End event causes a transition from the Marked Content State to an Unmarked Content State.
  • FIG. 11 illustrates the temporal structure of VP1 Message Groups carrying an extended_vp1_message() in a VP1 Video Watermark Segment, with time alignment to a VP1 Audio Watermark Segment.
  • FIG. 11 is a black and white version of Figure 5.3 of the A/336 standard, which may be preferred due to its color coding.
  • the sections labeled "VP1 Message Group" (shaded yellow in the A/336 Standard) carry the same VP1 Payload as does the VP1 cell() in the audio signal.
  • the VP1 Message Group spacing is exactly 1.5 seconds and the audio signal VP1 cell() is offset from the initial video frame of each VP1 Message Group by ½ of a video frame period.
  • Each video frame in the VP1 Message Group carries the same VP1 Payload as the aligned VP1 cell() and has a time_offset value equal to its offset from the first frame in the VP1 Message Group, since the frame rate is 30 fps.
  • FIG. 12 illustrates the temporal structure of VP1 Message Groups carrying both vp1_message() and extended_vp1_message() in a VP1 Video Watermark Segment, with time alignment to a VP1 Audio Watermark Segment.
  • FIG. 12 is a black and white version of Figure 5.4 of the A/336 standard, which may be preferred due to its color coding.
  • the sections labeled "VP1 Message Group" carry the same VP1 Payload as does the VP1 cell() in the audio signal.
  • the dynamic_event_message() supports delivery of Dynamic Events in video watermarks.
  • the syntax and bitstream semantics of the Dynamic Event Message shall be as given in FIG. 13 (was Table 5.15) and the parameter descriptions that follow.
  • delivery_protocol_type - This 4-bit field shall signify the delivery protocol. FIG. 14 describes the encoding of this field. Note that additional encodings might exist as indicated in the ATSC Code Point Registry.
  • value_strlen - This 8-bit unsigned integer field shall give the length of the value_string field in bytes.
  • value_string - This string shall give the value for the Event Stream of the Event.
  • timescale - This 32-bit unsigned integer shall give the time scale for the Event.
  • presentation_time - This 32-bit unsigned integer shall indicate the presentation time of the Event on the Recovery Media Timeline, encoded as the least-significant 32 bits of the count of the number of seconds since January 1, 1970 00:00:00, International Atomic Time (TAI).
  • presentation_time_ms - This 10-bit unsigned integer in the range 0 to 999 shall indicate the milliseconds offset from the time indicated in presentation_time, such that the formula presentation_time + (presentation_time_ms/1000) yields the actual presentation time to the nearest 1 millisecond.
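The stated formula, as a one-line sketch (the example epoch value is illustrative):

```python
def actual_presentation_time(presentation_time: int, presentation_time_ms: int) -> float:
    """presentation_time is the low 32 bits of seconds since the TAI epoch;
    presentation_time_ms adds a 0..999 millisecond offset."""
    assert 0 <= presentation_time_ms <= 999
    return presentation_time + presentation_time_ms / 1000

assert actual_presentation_time(1_700_000_000 & 0xFFFFFFFF, 250) == 1_700_000_000.25
```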
  • duration - This 32-bit unsigned integer shall give the duration of the Event, in the time scale of the Event.
  • data - This field shall contain data needed for responding to the event, if any.
  • the format and use of the data is determined by the Event Stream specification, which will be defined in the standard for any standards-based Event, and which will be known to any application registering to receive the Event for any Event targeted to applications.
  • reserved1_field_length - This 8-bit unsigned integer field shall give the length in bytes of the reserved1 field, which immediately follows this field.
  • the sum of the values of the scheme_id_uri_length, value_strlen, and data_length fields shall be less than or equal to 58 for the 1X video watermark emission format (1X System) and shall be less than or equal to 178 for the 2X video watermark emission format (2X System).
  • the sum of the values of the scheme_id_uri_length, value_strlen, and data_length fields shall be less than or equal to 4,838 for the 1X video watermark emission format (1X System) and shall be less than or equal to 12,518 for the 2X video watermark emission format (2X System).
  • reserved1_field_length shall be less than or equal to 78 for the 1X video watermark emission format (1X System) and shall be less than or equal to 198 for the 2X video watermark emission format (2X System).
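These limits can be collected into a small validity check. Pairing the 58/178 limits with short-form messages and the 4,838/12,518 limits with long-form messages is an assumption inferred from the long-form condition stated for the reserved1 case; the field names follow the text.

```python
# (system, form) -> maximum of scheme_id_uri_length + value_strlen + data_length
LIMITS = {
    ("1X", "short"): 58,
    ("2X", "short"): 178,
    ("1X", "long"): 4838,       # assumed to apply to long-form messages
    ("2X", "long"): 12518,      # assumed to apply to long-form messages
}

def dem_lengths_ok(system: str, form: str,
                   scheme_id_uri_length: int, value_strlen: int,
                   data_length: int) -> bool:
    return scheme_id_uri_length + value_strlen + data_length <= LIMITS[(system, form)]

assert dem_lengths_ok("1X", "short", 20, 10, 28)        # sum 58: at the limit
assert not dem_lengths_ok("1X", "short", 40, 10, 28)    # sum 78: exceeds 58
assert dem_lengths_ok("2X", "long", 4000, 100, 8000)    # sum 12100 <= 12518
```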
  • When delivery_protocol_type has a value designated in FIG. 14 as "Reserved" or "Industry Reserved", does not have an encoding defined in the ATSC Code Point Registry, and the message is sent in a long-form message, the value of reserved1_field_length shall be less than or equal to 4,858 for the 1X video watermark emission format (1X System) and shall be less than or equal to 12,538 for the 2X video watermark emission format (2X System).
  • When delivery_protocol_type has a value designated in FIG. 14 as "Industry Reserved" and does have an encoding defined in the ATSC Code Point Registry, the reserved1_field_length and reserved1 fields are replaced with the encoding defined by the registrant.
  • the A/336 standard includes requirements such as:
  • a VP1 Video Wa termark Segment shall consist of video content carrying a series of successive VP1 Message Groups whose initial video frames are nominally at 1 .5 second intervals such that if the initial video frame of the first VP I Message Group in a VP1 Video Watermark Segment occurs at time T seconds in the presentation, the initial video frame of the nth successive VP1 Message Group in the VP1 Video Watermark Segment occurs within ⁇ 0.5 video frames of time T+1.5n seconds.”
  • the A/336 standard also includes the requirement of: “Within a VP1 Message Group, an extended_vpl_message() shall be conveyed in at least those video frames whose sampling instant is within a right half-open time interval starting at a time that is an integer multiple of 0.3 seconds following the initial frame of the VP1 Message Group”
  • VP1 messages cannot be more than 0.3 s apart. This is addressed by introducing the concept of pods that are no longer than 0.3 s and carry one or more VP1 messages, e.g. at the beginning, and other messages later. Since the “message group” is nominally 1 .5 s, there are at least 5 pods per message group.
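The sizing implied by these timing constraints can be sketched as follows. This is only a back-of-the-envelope derivation for integer frame rates (the function name and logic are illustrative); the authoritative per-frame-rate pod lengths are those specified in the table of FIG. 6.

```python
def pod_layout(frame_rate):
    """Sketch: frames per pod and pods per message group for an integer
    frame rate, enforcing pods no longer than 0.3 s within a nominal
    1.5 s message group. FIG. 6 gives the normative values."""
    pod_length = (frame_rate * 3) // 10   # largest whole-frame pod <= 0.3 s
    group_length = (frame_rate * 3) // 2  # frames in a nominal 1.5 s group
    return pod_length, group_length // pod_length
```

For example, at 30 fps this yields 9-frame pods with 5 pods per message group, consistent with the "at least 5 pods" observation above.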
  • Embodiments are disclosed which enable the interleaving of various types of video watermark messages in a manner that preserves timing information.
  • This disclosure will present an example of interleaving Dynamic Event Messages (DEMs) with VP1 messages in a video watermark embedder. Multiple DEMs can be placed in the queue for interleaving at the same time, and they are distinguished by an 8-bit DEMid. All DEMs are expected to be of short form, with up to four fragments per DEM.
  • DEM scheduler - The inputs for the DEM scheduler are: Get Watermark, Start VP1, Start DEM, End VP1, and End DEM, and the associated actions will be described below.
  • The scheduler states include: No Marks, VP1 only, DEM only, and VP1+DEM.
  • Start VP1, which triggers a transition to the VP1 only state.
  • Start DEM, which triggers a transition to the DEM only state.
  • the input Start DEM will trigger a transition to the VP1+DEM state, while End VP1 will trigger a transition to the No Marks state.
  • the Get Watermark input will trigger a watermark creation (presumably already implemented).
  • Input Start VP1 will trigger a reset of the media timeline, i.e. it is expected to provide a new anchor interval code (ICa) and associated presentation time (PTa), which will enable new calculation of message group boundaries, the message group IC (ICmg), and time offsets (To) within the message group.
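One possible anchor-based calculation is sketched below. This is a hypothetical mapping that assumes interval codes advance by one per 1.5 s message group; the actual VP1 interval code semantics are defined in A/336.

```python
MESSAGE_GROUP_SECONDS = 1.5

def locate_in_timeline(pt, ic_anchor, pt_anchor):
    """Map a presentation time pt to its message group interval code (ICmg)
    and time offset (To) within that group, given the anchor pair
    (ICa, PTa) delivered with a Start VP1 input."""
    elapsed = pt - pt_anchor
    groups = int(elapsed // MESSAGE_GROUP_SECONDS)   # whole groups elapsed
    ic_mg = ic_anchor + groups                       # interval code of current group
    t_offset = elapsed - groups * MESSAGE_GROUP_SECONDS  # offset within group
    return ic_mg, t_offset
```

A new Start VP1 simply replaces (ICa, PTa), so all subsequent group boundaries are recomputed from the new anchor.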
  • End DEM will trigger an error code return.
  • Start VP1 will trigger a transition to the VP1+DEM state, while End DEM may trigger a transition to the No Marks state (if there is only one DEM in the queue).
  • the Get Watermark input will trigger a watermark creation.
  • End VP1 should trigger an error code return.
  • End VP1 triggers transition to DEM only state, while End DEM may trigger transition to VP1 only state (if there is only one DEM in the queue).
  • the Start DEM triggers an update in the DEM queue, while Start VP1 triggers a reset of the media timeline.
  • the Get Watermark input triggers a watermark data creation as described below.
  • a state machine can be created based on the proposal above, after a review.
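One way such a state machine could be realized is sketched below. The transitions follow the descriptions above; "End DEM (queue empty)" abbreviates the case where the last DEM in the queue ends, and all names are illustrative rather than taken from the specification.

```python
# Illustrative four-state scheduler built from the transitions described above.
TRANSITIONS = {
    ("No Marks", "Start VP1"): "VP1 only",
    ("No Marks", "Start DEM"): "DEM only",
    ("VP1 only", "Start DEM"): "VP1+DEM",
    ("VP1 only", "End VP1"): "No Marks",
    ("DEM only", "Start VP1"): "VP1+DEM",
    ("DEM only", "End DEM (queue empty)"): "No Marks",
    ("VP1+DEM", "End VP1"): "DEM only",
    ("VP1+DEM", "End DEM (queue empty)"): "VP1 only",
}

def next_state(state, event):
    # Inputs with no listed transition (e.g. Get Watermark) leave the state unchanged.
    return TRANSITIONS.get((state, event), state)
```

Inputs that are invalid in a given state (e.g. End DEM in the No Marks state, which the text says returns an error code) would be handled outside this table.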
  • DEMid is an eight-bit counter initialized to zero and incremented modulo 256 with every Start DEM input.
  • the priority counter counts down from N for each opportunity that this DEM can be embedded and allows embedding only when this counter reaches 1.
  • Queue_length is the number of DEMs in the queue.
  • Queue_pointer is the pointer to the next DEM to be embedded.
  • Fragment_pointer is the pointer to the next fragment to be embedded.
  • Rc is the repetition counter (counter of fragment repetitions in adjacent frames).
  • Pod_length is the number of frames grouped together to include one group of VP1 frames as well as one complete DEM.
  • the pod length depends on frame rate as specified in the table shown in FIG. 6.
  • Pod_pointer is the counter of frames within the pod, which determines whether a VP1 or DEM watermark is returned. Pod_pointer is used only in the VP1+DEM state.
  • Wmv is the wm_message_version. This value is incremented by 1 and wraps modulo 16 for each message_block, except when message_blocks are repeated (the Wmv is the same for a repeated message_block).
  • DEMsf is the DEM start flag, initialized to 0. It is switched to 1 when the first repetition of the first fragment of a DEM is transmitted. It is reset back to zero whenever the pod structure is disturbed, e.g. when End DEM arrives or when Start VP1 arrives in either the DEM only or DEM+VP1 state.
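The variables above can be grouped into a single scheduler record. The sketch below uses illustrative field names and defaults; it also shows the modulo wrap-around of DEMid (256) and Wmv (16) described above.

```python
from dataclasses import dataclass, field

@dataclass
class SchedulerState:
    """Hypothetical container for the DEM scheduler variables."""
    dem_id: int = 0            # 8-bit DEMid counter, wraps modulo 256
    queue: list = field(default_factory=list)  # pending DEM records
    queue_pointer: int = 0     # next DEM to be embedded
    fragment_pointer: int = 0  # next fragment to be embedded
    rc: int = 0                # repetition counter (adjacent-frame repeats)
    pod_length: int = 9        # frames per pod (frame-rate dependent, FIG. 6)
    pod_pointer: int = 0       # frame index within the pod (VP1+DEM state only)
    wmv: int = 0               # wm_message_version, wraps modulo 16
    dem_sf: int = 0            # DEM start flag

    def on_start_dem(self):
        # DEMid is incremented modulo 256 with every Start DEM input.
        self.dem_id = (self.dem_id + 1) % 256
        return self.dem_id

    def bump_wmv(self):
        # Wmv stays unchanged for repeated message_blocks; callers skip
        # this bump when repeating a block.
        self.wmv = (self.wmv + 1) % 16
        return self.wmv
```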
  • This section contains pseudo code describing the actions executed following each of the five inputs from the application: Get Watermark, Start VP1, Start DEM, End VP1, and End DEM. The actions depend on the scheduler state, as specified below.
  • Fragment_pointer = Fragment_pointer + 1
  • Remove the record associated with the DEMid in the DEM queue and decrement Queue_length. In the example considered here, there are three DEMs with the number of fragments equal to 3, 4, and 1.
  • All DEMs are assumed to have priority 1.
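A simplified Get Watermark action for the VP1+DEM state, using a queue like the example above (three DEMs with 3, 4, and 1 fragments), might look like the sketch below. Fragment repetitions (Rc), priorities, and the DEM start flag are omitted, and the assumption that the first frame of each pod carries the VP1 message is illustrative.

```python
def get_watermark(s):
    """Return the watermark for the next frame in the VP1+DEM state.
    First frame of each pod carries a VP1 message (assumed layout);
    the remaining frames carry successive DEM fragments."""
    if s["pod_pointer"] == 0:
        wm = ("VP1",)
    else:
        dem = s["queue"][s["queue_pointer"]]
        wm = ("DEM", dem["id"], s["fragment_pointer"])
        s["fragment_pointer"] += 1
        if s["fragment_pointer"] >= dem["fragments"]:
            # DEM complete: advance to the next DEM in the queue.
            s["fragment_pointer"] = 0
            s["queue_pointer"] = (s["queue_pointer"] + 1) % len(s["queue"])
    s["pod_pointer"] = (s["pod_pointer"] + 1) % s["pod_length"]
    return wm

state = {"pod_pointer": 0, "pod_length": 5,
         "queue_pointer": 0, "fragment_pointer": 0,
         "queue": [{"id": 0, "fragments": 3},
                   {"id": 1, "fragments": 4},
                   {"id": 2, "fragments": 1}]}
```

Calling get_watermark once per frame then yields a VP1 message at each pod boundary with DEM fragments interleaved in between.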
  • FIG. 18 illustrates a block diagram of a device 1000 within which the various disclosed embodiments may be implemented.
  • the device 1000 comprises at least one processor 1002 and/or controller, at least one memory 1004 unit that is in communication with the processor 1002, and at least one communication unit 1006 that enables the exchange of data and information, directly or indirectly, through the communication link 1008 with other entities, devices and networks.
  • the communication unit 1006 may provide wired and/or wireless communication capabilities in accordance with one or more communication protocols, and therefore it may comprise the proper transmitter/receiver antennas, circuitry and ports, as well as the encoding/decoding capabilities that may be necessary for proper transmission and/or reception of data and other information.
  • the device 1000 and the like may be implemented in software, hardware, firmware, or combinations thereof.
  • the various components or sub-components within each module may be implemented in software, hardware, or firmware.
  • the connectivity between the modules and/or components within the modules may be provided using any one of the connectivity methods and media that is known in the art, including, but not limited to, communications over the Internet, wired, or wireless networks using the appropriate protocols.
  • Various embodiments described herein are described in the general context of methods or processes, which may be implemented in one embodiment by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments.
  • a computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVDs), etc. Therefore, the computer-readable media described in the present application comprise non-transitory storage media.
  • program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • Computer-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein.
  • the particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Television Systems (AREA)

Abstract

A system and method for embedding watermarks into video content are disclosed. The method includes generating a plurality of watermark message groups, each comprising a set of video frames of a video content into which watermark payloads are to be embedded. A fixed number of pods are generated within each watermark message group, each pod comprising at least one VP1 message followed by one or more non-VP1 messages. A plurality of message groups are generated, each comprising an integer number of pods, with all VP1 messages in each message group having the same interval code value. The watermark message groups are embedded into the video content such that non-VP1 messages are interleaved with VP1 messages.
PCT/US2024/026355 2023-04-26 2024-04-25 Programmateur de messages de filigrane video Pending WO2024226863A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363498271P 2023-04-26 2023-04-26
US63/498,271 2023-04-26

Publications (1)

Publication Number Publication Date
WO2024226863A1 true WO2024226863A1 (fr) 2024-10-31

Family

ID=93257311

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/026355 Pending WO2024226863A1 (fr) 2023-04-26 2024-04-25 Programmateur de messages de filigrane video

Country Status (1)

Country Link
WO (1) WO2024226863A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200045379A1 (en) * 2015-10-07 2020-02-06 Lg Electronics Inc. Broadcast signal transmission/reception device and method
US20220272405A1 (en) * 2015-12-04 2022-08-25 Sharp Kabushiki Kaisha Method of receiving a recovery file format
US20220312081A1 (en) * 2021-02-08 2022-09-29 Verance Corporation System and method for tracking content timeline in the presence of playback rate changes
US20230007364A1 (en) * 2016-04-18 2023-01-05 Verance Corporation System and method for signaling security and database population

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "ATSC Standard: Content Recovery in Redistribution Scenarios (A/336)", ADVANCED TELEVISION SYSTEMS COMITTEE, 5 June 2017 (2017-06-05), pages 1 - 58, XP093230280, Retrieved from the Internet <URL:https://www.atsc.org/wp-content/uploads/2017/03/A336-2017a-Content-Recovery-in-Redistribution-Scenarios-2.pdf> *

Similar Documents

Publication Publication Date Title
US7586924B2 (en) Apparatus and method for coding an information signal into a data stream, converting the data stream and decoding the data stream
US8056110B2 (en) Service system of thumbnail image and transmitting/receiving method thereof
US9942602B2 (en) Watermark detection and metadata delivery associated with a primary content
US9426479B2 (en) Preserving captioning through video transcoding
WO2016028936A1 (fr) Détection de tatouages numériques utilisant plusieurs motifs prédits
US20100238792A1 (en) Information acquisition system, transmit apparatus, data obtaining apparatus, transmission method, and data obtaining method
WO2016086047A1 (fr) Distribution améliorée de métadonnées et de contenu au moyen de filigranes
US20110222545A1 (en) System and method for recovering the decoding order of layered media in packet-based communication
CN118192925A (zh) 即时播放帧(ipf)的生成、传输及处理的方法、设备及系统
US8718131B2 (en) Method and apparatus for generating and processing packet in MPEG-2 transport stream
WO2024226863A1 (fr) Programmateur de messages de filigrane video
US7839925B2 (en) Apparatus for receiving packet stream
KR101008976B1 (ko) 멀티미디어 스트리밍 시스템에서의 에러 검출 방법
US8184660B2 (en) Transparent methods for altering the video decoder frame-rate in a fixed-frame-rate audio-video multiplex structure
US7949052B1 (en) Method and apparatus to deliver a DVB-ASI compressed video transport stream
CN115668955A (zh) 用于从转码器恢复呈现时间戳的系统
FI124520B (fi) Menetelmä ja järjestely digitaalisten multimediasignaalien synkronoimiseksi
US20040179136A1 (en) Image transmission system and method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24797989

Country of ref document: EP

Kind code of ref document: A1