WO2015036358A1 - Method and apparatus for decomposing and reconstructing a high dynamic range picture - Google Patents
Method and apparatus for decomposing and reconstructing a high dynamic range picture
- Publication number
- WO2015036358A1 (PCT/EP2014/069063)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- picture
- dynamic
- range
- low
- modulation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20208—High dynamic range [HDR] image processing
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/06—Adjustment of display parameters
- G09G2320/0626—Adjustment of display parameters for control of overall brightness
- G09G2320/0646—Modulation of illumination source brightness and image signal correlated to each other
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0407—Resolution change, inclusive of the use of different resolutions for different screen areas
- G09G2340/0428—Gradation resolution change
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/14—Detecting light within display terminals, e.g. using a single or a plurality of photosensors
- G09G2360/144—Detecting light within display terminals, e.g. using a single or a plurality of photosensors the light being ambient light
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/16—Calculation or use of calculated indices related to luminance levels in display data
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/04—Exchange of auxiliary data, i.e. other than image data, between monitor and graphics controller
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/12—Use of DVI or HDMI protocol in interfaces along the display data pipeline
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/20—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
- G09G3/34—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/20—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
- G09G3/34—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source
- G09G3/36—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source using liquid crystals
- G09G3/3611—Control of matrices with row and column drivers
Definitions
- The present invention generally relates to the reconstruction of high-dynamic-range pictures. In particular, the technical field of the present invention is related to the reconstruction of a high-dynamic-range picture from two low-dynamic-range pictures generated by a dual modulation which takes a high-dynamic-range picture as input.
- LDR pictures Low-dynamic-range pictures
- LDR pictures are pictures whose luminance values are represented with a limited number of bits (most often 8, 10 or 12 bits). This limited representation does not allow small signal variations to be rendered correctly, in particular in dark and bright luminance ranges.
- In high-dynamic-range (HDR) pictures, the signal representation is extended in order to keep a high accuracy of the signal over its entire range.
- Pixel values are usually represented in floating-point format (either 32-bit or 16-bit per component), the most popular format being the OpenEXR half-float format (16 bits per RGB component, i.e. 48 bits per pixel).
- Dual modulation methods are typically used in dual-modulation HDR displays.
- Such dual modulation HDR displays are made of two panels:
- one LED panel, used as a backlight panel, that generates a low-resolution luminance picture of the scene; and
- one LCD panel that modulates the light coming from the LED panel to generate the resulting HDR picture.
- An HDR picture is decomposed into two separate LDR pictures by means of a so-called dual modulation: a first picture (usually called an LCD picture), which is a low-dynamic-range version of the high-dynamic-range picture, and a second picture (usually called an LED picture), which is a low-resolution version of the luminance component of the high-dynamic-range picture.
- a first picture usually called a LCD picture
- a second picture usually called a LED picture
- Dual-modulation HDR displays have the ability to display content with luminance values up to 4000 cd/m². Such luminance values are useful to see details in bright areas if the display environment is bright (midday in a room with large windows) but can be very aggressive and disturbing if the display environment is dim (at night in a room with little ambient light). Dual-modulation HDR displays are also able to provide true blacks and a lot of detail in very dim scenes. Such details will be visible if the display environment is dim (at night in a room with little ambient light) but will not be visible if the display environment is bright (midday in a room with large windows). These phenomena are due to the eye's ability to adapt to the average luminance of the environment. Thus, HDR content shall be adapted to the illumination conditions of the environment around the display.
- The HDR content shall also be adapted to end-user preferences. For example, a user may prefer brighter or dimmer content and should have the option to select the appropriate relative brightness of the HDR content.
- 3. Summary of the invention.
- the invention is aimed at alleviating at least one of the drawbacks of the prior art.
- The invention relates to a method for generating two low-dynamic-range pictures from a high-dynamic-range picture by means of a dual-modulation, one of said low-dynamic-range pictures, said first picture, being a low-dynamic-range version of the high-dynamic-range picture and the other of said low-dynamic-range pictures, said second picture, being a low-resolution version of the luminance component of the high-dynamic-range picture. The method is characterized in that the behavior of said dual-modulation is controlled by metadata received from a remote device.
- The metadata are received following the transmission to the remote device of metadata which are either relative to a characteristic of a dual-modulation HDR display, or defined from data provided by sensors that give information about the illumination environment around the dual-modulation HDR display, or defined according to the end-user preferences.
- The two low-dynamic-range pictures are transmitted separately to the remote device.
- The two low-dynamic-range pictures are transmitted over the usual 8-bit channels typically used to transmit low-dynamic-range pictures.
- The invention also relates to a method for reconstructing a high-dynamic-range picture by means of an inverse dual-modulation which combines together a first and a second picture to reconstruct said high-dynamic-range picture.
- Said first picture being a low-dynamic-range version of the high-dynamic-range picture
- said second picture being a low-resolution version of the luminance component of the high-dynamic-range picture
- the method is characterized in that the behavior of said inverse dual-modulation is controlled by metadata received from a remote device.
- the two low-dynamic-range pictures are received separately.
- The invention relates to an apparatus for generating two low-dynamic-range pictures from a high-dynamic-range picture using dual-modulation means, said means being configured such that one of said low-dynamic-range pictures, said first picture, is a low-dynamic-range version of the high-dynamic-range picture and the other of said low-dynamic-range pictures, said second picture, is a low-resolution version of the luminance component of the high-dynamic-range picture.
- the apparatus is characterized in that said dual-modulation means are controlled by metadata received from a remote device.
- The invention relates to an apparatus for reconstructing a high-dynamic-range picture using inverse dual-modulation means configured to combine together a first and a second picture in order to reconstruct said high-dynamic-range picture.
- Said first picture being a low-dynamic-range version of the high-dynamic-range picture
- said second picture being a low-resolution version of the luminance component of the high-dynamic-range picture
- said apparatus is characterized in that said inverse dual-modulation means are controlled by metadata received from a remote device.
- Fig. 1 shows a diagram which represents an embodiment of a dual modulation of a floating-point HDR picture.
- Fig. 2 shows a diagram which represents an embodiment of an inverse dual-modulation used to reconstruct a floating-point HDR picture from an LCD picture and an LED picture.
- Fig. 3 shows an embodiment of the invention implemented in apparatuses of a communication system.
- Fig. 4 shows an example of a VSDB.
- Fig. 5 shows an example of a VSDB relative to specific metadata.
- Fig. 6 shows an example of carrying the LED picture as metadata.
- Fig. 7 shows an example of an SCDCS table to carry metadata.
- Fig. 8 represents an exemplary architecture of a device 80.
- each block represents a circuit element, module, or portion of code which comprises one or more executable instructions for implementing the specified logical function(s).
- The function(s) noted in the blocks may occur out of the order noted. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved.
- Fig. 1 shows a diagram which represents an embodiment of a dual modulation of a floating-point HDR picture.
- the goal of the dual modulation is to generate two low-dynamic-range pictures called a LED picture and a LCD picture from the floating-point HDR picture.
- In step 11, the minimal and maximal values of the floating-point HDR input picture are calculated.
- In step 12, the floating-point HDR input picture is normalized (typically to the range [0..1]).
- In step 13, the normalized picture is scaled to a max brightness value (for instance 4000 cd/m²). This produces a picture called "scale_RGB".
- The square root of the picture scale_RGB is then computed.
- A blur function (for instance a Gaussian filter) is applied to the luminance component Y in order to obtain a coarse representation of the luminance component, which makes the further downsampling (step 17) to the LED grid more robust to peaks (noise).
- In step 17, the coarse luminance component is downsampled to the resolution of the LED grid of a dual-modulation HDR display, and the resulting component is scaled (step 18) in order to take into account a further convolution with a Point Spread Function (step 21) that will increase each luminance pixel value due to the additive process of the convolution.
- In step 19, the resulting luminance component is scaled (for instance to [0..255]) to form the LED picture; the minimal and maximal LED values are also calculated at this step.
- The luminance component output by step 18 is also used to reconstruct a full-resolution backlight picture, from which the picture usually called the LCD picture is derived.
- In step 20, the luminance component output by step 18 is copied onto a full-size picture grid and each copied value is convolved with a Point Spread Function (step 21).
- In step 22, the resulting luminance component, called rec-lum, is then used to divide the picture scale_RGB to produce an RGB version of the HDR picture.
- In step 23, the RGB version of the HDR picture is scaled to [0..255] in order to produce the LCD picture.
- The minimal and maximal values of the LCD picture are also calculated at step 23.
- the Point Spread Function is a Gaussian filter which is characterized by a standard deviation (sigma) and the size in pixels of the picture representing the PSF.
- the Point Spread Function is given for example by:
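The formula itself does not appear in this text version of the document; a standard 2-D Gaussian form (an assumption, not necessarily the exact expression of the original) would be:

$$\mathrm{PSF}(x,y) = \exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right),$$

where $\sigma$ is the standard deviation and $(x, y)$ ranges over the pixel grid of the picture representing the PSF.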
- The max brightness value used in step 13, the resolution of the LED grid used in step 17, the full-size picture grid used in step 20 and the Point Spread Function used in step 21 form a first set of metadata.
- The metadata relative to the resolution of the LED grid of a dual-modulation HDR display may be a number of lines and a number of columns of the LED grid.
- The metadata relative to the Point Spread Function may be, for example, a standard deviation (sigma) of a Gaussian filter and the size in pixels of this Point Spread Function.
- The behavior of the dual-modulation is controlled by the first set of metadata.
- The first set of metadata comprises at least one default value stored locally.
- the first set of metadata is received from a remote device.
- The invention is not limited to the embodiment of the dual-modulation described in Fig. 1, and any other dual-modulation may be used to generate the LCD and LED pictures. Moreover, the scope of the invention is not limited to the examples of metadata (first set of metadata) given in relation with the dual-modulation described in Fig. 1 but extends to any metadata which control the behavior of a dual-modulation.
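To make the data flow of Fig. 1 concrete, the following sketch reproduces the described steps in Python with NumPy/SciPy. It is an illustration only, not the patented implementation: the normalization range, the luminance weights, the LED-grid placement and the compensation factor of step 18 are assumptions.

```python
# Illustrative sketch of the dual modulation of Fig. 1 (not the patented
# implementation): an HDR picture is decomposed into a low-resolution LED
# picture and a full-resolution LCD picture, plus the associated metadata.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve


def gaussian_psf(size, sigma):
    """2-D Gaussian Point Spread Function of given size (pixels) and sigma."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))


def dual_modulation(hdr_rgb, led_rows=24, led_cols=40,
                    max_brightness=4000.0, psf_size=64, psf_sigma=16.0):
    # Step 11: minimal and maximal values of the floating-point HDR picture.
    hdr_min, hdr_max = float(hdr_rgb.min()), float(hdr_rgb.max())
    # Step 12: normalization (assumed to [0..1]).
    norm = (hdr_rgb - hdr_min) / (hdr_max - hdr_min)
    # Step 13: scaling to the max brightness value -> "scale_RGB".
    scale_rgb = norm * max_brightness
    # Square root of scale_RGB, then its luminance Y (Rec.709 weights assumed).
    lum = np.sqrt(scale_rgb) @ np.array([0.2126, 0.7152, 0.0722])
    # Blur so that the downsampling to the LED grid is robust to peaks (noise).
    coarse = gaussian_filter(lum, sigma=8.0)
    # Step 17: downsampling to the LED grid resolution (simple sub-sampling).
    rows = np.linspace(0, lum.shape[0] - 1, led_rows).astype(int)
    cols = np.linspace(0, lum.shape[1] - 1, led_cols).astype(int)
    led = coarse[np.ix_(rows, cols)]
    # Step 18: scale down to compensate for the additive effect of the later
    # PSF convolution (the exact compensation factor is an assumption).
    psf = gaussian_psf(psf_size, psf_sigma)
    led = led / np.sqrt(psf.sum())
    # Step 19: scale the LED picture to [0..255]; keep its min/max as metadata.
    led_min, led_max = float(led.min()), float(led.max())
    led_picture = np.round(255.0 * (led - led_min) / (led_max - led_min))
    # Steps 20-21: copy onto a full-size grid and convolve with the PSF.
    full = np.zeros(lum.shape)
    full[np.ix_(rows, cols)] = led
    rec_lum = fftconvolve(full, psf, mode="same")
    # Step 22: divide scale_RGB by rec-lum -> RGB version of the HDR picture.
    rgb = scale_rgb / np.maximum(rec_lum, 1e-6)[..., None]
    # Step 23: scale to [0..255] -> LCD picture; keep its min/max as metadata.
    lcd_min, lcd_max = float(rgb.min()), float(rgb.max())
    lcd_picture = np.round(255.0 * (rgb - lcd_min) / (lcd_max - lcd_min))
    metadata = {"hdr_min": hdr_min, "hdr_max": hdr_max,
                "led_min": led_min, "led_max": led_max,
                "lcd_min": lcd_min, "lcd_max": lcd_max,
                "max_brightness": max_brightness,
                "led_grid": (led_rows, led_cols), "psf": (psf_size, psf_sigma)}
    return lcd_picture, led_picture, metadata
```

With `lcd_picture`, `led_picture` and the `metadata` dictionary, a receiver implementing the inverse dual-modulation of Fig. 2 (see the sketch after that figure's description) can rebuild an approximation of the input HDR picture.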
- Fig. 2 shows a diagram which represents an embodiment of an inverse dual-modulation used to reconstruct a floating-point HDR picture from an LCD picture and an LED picture.
- The LCD picture is inverse-scaled using the min and max LCD values calculated at step 23 to generate a reconstructed version of the RGB version of the HDR picture, called "inv-scale-RGB".
- The LED picture is inverse-scaled using the minimal and maximal LED values calculated at step 19 to generate a reconstructed version of the scaled LED picture.
- the reconstructed version of the scaled LED picture is used to reconstruct the full resolution backlight picture.
- In step 26, the reconstructed version of the scaled LED picture is copied onto a full-size picture grid.
- Step 26 requires the resolution of the LED grid used in step 17.
- In step 27, the output of step 26 is convolved with the Point Spread Function used in step 21.
- In step 28, the output of step 27, which is a reconstructed version of rec-lum (step 22), is multiplied by the picture "inv-scale-RGB" to generate the reconstructed RGB version of the HDR picture.
- In step 29, the reconstructed RGB version of the HDR picture is normalized by dividing it by the max brightness value (for instance 4000 cd/m²) used in step 13.
- The output of step 29 is then de-normalized (step 30) using the min and max HDR values calculated at step 11 in order to generate a reconstructed version of the HDR picture.
- The behavior of the inverse dual-modulation is controlled by metadata, called the second set of metadata in the following, received from a remote device.
- The second set of metadata may define at least one item of the following list: the min and max LCD values calculated at step 23, the minimal and maximal LED values calculated at step 19, and the min and max HDR values calculated at step 11.
- The second set of metadata may also comprise at least one item of the first set of metadata described above.
- The second set of metadata also comprises the LED picture.
- the invention is not limited to the embodiment of the inverse dual- modulation described in Fig. 2 and any other inverse dual-modulation may be used to reconstruct a HDR picture from the LCD and LED pictures.
- The scope of the invention is not limited to the examples of metadata given in relation with the inverse dual-modulation described in Fig. 2 but extends to any metadata which control the behavior of an inverse dual-modulation.
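As an illustration of the inverse dual-modulation of Fig. 2, the following sketch reverses the steps of the previous sketch using the second set of metadata. The same caveats apply: ranges, grid placement and the PSF parameters are assumptions, not the exact values of the original document.

```python
# Illustrative sketch of the inverse dual modulation of Fig. 2: the HDR picture
# is rebuilt from the LCD picture, the LED picture and the second set of
# metadata (min/max values, LED grid, PSF parameters, max brightness).
import numpy as np
from scipy.signal import fftconvolve


def inverse_dual_modulation(lcd_picture, led_picture, md):
    # Inverse scaling of the LCD picture with the min/max LCD values (step 23).
    inv_scale_rgb = md["lcd_min"] + (lcd_picture / 255.0) * (md["lcd_max"] - md["lcd_min"])
    # Inverse scaling of the LED picture with the min/max LED values (step 19).
    led = md["led_min"] + (led_picture / 255.0) * (md["led_max"] - md["led_min"])
    # Step 26: copy the scaled LED picture onto a full-size picture grid.
    led_rows, led_cols = md["led_grid"]
    full = np.zeros(inv_scale_rgb.shape[:2])
    rows = np.linspace(0, full.shape[0] - 1, led_rows).astype(int)
    cols = np.linspace(0, full.shape[1] - 1, led_cols).astype(int)
    full[np.ix_(rows, cols)] = led
    # Step 27: convolution with the same PSF as in step 21.
    psf_size, psf_sigma = md["psf"]
    ax = np.arange(psf_size) - (psf_size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * psf_sigma ** 2))
    rec_lum = fftconvolve(full, psf, mode="same")
    # Step 28: multiply the reconstructed rec-lum by "inv-scale-RGB".
    rgb = inv_scale_rgb * rec_lum[..., None]
    # Step 29: normalize by the max brightness value used in step 13.
    norm = rgb / md["max_brightness"]
    # Step 30: de-normalize with the min/max HDR values calculated at step 11.
    return md["hdr_min"] + norm * (md["hdr_max"] - md["hdr_min"])
```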
- Fig. 3 shows an embodiment of the invention implemented in apparatuses of a communication system.
- This communication system is detailed in the following when two remote apparatuses A1 and A2 are configured to communicate with each other via at least one communication link or network NET.
- the apparatus A1 is called a transmitter and the apparatus A2 is called a receiver.
- the invention is not limited to the use of a single transmitter but may be extended to multiple transmitters communicating with at least one receiver A2.
- an apparatus A1 or A2 may be a single device or composed of multiple devices which communicate together to implement the same functions of either the apparatus A1 or A2 detailed in the following.
- the dual-modulation described in relation with Fig. 1 is implemented in the transmitter A1 and the inverse dual-modulation described in relation with Fig. 2 is implemented in the receiver A2.
- The first set of metadata is stored locally in the transmitter A1 or may be obtained from the floating-point HDR input picture, and the transmitter A1 is configured to send the first set of metadata to the receiver A2.
- the receiver A2 is then configured to receive the first set of metadata sent by the transmitter A1 and to control the behavior of the inverse dual-modulation according to the received metadata.
- The receiver A2 may obtain this metadata from an internal storage memory.
- The receiver A2 is configured to send to the transmitter A1 some metadata which are either relative to a characteristic of a dual-modulation HDR display, or defined from data provided by sensors that give information about the illumination environment around the transmitter A1 (or a display), or defined according to the end-user preferences.
- These metadata are called the third set of metadata in the following.
- Some metadata of the third set of metadata may be relative to a characteristic of a dual-modulation HDR display.
- they may be metadata of the first set of metadata.
- the transmitter A1 is then configured to receive the third set of metadata and to control the behavior of the dual- modulation according to the received metadata.
- the receiver A2 has the ability to change dynamically the behavior of both the dual-modulation implemented by the transmitter A1 and the inverse dual-modulation implemented by the receiver A2 in order to optimize the rendering of the HDR picture on the dual-modulation HDR display.
- the transmitter A1 selects the most appropriate first set of metadata to control the dual-modulation from the received third set of metadata and sends this selected first set of metadata to the receiver A2.
- The most appropriate first set of metadata is the first set of metadata embedded in the received third set of metadata.
- the transmitter A1 is configured to receive occasionally or periodically a third set of metadata from the receiver A2 and the transmitter A1 is then configured either to select a new most appropriate first set of metadata from the last received third set of metadata and to send the selected first set of metadata to the receiver A2 or to consider the first set of metadata embedded in the last received third set of metadata to control the dual-modulation.
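As an illustration of this negotiation, the sketch below shows one possible (hypothetical, since the patent does not fix a policy) way for the transmitter A1 to derive a first set of metadata from the received third set of metadata; the field names `ambient_lux`, `user_max_brightness` and `first_set` are assumptions.

```python
# Hypothetical selection policy for the transmitter A1: derive a first set of
# metadata from the third set received from the receiver A2, with local defaults.
DEFAULT_FIRST_SET = {"max_brightness": 4000.0, "led_grid": (24, 40), "psf": (64, 16.0)}


def select_first_set(third_set, defaults=DEFAULT_FIRST_SET):
    first_set = dict(defaults)
    # Variant: if A2 embedded a complete first set of metadata, use it directly.
    if "first_set" in third_set:
        first_set.update(third_set["first_set"])
        return first_set
    # Otherwise adapt the max brightness to the ambient illumination reported
    # by the sensors and to the end-user brightness preference.
    if "ambient_lux" in third_set:
        first_set["max_brightness"] = 1000.0 if third_set["ambient_lux"] < 50 else 4000.0
    if "user_max_brightness" in third_set:
        first_set["max_brightness"] = third_set["user_max_brightness"]
    # Display characteristics such as the LED grid resolution are taken as-is.
    if "led_grid" in third_set:
        first_set["led_grid"] = tuple(third_set["led_grid"])
    return first_set
```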
- An end-user may also prefer brighter or dimmer content, and this may also depend on the environment around the display. The end-user may therefore have the option to select the appropriate relative brightness of the HDR content, for example from a user interface of the receiver A2. This selection then defines a maximum brightness value which is embedded as metadata in the third set of metadata, for example. Both the dual-modulation and the inverse dual-modulation then take this new maximum brightness value into account.
- The LCD and LED pictures are transmitted separately, i.e. the LCD and LED pictures are transmitted by the transmitter A1 using different means and the receiver A2 receives the LCD and the LED pictures via different means.
- The LCD picture is sent over a communication link or network as active video data and the LED picture is sent over the communication link or network as metadata.
- The term "separately" means that the metadata are not transmitted as a part of the transmitted active video data, as is well known in the prior art.
- the LED picture data are directly inserted in the first pixels of the LCD picture, following a sync word, and the modified LCD picture is then sent over the communication link or network.
- Such a method is dedicated, for example, to a well-known "Sim2 HDR47" display.
- The main drawback is the loss of the first two lines of the LCD picture, which are used to transmit the LED picture data.
- The LCD picture and metadata may be sent over 8-bit channels usually used to transmit low-dynamic-range pictures.
- The apparatuses A1 and A2 communicate with each other over an HDMI link.
- The HDMI protocol between the transmitter A1 and the receiver A2 starts with a discovery phase: when the receiver A2 is connected to the transmitter A1, or when already-connected equipment is powered on, the transmitter A1 detects the receiver A2 and discovers it through the DDC (Display Data Channel) of the HDMI link. It is basically a way for the transmitter A1 to receive metadata from the receiver A2.
- DDC Display Data Channel
- The metadata may possibly comprise information which informs the transmitter A1 whether the inverse dual-modulation is supported by the receiver A2.
- The metadata also comprise the third set of metadata as described above.
- the transmitter A1 compares the receiver A2's capabilities (represented by metadata embedded in the received third set of metadata) with its own capabilities and selects the "most appropriate format" to drive and feed the receiver A2.
- the transmitter A1 may consider the first set of metadata embedded in the received set of metadata to configure the dual- modulation. Other variants may be considered as explained above.
- The VSDB (Vendor Specific Data Block) as defined by the HDMI Forum is used to carry the metadata from the apparatus A2 to the apparatus A1.
- Fig. 4 shows an example of such a VSDB.
- The metadata may be carried either by filling the "Rsvd(0)" bits with the metadata, or by specifying a different length N to increase the VSDB payload up to 32 bytes (including the header) and filling the "Reserved(0)" bits.
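For illustration, a sketch of how such metadata bytes could be packed into a Vendor Specific Data Block is given below. Only the generic data-block header (tag and length) and the 3-byte IEEE OUI follow the CEA/HDMI convention; the choice of OUI value and the meaning of the payload bytes are assumptions.

```python
# Sketch: pack metadata bytes into a Vendor Specific Data Block (VSDB).
def build_vsdb(ieee_oui, metadata_bytes):
    # A data block is limited to 32 bytes: 1 header byte + up to 31 payload
    # bytes, of which 3 are the IEEE OUI.
    if len(metadata_bytes) > 32 - 1 - 3:
        raise ValueError("VSDB payload too large")
    length = 3 + len(metadata_bytes)             # OUI + metadata bytes
    header = bytes([(0x03 << 5) | length])       # tag code 3 = Vendor Specific
    oui = ieee_oui.to_bytes(3, "little")         # OUI, least significant byte first
    return header + oui + bytes(metadata_bytes)


# Hypothetical example: two bytes of ambient illumination and one byte of
# end-user brightness preference carried in the reserved bytes.
vsdb = build_vsdb(0x000C03, [0x01, 0x2C, 0x50])
```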
- Fig. 5 shows an example of a VSDB relative to specific metadata.
- Fig. 6 shows an example of carrying the LED picture as metadata from the apparatus A1 to the apparatus A2.
- A VSIF (Vendor Specific InfoFrame) as defined by CEA-861-F is used.
- the LED picture data are here designated by the term "dual modulation byte" in Fig. 6.
- The first 3 bytes are the header, as standardized by CEA-861.
- The payload starts at byte 3.
- The "number of LEDs per line" x "number of LEDs per column" dual modulation bytes will have to be broken down into 20-byte packets.
- Each DM packet will be numbered by a 12-bit "continuation index".
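A sketch of this segmentation is given below. The 3-byte InfoFrame header (type, version, length) follows CEA-861; the exact position of the 12-bit continuation index inside the payload and the checksum handling are assumptions.

```python
# Sketch: split the LED picture data into 20-byte "dual modulation" payloads,
# each carried in a Vendor Specific InfoFrame with a 12-bit continuation index.
def led_picture_to_vsif_packets(led_bytes, payload_size=20):
    packets = []
    count = (len(led_bytes) + payload_size - 1) // payload_size
    for index in range(count):
        chunk = led_bytes[index * payload_size:(index + 1) * payload_size]
        chunk = chunk.ljust(payload_size, b"\x00")       # pad the last packet
        cont = index & 0x0FFF                            # 12-bit continuation index
        header = bytes([0x81, 0x01, 2 + payload_size])   # type, version, length
        body = bytes([cont >> 4, (cont & 0x0F) << 4]) + chunk
        packets.append(header + body)                    # checksum omitted here
    return packets


# Example: a 24x40 LED grid of 8-bit values is broken down into 48 packets.
packets = led_picture_to_vsif_packets(bytes(24 * 40))
```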
- HDMI version 2.0 provides two ways for the apparatus A2 to dynamically inform the apparatus A1 that end-user preferences and/or environment conditions have changed: either the CEC or the SCDCS.
- Using the CEC, the apparatus A2 sends, whenever it needs to, a specific (to-be-defined) CEC message to the apparatus A1; using the SCDCS, the apparatus A1 regularly polls the apparatus A2 to read the metadata from an SCDCS table defined by HDMI 2.0. A specific entry for each metadata item sent by the apparatus A2 has to be added to the SCDCS table defined by HDMI 2.0, as shown in Fig. 7.
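A minimal polling sketch is given below, assuming hypothetical SCDCS register offsets and a user-supplied `read_scdcs_register` callback (real SCDC access goes over the DDC/I²C bus); it only illustrates the "A1 regularly polls A2" behavior.

```python
# Sketch: the transmitter A1 periodically reads the metadata entries added to
# the SCDCS table (offsets below are hypothetical placeholders, see Fig. 7).
import time

SCDCS_AMBIENT_LUX = 0xC0        # assumed offset of an ambient-illumination entry
SCDCS_USER_BRIGHTNESS = 0xC2    # assumed offset of a user-preference entry


def poll_scdcs(read_scdcs_register, on_change, period_s=1.0):
    last = None
    while True:
        current = (read_scdcs_register(SCDCS_AMBIENT_LUX, 2),
                   read_scdcs_register(SCDCS_USER_BRIGHTNESS, 1))
        if current != last:         # environment or end-user preference changed
            on_change(current)      # e.g. reconfigure the dual modulation in A1
            last = current
        time.sleep(period_s)
```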
- According to one example, the apparatuses A1 and A2 are distinct from one another. According to another example, the apparatuses A1 and A2 are identical to one another.
- DisplayPort provides a discovery phase that is similar to HDMI's, thus allowing an equivalent exchange of metadata.
- The packet-based protocol uses a physical layer that carries data at a given bitrate that is higher than the raw data bitrate to be transmitted.
- DisplayPort carries up to 4.32 Gb/s per lane, providing a 17.28 Gb/s bitrate over 4 lanes, which can be used, for instance, to transmit 1080p 50 Hz video that needs a data rate of about 2.5 Gb/s.
- The data are transmitted in asynchronous packets, i.e. the carried data do not need to be transmitted synchronously at the video rate. These data can be transmitted in small data packets using high-speed bursts of data that are buffered in the receiver.
- the packets are typed as video packets, audio packets, control packets or other packets...
- These packets are sent using the 4 "ML_Lane" lanes.
- An LED packet type is, for example, defined, and LED packets are sent from the apparatus A1 to the apparatus A2 on the 4 "ML_Lane" lanes.
- These packet-based protocols generally implement an additional bidirectional channel used as a communication channel between the transmitter and the receiver.
- The Auxiliary Channel "AUX CH", which is usually used to carry information such as the EDID, may be used to carry the metadata from the apparatus A2 to the apparatus A1.
- The modules are functional units, which may or may not correspond to distinguishable physical units. For example, these modules or some of them may be brought together in a unique component or circuit, or contribute to the functionalities of a piece of software. A contrario, some modules may potentially be composed of separate physical entities.
- The apparatuses which are compatible with the invention are implemented using either pure hardware, for example dedicated hardware such as an ASIC, an FPGA or VLSI (respectively "Application Specific Integrated Circuit", "Field-Programmable Gate Array", "Very Large Scale Integration"), or several integrated electronic components embedded in a device, or a blend of hardware and software components.
- Fig. 8 represents an exemplary architecture of a device 80.
- Device 80 comprises the following elements that are linked together by a data and address bus 81:
- microprocessor 82 which is, for example, a DSP (or Digital Signal Processor);
- ROM Read Only Memory
- RAM Random Access Memory
- the battery 86 is external to the device.
- The word "register" used in the specification can correspond to an area of small capacity (a few bits) or to a very large area (e.g. a whole program or a large amount of received or decoded data).
- The ROM 83 comprises at least a program and parameters. The algorithm of the method according to the invention is stored in the ROM 83. When switched on, the CPU 82 loads the program into the RAM and executes the corresponding instructions.
- The RAM 84 comprises, in a register, the program executed by the CPU 82 and uploaded after switch-on of the device 80, input data in a register, intermediate data in different states of the method in a register, and other variables used for the execution of the method in a register.
- the implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method or a device), the implementation of features discussed may also be implemented in other forms (for example a program).
- An apparatus may be implemented in, for example, appropriate hardware, software, and firmware.
- the methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device.
- Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users.
- Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications. Examples of such equipment include an encoder, a decoder, a post-processor processing output from a decoder, a pre-processor providing input to an encoder, a video coder, a video decoder, a video codec, a web server, a set-top box, a laptop, a personal computer, a cell phone, a PDA, a TV set and other communication devices.
- the equipment may be mobile and even installed in a mobile vehicle.
- the methods may be implemented by instructions being performed by a processor, and such instructions (and/or data values produced by an implementation) may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier or other storage device such as, for example, a hard disk, a compact diskette (“CD"), an optical disc (such as, for example, a DVD, often referred to as a digital versatile disc or a digital video disc), a random access memory (“RAM”), or a read-only memory (“ROM”).
- the instructions may form an application program tangibly embodied on a processor-readable medium. Instructions may be, for example, in hardware, firmware, software, or a combination.
- a processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process. Further, a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.
- implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted.
- the information may include, for example, instructions for performing a method, or data produced by one of the described implementations.
- a signal may be formatted to carry as data the rules for writing or reading the syntax of a described embodiment, or to carry as data the actual syntax-values written by a described embodiment.
- Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
- the formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
- the information that the signal carries may be, for example, analog or digital information.
- the signal may be transmitted over a variety of different wired or wireless links, as is known.
- the signal may be stored on a processor-readable medium.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Image Processing (AREA)
Abstract
The present invention generally relates to a method for reconstructing a high-dynamic-range picture by means of an inverse dual modulation which combines a first picture (LCD) and a second picture (LED) with each other in order to reconstruct said high-dynamic-range picture, said first picture (LCD) being a low-dynamic-range version of the high-dynamic-range picture and said second picture (LED) being a low-resolution version of the luminance component of the high-dynamic-range picture. The method is characterized in that the behavior of said inverse dual modulation is controlled by metadata received from a remote device. The invention also relates to a method for decomposing a high-dynamic-range picture by means of a dual modulation, and to an apparatus configured to implement both methods.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP13306254.7 | 2013-09-13 | ||
| EP13306254 | 2013-09-13 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2015036358A1 (fr) | 2015-03-19 |
Family
ID=49378199
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2014/069063 (WO2015036358A1, Ceased) | Method and apparatus for decomposing and reconstructing a high dynamic range picture | 2013-09-13 | 2014-09-08 |
Country Status (2)
| Country | Link |
|---|---|
| TW (1) | TW201514924A (fr) |
| WO (1) | WO2015036358A1 (fr) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2016153249A1 (fr) * | 2015-03-23 | 2016-09-29 | 엘지전자(주) | Procédé et dispositif d'émission/réception de puissance au moyen d'une interface hdmi |
| WO2018164942A1 (fr) * | 2017-03-06 | 2018-09-13 | E Ink Corporation | Procédé permettant de restituer des images en couleurs |
| US10104334B2 (en) | 2017-01-27 | 2018-10-16 | Microsoft Technology Licensing, Llc | Content-adaptive adjustment of display device brightness levels when rendering high dynamic range content |
| US10176561B2 (en) | 2017-01-27 | 2019-01-08 | Microsoft Technology Licensing, Llc | Content-adaptive adjustments to tone mapping operations for high dynamic range content |
| US10218952B2 (en) | 2016-11-28 | 2019-02-26 | Microsoft Technology Licensing, Llc | Architecture for rendering high dynamic range video on enhanced dynamic range display devices |
| US10957024B2 (en) | 2018-10-30 | 2021-03-23 | Microsoft Technology Licensing, Llc | Real time tone mapping of high dynamic range image data at time of playback on a lower dynamic range display |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2558234B (en) * | 2016-12-22 | 2020-05-13 | Apical Ltd | Image processing |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2010089682A1 (fr) * | 2009-02-03 | 2010-08-12 | Koninklijke Philips Electronics N.V. | Système d'affichage et son procédé de fonctionnement |
| US20110164855A1 (en) * | 2008-09-19 | 2011-07-07 | Crockett Brett G | Upstream quality enhancement signal processing for resource constrained client devices |
| EP2613532A1 (fr) * | 2012-01-06 | 2013-07-10 | Thomson Licensing | Procédé et dispositif de codage d'une vidéo HDR conjointement à une vidéo LDR, procédé et dispositif de reconstruction d'un codage de vidéo HDR conjointement à une vidéo LDR codée et support de stockage non transitoire |
-
2014
- 2014-09-08 WO PCT/EP2014/069063 patent/WO2015036358A1/fr not_active Ceased
- 2014-09-12 TW TW103131455A patent/TW201514924A/zh unknown
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20110164855A1 (en) * | 2008-09-19 | 2011-07-07 | Crockett Brett G | Upstream quality enhancement signal processing for resource constrained client devices |
| WO2010089682A1 (fr) * | 2009-02-03 | 2010-08-12 | Koninklijke Philips Electronics N.V. | Système d'affichage et son procédé de fonctionnement |
| EP2613532A1 (fr) * | 2012-01-06 | 2013-07-10 | Thomson Licensing | Procédé et dispositif de codage d'une vidéo HDR conjointement à une vidéo LDR, procédé et dispositif de reconstruction d'un codage de vidéo HDR conjointement à une vidéo LDR codée et support de stockage non transitoire |
Non-Patent Citations (1)
| Title |
|---|
| SEWOONG OH: "High Dynamic Range Image Encoding for BrightSide Display", 1 January 2006 (2006-01-01), XP055130767, Retrieved from the Internet <URL:http://scien.stanford.edu/pages/labsite/2007/psych221/projects/07/HDR_encoding/SewoongOh_report.pdf> [retrieved on 20140722] * |
Cited By (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2018515047A (ja) * | 2015-03-23 | 2018-06-07 | エルジー エレクトロニクス インコーポレイティド | Hdmiを使用して電力を送受信するための方法及びその装置 |
| US10038871B2 (en) | 2015-03-23 | 2018-07-31 | Lg Electronics Inc. | Method and device for transmitting and receiving power using HDMI |
| WO2016153249A1 (fr) * | 2015-03-23 | 2016-09-29 | 엘지전자(주) | Procédé et dispositif d'émission/réception de puissance au moyen d'une interface hdmi |
| EP3276947A4 (fr) * | 2015-03-23 | 2018-10-31 | LG Electronics Inc. | Procédé et dispositif d'émission/réception de puissance au moyen d'une interface hdmi |
| US10218952B2 (en) | 2016-11-28 | 2019-02-26 | Microsoft Technology Licensing, Llc | Architecture for rendering high dynamic range video on enhanced dynamic range display devices |
| US10104334B2 (en) | 2017-01-27 | 2018-10-16 | Microsoft Technology Licensing, Llc | Content-adaptive adjustment of display device brightness levels when rendering high dynamic range content |
| US10176561B2 (en) | 2017-01-27 | 2019-01-08 | Microsoft Technology Licensing, Llc | Content-adaptive adjustments to tone mapping operations for high dynamic range content |
| WO2018164942A1 (fr) * | 2017-03-06 | 2018-09-13 | E Ink Corporation | Procédé permettant de restituer des images en couleurs |
| US10467984B2 (en) | 2017-03-06 | 2019-11-05 | E Ink Corporation | Method for rendering color images |
| US11094288B2 (en) | 2017-03-06 | 2021-08-17 | E Ink Corporation | Method and apparatus for rendering color images |
| RU2755676C2 (ru) * | 2017-03-06 | 2021-09-20 | Е Инк Корпорэйшн | Способ и устройство для рендеринга цветных изображений |
| US11527216B2 (en) | 2017-03-06 | 2022-12-13 | E Ink Corporation | Method for rendering color images |
| US12100369B2 (en) | 2017-03-06 | 2024-09-24 | E Ink Corporation | Method for rendering color images |
| US10957024B2 (en) | 2018-10-30 | 2021-03-23 | Microsoft Technology Licensing, Llc | Real time tone mapping of high dynamic range image data at time of playback on a lower dynamic range display |
Also Published As
| Publication number | Publication date |
|---|---|
| TW201514924A (zh) | 2015-04-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2015036358A1 (fr) | Procédé et appareil pour décomposer et reconstruire une image à grande gamme dynamique | |
| CN110460745B (zh) | 显示装置 | |
| US10432435B2 (en) | Methods and apparatus for enabling and disabling scrambling of control symbols | |
| CN107209928A (zh) | 用于将hdr画面映射为sdr画面的方法和设备以及相应的sdr到hdr的映射方法和设备 | |
| US10764549B2 (en) | Method and device of converting a high dynamic range version of a picture to a standard-dynamic-range version of said picture | |
| CN107209929A (zh) | 用于对高动态范围图像进行处理的方法和装置 | |
| JP2016517533A (ja) | コンテンツ適応lcdバックライト制御 | |
| WO2016120210A1 (fr) | Procédé et appareil pour effectuer un mappage de tonalité inverse sur une image | |
| JP2018507620A (ja) | カラー・ピクチャを復号する方法および装置 | |
| TWI600312B (zh) | 顯示介面帶寬調制 | |
| JP7058632B2 (ja) | 画像/ビデオ信号に関係するビットストリームを生成する方法及び装置、並びに特定の情報を取得する方法及び装置 | |
| US12167165B2 (en) | Efficient electro-optical transfer function (EOTF) curve for standard dynamic range (SDR) content | |
| KR102657462B1 (ko) | 디스플레이장치 및 그 제어방법 | |
| EP3035678A1 (fr) | Procédé et dispositif de conversion d'une version à grande gamme dynamique d'une image en une version à gamme dynamique standard de ladite image | |
| CN120584357A (zh) | 能量感知sl-hdr | |
| WO2024213419A1 (fr) | Procédé de traitement d'image pour déterminer une carte de fréquence et appareil correspondant | |
| WO2025201839A1 (fr) | Message de volume de couleur d'affichage de matriçage (mdcv) pour réduction d'énergie d'affichage | |
| JP2016059053A (ja) | 非圧縮ビデオ相互接続で伝送される画像データの知覚的な損失のない圧縮 | |
| CN112165633A (zh) | 一种显示信号远传的方法、设备及多屏笔记本电脑 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14761365; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 14761365; Country of ref document: EP; Kind code of ref document: A1 |