WO2012161318A1 - Image encoding device, image decoding device, image encoding method, image decoding method, and program
- Publication number
- WO2012161318A1 (application PCT/JP2012/063503)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- prediction
- unit
- image
- pixel
- processing target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Definitions
- the present invention relates to an image encoding device, an image decoding device, an image encoding method, an image decoding method, and a program.
- In H.264 (Non-patent Document 1), an image is divided into a plurality of blocks called macroblocks, and encoding proceeds by sequentially selecting, for each macroblock, a prediction method with high coding efficiency.
- One such method is the intra-picture predictive coding method (intra prediction coding method), which predicts and encodes an encoding target block using pixel information of blocks that have already been encoded within the same picture.
- Another is the inter-picture predictive coding method (inter prediction coding method), which predicts and encodes an encoding target block with reference to an image different from the image being processed.
- The intra prediction coding method is performed in units of macroblocks (16 pixels × 16 pixels) or in units of 4 pixel × 4 pixel and 8 pixel × 8 pixel sub-blocks (the 8 × 8 pixel unit was added by H.264 FRExt).
- For each unit, an optimal prediction method is selected based on the amount of code necessary for encoding, namely the code amount of the information identifying the prediction mode plus the code amount obtained when the difference image (residual component) between the predicted image generated according to the prescribed prediction mode and the original image to be encoded is encoded (Non-patent Document 1).
- Four types of prediction modes (FIG. 6, described later) can be applied to a 16 pixel × 16 pixel block: one prediction using the DC component (average value prediction) and three predictions using prediction directions (vertical prediction, horizontal prediction, planar prediction).
- Nine types of prediction modes (FIG. 4, described later) can be applied to a 4 pixel × 4 pixel or 8 pixel × 8 pixel block: one prediction using the DC component (average value prediction) and eight predictions using prediction angles (non-uniform angles from 45° to 206.57°).
- To encode the selected prediction mode itself (for example, an index value indicating the mode), a prediction is made using the prediction modes of the blocks above and to the left of the processing target block. A 1-bit flag is prepared, and if the actual mode matches the predicted mode, the flag is set. If it does not match, the flag is not set, and 3 bits of information are added to identify which of the remaining eight prediction modes (excluding the mismatched predicted mode) is used. Thus, when the prediction is correct, only 1 bit of information is required to encode the prediction mode; when it is not, 4 bits are required.
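The 1-bit / 4-bit mode signaling described above can be sketched as follows. This is a simplified illustration; the function name is hypothetical, and the use of min() of the two neighbor modes as the predictor follows H.264's most-probable-mode rule, which the text only paraphrases:

```python
def encode_intra_mode(mode, mode_above, mode_left):
    """Encode a 4x4 intra prediction mode (0-8), H.264-style.

    The predicted (most probable) mode is the minimum of the modes of the
    block above and the block to the left.  If it matches the actual mode,
    a single '1' flag bit is sent; otherwise a '0' flag plus 3 bits that
    index the remaining eight modes.
    """
    predicted = min(mode_above, mode_left)
    if mode == predicted:
        return [1]                                   # 1 bit total
    rem = mode if mode < predicted else mode - 1     # remaining-mode index 0..7
    return [0] + [(rem >> b) & 1 for b in (2, 1, 0)]  # 1 flag + 3 index bits
```

A correct prediction costs 1 bit; a miss costs 4, exactly as described above.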
- To improve the coding efficiency of the intra prediction coding scheme, the invention described in Japanese Patent Application Laid-Open No. H10-228707 extends H.264 so that prediction can be performed at an arbitrary prediction angle.
- Compared with the H.264 method, the number of prediction modes is increased.
- Specifically, a technique is disclosed in which a theoretical reference pixel position is obtained from the prediction angle and the position of the pixel to be processed, and a pixel value corresponding to that position is generated by interpolating the surrounding reference pixels.
- the invention described in Patent Document 2 discloses a method for accurately predicting a prediction mode based on the angle of surrounding prediction modes.
- The present invention has been made in view of such circumstances, and an object of the present invention is to provide an encoding device, a decoding device, an encoding method, a decoding method, and a program that improve the accuracy of a predicted image while suppressing an increase in the amount of code.
- The present invention has been made to solve the above-described problems. According to one aspect of the present invention, there is provided an image encoding apparatus that, when encoding an input image, performs intra prediction to predict the pixel value of a processing target pixel using the pixel values of peripheral pixels around the processing target pixel, the apparatus comprising an intra-screen prediction unit that determines a predicted value for each pixel based on information indicating the distance to the subject at the processing target pixel and information indicating the distance to the subject at the peripheral pixels.
- According to another aspect of the present invention, in the above image encoding apparatus, the intra-screen prediction unit determines a predicted value for each pixel based on the inter-pixel distance between the processing target pixel and the peripheral pixels.
- According to another aspect of the present invention, in the above image encoding apparatus, the intra-screen prediction unit includes a subject boundary detection unit that detects a boundary of the subject using information indicating the distance to the subject in the input image, and when no boundary is detected between the processing target pixel and a pixel adjacent in a predetermined direction, the predicted value of the processing target pixel is predicted using the pixel adjacent in that direction.
- According to another aspect of the present invention, there is provided an image decoding apparatus that performs intra prediction to predict the pixel value of a processing target pixel using the pixel values of peripheral pixels around the processing target pixel, the apparatus comprising an intra-screen prediction unit that, when performing the intra prediction, determines a predicted value for each pixel based on information indicating the distance to the subject at the processing target pixel and information indicating the distance to the subject at the peripheral pixels.
- According to another aspect of the present invention, in the above image decoding apparatus, the intra-screen prediction unit determines a predicted value for each pixel based on the inter-pixel distance between the processing target pixel and the peripheral pixels.
- According to another aspect of the present invention, in the above image decoding apparatus, the intra-screen prediction unit includes a subject boundary detection unit that detects a boundary of the subject using information indicating the distance to the subject in the input image, and when no boundary is detected between the processing target pixel and a pixel adjacent in a predetermined direction, the predicted value of the processing target pixel is predicted using the pixel adjacent in that direction.
- According to another aspect of the present invention, there is provided an image encoding method that performs intra prediction to predict the pixel value of a processing target pixel using the pixel values of peripheral pixels around the processing target pixel, the method including a process of, when performing the intra prediction, determining a predicted value for each pixel based on information indicating the distance to the subject at the processing target pixel and information indicating the distance to the subject at the peripheral pixels.
- According to another aspect of the present invention, there is provided an image decoding method that, when decoding an input image, performs intra-frame prediction to predict the pixel value of a processing target pixel using the pixel values of peripheral pixels around the processing target pixel, the method including a process of determining a predicted value for each pixel based on information indicating the distance to the subject at the processing target pixel and information indicating the distance to the subject at the peripheral pixels.
- According to another aspect of the present invention, there is provided a program for causing an image encoding apparatus that performs intra prediction to predict the pixel value of a processing target pixel using the pixel values of peripheral pixels around the processing target pixel to function as an intra-screen prediction unit that, when performing the intra prediction, determines a predicted value for each pixel based on information indicating the distance to the subject at the processing target pixel and information indicating the distance to the subject at the peripheral pixels.
- According to another aspect of the present invention, there is provided a program for causing a device that, when decoding an input image, performs intra-frame prediction to predict the pixel value of a processing target pixel using the pixel values of peripheral pixels around the processing target pixel to function as an intra-screen prediction unit that determines a predicted value for each pixel based on information indicating the distance to the subject at the processing target pixel and information indicating the distance to the subject at the peripheral pixels.
- FIG. 1 is a schematic block diagram showing a configuration of a moving image transmission system according to an embodiment of the present invention.
- the moving image transmission system 10 in this embodiment includes an image encoding device 100, a communication network 500, an image decoding device 800, and a display device 600.
- The image encoding device 100 encodes the image and the depth map from the image signal R of the image to be encoded and the depth map signal D of the depth map corresponding to that image, and generates and outputs encoded data E.
- the communication network 500 transmits the encoded data E output from the image encoding device 100 to the image decoding device 800.
- the image decoding apparatus 800 decodes the transmitted encoded data E, and generates an image signal R ′ of the decoded image.
- the display device 600 includes an image display device such as a liquid crystal display or a plasma display, and displays an image indicated by the image signal R ′ generated by the image decoding device 800.
- the image encoding device 100 is provided in a television broadcasting station, for example, and encodes a broadcast program.
- the communication network 500 is a communication network that transmits using broadcast waves
- the image decoding device 800 and the display device 600 are provided in a television receiver.
- the Internet or a mobile phone network may be used as the communication network 500.
- the image encoding apparatus 100 is provided in a content holder that edits contents stored and sold on a DVD (Digital Versatile Disc) or a BD (Blu-ray Disc), and encodes these contents.
- the encoded image E is stored in a DVD, a BD, or the like, and is delivered by a delivery network instead of the communication network 500.
- the image decoding device 800 is provided in a DVD player, a BD player, or the like.
- FIG. 2 is a schematic block diagram illustrating a configuration of the image encoding device 100 according to the present embodiment.
- The image coding apparatus 100 includes an image input unit 101, a subtraction unit 102, an orthogonal transformation unit 103, a quantization unit 104, an entropy coding unit 105, an inverse quantization unit 106, an inverse orthogonal transformation unit 107, an addition unit 108, a prediction method control unit 109, a selection unit 110, a deblocking filter unit 111, a frame memory unit 112, a motion compensation unit 113, a motion vector detection unit 114, a depth information use intra prediction unit 115, a depth map encoding unit 116, a depth map decoding unit 117, and a depth input unit 118.
- The deblocking filter unit 111, the frame memory 112, the motion compensation unit 113, and the motion vector detection unit 114 constitute an inter prediction unit 120.
- the depth information use intra prediction unit 115 and the depth map decoding unit 117 constitute an intra prediction unit 121.
- The image input unit 101 acquires, from outside the image encoding device 100, an image signal R (input image signal) indicating the image to be encoded (input image), for example, every five frames (the types of the five frames will be described later).
- the image input unit 101 divides the input image frame represented by the acquired input image signal into blocks having a predetermined size (for example, 16 pixels in the vertical direction ⁇ 16 pixels in the horizontal direction).
- the image input unit 101 outputs an image block signal B representing each of the divided blocks to the subtraction unit 102, the motion vector detection unit 114, and the depth information use intra prediction unit 115.
- The image input unit 101 repeats this process while sequentially changing the block position until the output for all the blocks in the image frame is completed, and does so for each image frame of the acquired image.
- the input image to the image encoding device 100 includes at least a reference image (base view).
- the reference image is an image of one predetermined viewpoint included in a multi-view (multi-view) moving image for stereoscopic display, and is an image serving as a basis for calculating a depth map.
- The depth map is distance information representing the depth, or distance from the photographing device, of the subject represented in the reference image, and consists of a quantized value given for each pixel of the reference image. Each quantized value is called a depth value and is, for example, a value quantized with 8 bits.
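The 8-bit depth values can be illustrated with a small quantizer. The inverse-depth mapping and the near/far clipping planes below are assumptions for illustration; the text only states that each pixel carries an 8-bit quantized value:

```python
def quantize_depth(z, z_near, z_far):
    """Map a physical distance z (z_near <= z <= z_far) to an 8-bit depth value.

    Inverse-depth quantization (an assumption, common in multi-view coding)
    gives nearer subjects finer resolution; 255 = nearest, 0 = farthest.
    """
    v = (1.0 / z - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far)
    return round(255 * v)
```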
- The image signal R for every five frames input to the image input unit 101 includes, for example, the image signals of an I picture (I0), a B picture (B3), a B picture (B2), a B picture (B4), and a P picture (P1).
- The image signal R input to the image encoding device 100 is input in this order (hereinafter, the input order).
- The leading letter (I, B, P) indicates the type of image, and the number (0, etc.) indicates the order of encoding (hereinafter, the encoding order); the input order and the encoding order therefore differ.
- An I picture is an intra-frame picture (Intra Frame Picture), which can be decoded using only the code obtained by encoding that picture itself.
- A P picture is an inter-frame forward predicted picture (Predictive Picture), which can be decoded using the code obtained by encoding that picture and the code obtained by encoding the image signal of a past frame.
- A B picture is a bi-directionally predicted picture (Bi-directional Predictive Picture), which can be decoded using the code obtained by encoding that picture and the codes obtained by encoding the image signals of a plurality of past or future frames.
- the subtraction unit 102 subtracts the prediction image block signal output from the selection unit 110 from the image block signal output from the image input unit 101 to generate a difference image block signal.
- the subtraction unit 102 outputs the generated difference image block signal to the orthogonal transformation unit 103.
- The orthogonal transform unit 103 performs an orthogonal transform on the difference image block signal output from the subtraction unit 102 to generate a signal indicating the strength of each frequency characteristic.
- Specifically, the orthogonal transform unit 103 applies, for example, a DCT (discrete cosine transform) to the difference image block signal to generate a frequency domain signal (for example, DCT coefficients).
- The orthogonal transform is not limited to the DCT; other methods (for example, the FFT (Fast Fourier Transform)) may be used.
- the orthogonal transform unit 103 outputs the coefficient value included in the generated frequency domain signal to the quantization unit 104.
- The quantization unit 104 quantizes the coefficient values indicating the frequency characteristic strengths output from the orthogonal transform unit 103, and outputs the generated quantized signal ED (difference image block code) to the entropy encoding unit 105 and the inverse quantization unit 106.
- the inverse quantization unit 106 performs inverse quantization on the quantized signal ED output from the quantization unit 104 to generate a decoded frequency domain signal, and outputs the decoded frequency domain signal to the inverse orthogonal transform unit 107.
- The inverse orthogonal transform unit 107 performs, for example, an inverse DCT on the input decoded frequency domain signal to generate a decoded difference image block signal, which is a spatial domain signal.
- The inverse orthogonal transform unit 107 is not limited to the inverse DCT; other methods (for example, the IFFT (Inverse Fast Fourier Transform)) may be used.
- the inverse orthogonal transform unit 107 outputs the generated decoded difference image block signal to the addition unit 108.
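The transform / inverse-transform pair described above can be sketched with a floating-point orthonormal 2-D DCT. This is a simplification: H.264 actually uses an integer approximation of the DCT, and real codecs use fast algorithms rather than this direct form:

```python
import math

def dct2(block):
    """2-D DCT-II of an N x N block: spatial domain -> DCT coefficients."""
    n = len(block)
    def c(k):  # orthonormal scaling factor
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    return [[c(u) * c(v) * sum(block[i][j]
                * math.cos((2 * i + 1) * u * math.pi / (2 * n))
                * math.cos((2 * j + 1) * v * math.pi / (2 * n))
                for i in range(n) for j in range(n))
             for v in range(n)] for u in range(n)]

def idct2(coef):
    """Inverse 2-D DCT (DCT-III): recovers the spatial-domain block."""
    n = len(coef)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    return [[sum(c(u) * c(v) * coef[u][v]
                * math.cos((2 * i + 1) * u * math.pi / (2 * n))
                * math.cos((2 * j + 1) * v * math.pi / (2 * n))
                for u in range(n) for v in range(n))
             for j in range(n)] for i in range(n)]
```

Because the transform is orthonormal, `idct2(dct2(block))` reproduces the original block up to floating-point rounding, mirroring the encoder's internal decode path.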
- the addition unit 108 acquires the predicted image block signal from the selection unit 110 and acquires the decoded difference image block signal from the inverse orthogonal transform unit 107.
- The addition unit 108 adds the decoded difference image block signal to the predicted image block signal, and generates a reference image block signal RB, which is the result of encoding and then decoding the input image (local decoding).
- the reference image block signal RB is output to the inter prediction unit 120 and the intra prediction unit 121.
- the inter prediction unit 120 acquires the reference image block signal RB from the addition unit 108 and acquires the image block signal from the image input unit 101.
- the inter prediction unit 120 performs inter prediction using these signals, and generates an inter prediction image block signal.
- the inter prediction unit 120 outputs the generated inter prediction image block signal to the prediction method control unit 109 and the selection unit 110.
- the inter prediction unit 120 outputs the generated inter prediction coding information IPE to the prediction scheme control unit 109.
- the inter prediction unit 120 will be described later.
- the intra prediction unit 121 acquires the reference image block signal RB from the addition unit 108, acquires the image block signal from the image input unit 101, and acquires depth map encoded data from the depth map encoding unit 116.
- the intra prediction unit 121 performs intra prediction using these signals and data, and generates an intra predicted image block signal.
- the intra prediction unit 121 outputs the generated intra prediction image block signal to the prediction scheme control unit 109 and the selection unit 110.
- the intra prediction unit 121 outputs the generated intra prediction encoding information TPE to the prediction scheme control unit 109.
- the intra prediction unit 121 will be described later.
- the depth input unit 118 acquires the depth map signal D of the depth map corresponding to the input image input to the image input unit 101 from the outside of the image encoding device 100.
- The depth input unit 118 divides the acquired depth map into depth block signals at the same positions and with the same block size as the input image blocks divided by the image input unit 101, and outputs them to the depth map encoding unit 116.
- The depth map encoding unit 116 encodes the depth block signal output from the depth input unit 118 using, for example, variable length encoding (entropy encoding), and generates depth map encoded data E2 with a compressed data amount.
- the depth map encoding unit 116 outputs the generated depth map encoded data E2 to the intra prediction unit 121 and the outside of the image encoding device 100 (for example, the image decoding device 800 via the communication network 500).
- the inter prediction unit 120 includes a deblocking filter unit 111, a frame memory 112, a motion compensation unit 113, and a motion vector detection unit 114.
- The deblocking filter unit 111 acquires the reference image block signal RB from the addition unit 108, and applies, for example, the FIR (Finite Impulse Response) filter processing used in a known encoding method (for example, H.264 Reference Software JM ver. 13.2 Encoder, http://iphome.hhi.de/suehring/tml/, 2008) to reduce the block distortion that occurs when the image is encoded.
- the deblocking filter unit 111 outputs the processing result (correction block signal) to the frame memory 112.
- the frame memory 112 holds the correction block signal output from the deblocking filter unit 111 as a part of the image of the frame number together with information for identifying the frame number.
- The motion vector detection unit 114 searches the images stored in the frame memory 112 for a block similar to the image block signal input from the image input unit 101 (block matching), and generates vector information (a motion vector) representing the block found.
- In block matching, the motion vector detection unit 114 calculates an index value between the divided block and each candidate region, and searches for the region where the calculated index value is minimum.
- The motion vector detection unit 114 finds, for example, two regions: the block in the reference image region having the smallest index value and the block in the reference image region having the next smallest index value.
- As the index value, the motion vector detection unit 114 uses, for example, the sum of absolute differences (SAD: Sum of Absolute Difference) between the luminance values of the pixels included in the divided block and the luminance values in a region of the reference image.
- The SAD between a block divided from the input image signal (for example, of size N × N pixels) and a block of the reference image signal is expressed by the following equation (1):
- SAD(p, q) = Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} | Iin(i0 + i, j0 + j) − Iref(i0 + i + p, j0 + j + q) | … (1)
- Iin (i0 + i, j0 + j) represents the luminance value at the coordinates (i0 + i, j0 + j) of the input image
- (i0, j0) represents the pixel coordinates of the upper left corner of the divided block
- Iref (i0 + i + p, j0 + j + q) is a luminance value at the coordinates (i0 + i + p, j0 + j + q) of the reference image
- (p, q) is a shift amount (motion vector) based on the coordinates of the upper left corner of the divided block.
- the motion vector detection unit 114 calculates SAD (p, q) for each (p, q) in block matching, and finds (p, q) that minimizes SAD (p, q).
- (P, q) represents a vector (motion vector) from the divided block in the input image to the position of the reference region in the reference image.
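Equation (1) and the full-search block matching built on it can be sketched as follows. The helper names and the square ± search window are illustrative choices, not from the text:

```python
def sad(iin, iref, i0, j0, p, q, n):
    """SAD of equation (1): compare the N x N input-image block whose
    upper-left corner is (i0, j0) with the reference image shifted by (p, q)."""
    return sum(abs(iin[i0 + i][j0 + j] - iref[i0 + i + p][j0 + j + q])
               for i in range(n) for j in range(n))

def find_motion_vector(iin, iref, i0, j0, n, search):
    """Full-search block matching: test every (p, q) in a +/- search window
    that keeps the shifted block inside the reference image, and return the
    shift minimizing SAD(p, q) -- the motion vector."""
    candidates = [(p, q)
                  for p in range(-search, search + 1)
                  for q in range(-search, search + 1)
                  if 0 <= i0 + p and i0 + p + n <= len(iref)
                  and 0 <= j0 + q and j0 + q + n <= len(iref[0])]
    return min(candidates, key=lambda pq: sad(iin, iref, i0, j0, pq[0], pq[1], n))
```

If the input block is an exact copy of a shifted reference region, the search recovers that shift with SAD = 0.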
- the motion compensation unit 113 acquires a motion vector from the motion vector detection unit 114, and outputs the corresponding reference block to the prediction scheme control unit 109 and the selection unit 110 as an inter prediction image block signal.
- The motion compensation unit 113 outputs the corresponding image block when one motion vector is output from the motion vector detection unit 114, and averages and outputs the two corresponding image blocks when two motion vectors are output.
- In addition, the motion compensation unit 113 outputs the information necessary for prediction (hereinafter, inter prediction coding information IPE), for example the motion vector, to the prediction scheme control unit 109.
- the intra prediction unit 121 includes a depth map decoding unit 117 and a depth information use intra prediction unit 115.
- The depth map decoding unit 117 decodes the depth map encoded data output from the depth map encoding unit 116 using, for example, variable length decoding, restoring the depth block signal, which has a larger amount of information.
- the depth map decoding unit 117 outputs the decoded depth map D ′ (depth block decoded signal) to the depth information use intra prediction unit 115.
- FIG. 3 is a schematic block diagram illustrating a configuration of the depth information use intra prediction unit 115 according to the present embodiment.
- the processing of the depth information use intra prediction unit 115 will be described with reference to FIG.
- The depth information use intra prediction unit 115 includes a first prediction mode execution unit 200-1 to an n-th prediction mode execution unit 200-n (n is a natural number of 1 or more, for example, 6), a depth use prediction mode execution unit 201, and a prediction mode selection unit 202.
- The first prediction mode execution unit 200-1 to the n-th prediction mode execution unit 200-n generate first to n-th predicted image block signals from the reference image block signal RB output from the addition unit 108, according to the processing (predicted image block generation method) of each prediction mode.
- the first prediction mode execution unit 200-1 to the n-th prediction mode execution unit 200-n output the generated first to n-th prediction image block signals to the prediction mode selection unit 202.
- Each of the first prediction mode execution unit 200-1 to the n-th prediction mode execution unit 200-n performs in-screen prediction (intra prediction) using, for example, a conventional intra-screen prediction mode (for example, H.264 Reference Software JM ver. 13.2 Encoder, http://iphome.hhi.de/suehring/tml/, 2008).
- In H.264, there are nine types of intra prediction applied to the 4 × 4 pixel sub-blocks obtained by further dividing a macroblock, and four types of intra prediction applied in macroblock units (intra prediction using 8 × 8 pixel sub-blocks was formulated in H.264 FRExt, and the same intra-screen prediction method as for 4 × 4 pixels is applied).
- the first prediction mode execution unit 200-1 performs intra prediction (intra-screen prediction) using, for example, 4 ⁇ 4 sub-blocks.
- the second prediction mode execution unit 200-2 performs intra prediction using, for example, 8 ⁇ 8 sub-blocks.
- the third prediction mode execution unit 200-3 to the sixth prediction mode execution unit 200-6 perform four types of prediction methods, for example, in units of 16 ⁇ 16 macroblocks.
- The first prediction mode execution unit 200-1 further divides the reference image block signal output from the addition unit 108 into 4 × 4 pixel sub-blocks and executes the prediction methods in units of 4 × 4 pixels in the order shown in FIG. That is, the 16 × 16 pixel block is divided into four 8 × 8 pixel blocks, which are processed in the order upper left, upper right, lower left, lower right. Each of these 8 × 8 pixel blocks is in turn divided into four 4 × 4 pixel sub-blocks, and within each 8 × 8 pixel block, intra prediction is performed on the sub-blocks in the order upper left, upper right, lower left, lower right.
- The first prediction mode execution unit 200-1 calculates, for the 4 × 4 pixel predicted image block generated by each of the nine prediction methods and the corresponding sub-block of the image block signal B output from the image input unit 101, an index indicating their degree of correlation, and selects a prediction method for each sub-block based on that index.
- For example, the first prediction mode execution unit 200-1 calculates the sum of absolute differences (SAD) of the luminance values as the index, selects the prediction method with the smallest SAD value as the prediction method of the corresponding 4 × 4 pixel sub-block, and generates the first predicted image block signal at the corresponding position. The selected prediction method is also retained. The first prediction mode execution unit 200-1 repeats the above process until the prediction methods for the whole 16 × 16 pixels and the first predicted image block signal have been generated.
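As a sketch of this per-sub-block selection, the following implements three of the nine 4 × 4 prediction methods (vertical, horizontal, DC; the six angled modes are omitted) and picks the one with the smallest SAD. Function names are illustrative, not from the text:

```python
def predict_4x4(mode, top, left):
    """Generate a 4x4 predicted block from the neighboring reference pixels
    for three of the nine modes (0 = vertical, 1 = horizontal, 2 = DC)."""
    if mode == 0:                               # vertical: copy the row above
        return [top[:] for _ in range(4)]
    if mode == 1:                               # horizontal: copy the left column
        return [[left[i]] * 4 for i in range(4)]
    dc = (sum(top) + sum(left) + 4) // 8        # DC: rounded average of neighbors
    return [[dc] * 4 for _ in range(4)]

def select_mode(block, top, left):
    """Pick the mode whose prediction minimizes SAD against the source block."""
    def sad(pred):
        return sum(abs(block[i][j] - pred[i][j])
                   for i in range(4) for j in range(4))
    return min((0, 1, 2), key=lambda m: sad(predict_4x4(m, top, left)))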
- The second prediction mode execution unit 200-2 further divides the reference image block signal RB output from the addition unit 108 into four 8 × 8 pixel sub-blocks, applies the same nine prediction methods (prediction mode 0 to prediction mode 8) used in the first prediction mode execution unit 200-1 to each 8 × 8 pixel sub-block, and generates a predicted image. The prediction method is retained at the same time.
- The second prediction mode execution unit 200-2 repeats the above processing, sequentially determining the prediction method in units of 8 × 8 pixel sub-blocks, and generates the prediction methods for the entire 16 × 16 pixel block and the predicted image block signal based on them.
- The third prediction mode execution unit 200-3 to the sixth prediction mode execution unit 200-6 perform 16 × 16 pixel intra prediction (intra-screen prediction), generating from the reference image block signal output from the addition unit 108 the predicted image block signals corresponding to prediction modes 0 to 3 in FIG. 6.
- The depth use prediction mode execution unit 201 obtains the reference image block signal from the addition unit 108 and the depth block decoded signal from the depth map decoding unit 117, and uses the depth map to perform prediction that suppresses prediction across subject boundaries. Details of the depth use prediction mode execution unit 201 will be described later.
- the depth use prediction mode execution unit 201 outputs the prediction image block signal and the prediction method to the prediction mode selection unit 202.
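The idea of using the depth map to keep prediction from crossing a subject boundary can be sketched as follows. This is an interpretation for illustration only: the fixed depth-difference threshold, the function name, and the restriction to the top/left neighbors are assumptions; the actual criterion of the depth use prediction mode execution unit 201 is given in the detailed description:

```python
def depth_aware_dc(block_depth, top_pix, top_depth, left_pix, left_depth,
                   threshold=8):
    """For each target pixel, average only those neighboring reference pixels
    whose depth value is close to the target's depth (i.e. the same subject),
    so that prediction does not cross a subject boundary."""
    n = len(block_depth)
    pred = []
    for i in range(n):
        row = []
        for j in range(n):
            refs = [(top_pix[j], top_depth[j]), (left_pix[i], left_depth[i])]
            same = [p for p, d in refs
                    if abs(d - block_depth[i][j]) < threshold]
            # fall back to a plain average when every reference crosses a boundary
            vals = same if same else [p for p, _ in refs]
            row.append(sum(vals) // len(vals))
        pred.append(row)
    return pred
```

With a vertical depth edge through the block, pixels on the far side are predicted only from references on their own side of the edge, instead of being contaminated by the other subject.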
- the prediction mode selection unit 202 acquires the prediction image block signal generated by the first prediction mode execution unit 200-1 to the nth prediction mode execution unit 200-n and the depth use prediction mode execution unit 201 and information necessary for prediction. To do.
- the information necessary for prediction includes, for example, information indicating the prediction mode applied to each sub-block by the first prediction mode execution unit 200-1 and the second prediction mode execution unit 200-2 (which perform processing by further dividing the 16×16 pixels into sub-blocks), and information indicating the prediction mode, i.e. the prediction direction, of the depth use prediction mode execution unit 201.
- the prediction mode selection unit 202 selects, from the obtained prediction image block signals (including the prediction image block signal output by the depth use prediction mode execution unit 201), the one prediction image block signal having the smallest index value.
- as the index, the prediction mode selection unit 202 uses, for example, as shown by the following equation, the SAD between the luminance values Iin(i0+i, j0+j) of the corresponding image block included in the input image from the image input unit 101 and the luminance values Ip,m(i0+i, j0+j) of each candidate prediction image block.
- m is an index identifying a prediction mode of a given prediction mode execution unit. Accordingly, Ip,m(x, y) is the luminance value at the coordinates (x, y) of the predicted image in prediction mode m. Further, i0 and j0 are the coordinates of the upper left vertex of the block, and N is the size of the block (the number of pixels on one side).
- as the index value, any variable representing the effectiveness of each prediction mode can be used, such as the correlation or the similarity between the image block included in the input image and the candidate predicted image block, or the amount of information after encoding.
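The SAD-based selection described above can be sketched as follows. This is an illustrative Python reconstruction, not the patent's implementation; the block representation (lists of luminance rows) and the candidate mapping are assumptions.

```python
# Hedged sketch: pick the prediction mode m whose candidate block minimizes the
# SAD against the input image block, as in the index described in the text.

def sad(input_block, predicted_block):
    """Sum of absolute differences between two equally sized luminance blocks."""
    return sum(
        abs(a - b)
        for row_in, row_pred in zip(input_block, predicted_block)
        for a, b in zip(row_in, row_pred)
    )

def select_prediction_mode(input_block, candidates):
    """candidates: dict mapping mode index m -> candidate predicted block.
    Returns the mode index with the smallest SAD value."""
    return min(candidates, key=lambda m: sad(input_block, candidates[m]))
```

Any other effectiveness measure (correlation, post-encoding bit count) could be substituted for `sad` without changing the selection structure.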
- the prediction mode selection unit 202 generates prediction mode information including an index representing the selected prediction mode. When the selected mode is one for which additional information necessary for prediction exists (specifically, the prediction modes of the first prediction mode execution unit 200-1, the second prediction mode execution unit 200-2, and the depth use prediction mode execution unit 201), the prediction mode selection unit 202 combines the index with that information to generate the prediction mode information.
- the prediction mode selection unit 202 outputs the selected prediction image block signal (hereinafter referred to as the intra prediction image block signal) to the selection unit 110 and the prediction scheme control unit 109, and outputs the prediction mode information (hereinafter referred to as the intra prediction encoding information TPE) to the prediction scheme control unit 109.
- the prediction scheme control unit 109 receives the picture type of the input image, the inter prediction image block signal and its inter prediction encoding information IPE from the inter prediction unit 120, and the intra prediction image block signal and its intra prediction encoding information TPE from the intra prediction unit 121; it determines a prediction scheme based on these inputs and outputs information on the prediction scheme to the selection unit 110 and the entropy encoding unit 105.
- the prediction method control unit 109 monitors the picture type of the input image, and selects an intra prediction method when the input image is an I picture.
- for pictures other than I pictures, the prediction scheme control unit 109 calculates, for example, a Lagrange cost by a conventional technique from the number of bits generated by the encoding performed by the entropy encoding unit 105 and the residual between the prediction and the original image at the subtraction unit 102 (see, for example, H.264 Reference Software JM ver. 13.2 Encoder, http://iphome.hhi.de/suehring/tml/, 2008), and selects either the inter prediction method or the intra prediction method.
- the prediction scheme control unit 109 adds information that can specify the selected prediction method to the encoding information corresponding to that method (the inter prediction encoding information IPE or the intra prediction encoding information TPE), and outputs the result to the entropy encoding unit 105 as the prediction encoding information.
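As a rough illustration of the Lagrange-cost selection mentioned above, the sketch below assumes the common cost form J = D + λ·R (distortion plus weighted rate); the exact cost definition and λ used by the JM reference software may differ.

```python
# Hedged sketch of rate-distortion mode selection between inter and intra
# prediction: the scheme with the smaller Lagrange cost J = D + lambda * R wins.
# The value of lam here is purely illustrative.

def lagrange_cost(distortion, bits, lam):
    """J = D + lambda * R."""
    return distortion + lam * bits

def choose_scheme(inter_dist, inter_bits, intra_dist, intra_bits, lam=0.85):
    """Return 'inter' or 'intra', whichever has the lower Lagrange cost."""
    j_inter = lagrange_cost(inter_dist, inter_bits, lam)
    j_intra = lagrange_cost(intra_dist, intra_bits, lam)
    return "inter" if j_inter <= j_intra else "intra"
```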
- the selection unit 110 selects the inter prediction image block signal input from the inter prediction unit 120 or the intra prediction image block signal input from the intra prediction unit 121 according to the prediction method information input from the prediction method control unit 109. Then, the predicted image block signal is output to the subtracting unit 102 and the adding unit 108.
- when the prediction method input from the prediction scheme control unit 109 is inter prediction, the selection unit 110 selects and outputs the inter prediction image block signal input from the inter prediction unit 120; when the input prediction method is intra prediction, the selection unit 110 selects and outputs the intra prediction image block signal input from the intra prediction unit 121.
- the entropy encoding unit 105 packs the difference image code input from the quantization unit 104 and the prediction encoding information input from the prediction scheme control unit 109, and compresses the amount of information using, for example, variable length coding (entropy coding) to generate the encoded data E1.
- the entropy encoding unit 105 outputs the generated encoded data E1 to the outside of the image encoding device 100 (for example, to the image decoding device 800 via the communication network 500).
- In-screen prediction is performed by predicting pixels of a processing target block using surrounding pixels as described above.
- a predicted image block signal is created by sequentially copying neighboring pixels that have been processed in the prediction direction.
- when the pixels of the processing target block can be accurately predicted by this intra prediction, the difference (residual) between the pixels of the processing target block and the pixels of the prediction block becomes small, and as a result the code amount can be reduced (or the error during decoding can be reduced).
- the prediction directions of the intra prediction using depth performed by the depth use prediction mode execution unit 201 in the present embodiment are the prediction in the vertical direction (prediction mode 0) and the prediction in the horizontal direction (prediction mode 1) shown in FIG. 6.
- prediction mode 0: prediction in the vertical direction
- prediction mode 1: prediction in the horizontal direction
- the processing described below can also be applied to the other prediction directions in FIG. 6 (except prediction mode 2), and likewise to the prediction methods in units of sub-blocks in FIG. 4 (except prediction mode 2). That is, as in the present embodiment, a new prediction mode may be added while the conventional prediction modes are retained, or the prediction method performed by the depth use prediction mode execution unit 201 may replace a conventional method so that the number of modes is not increased.
- the following describes an example in which a depth use prediction mode is newly added.
- FIGS. 7 and 8 are diagrams for explaining the processing concept of the depth use prediction mode execution unit 201.
- the graphic indicated by a circle indicates a pixel for which processing has been completed, and can be referred to when a predicted pixel block is generated.
- a graphic indicated by a square indicates a pixel to be processed, and is a target that is predicted using pixels that can be referred to in the vicinity.
- the arrow indicates the direction of prediction, and pixels that can be referred to are sequentially predicted (specifically, simply copied) in the direction of the arrow. That is, in the prediction mode of FIG. 7 the pixel value is copied in the vertical direction, and in the prediction mode of FIG. 8 the pixel value is copied in the horizontal direction. In FIGS. 7 and 8, a thick broken line indicates the boundary of the subject.
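The conventional directional copy illustrated in FIGS. 7 and 8 can be sketched as below. The helper names `top_row` and `left_col` are hypothetical; they stand for the already processed reference pixels adjacent to the N×N processing target block.

```python
# Minimal sketch of copy-based directional intra prediction: every processing
# target pixel takes the value of the reference pixel in the prediction direction.

def predict_vertical(top_row, n):
    """Prediction mode 0: copy the reference pixel above down each column."""
    return [list(top_row) for _ in range(n)]

def predict_horizontal(left_col, n):
    """Prediction mode 1: copy the reference pixel on the left across each row."""
    return [[left_col[i]] * n for i in range(n)]
```

When a subject boundary crosses the block, this plain copy propagates values from the wrong subject, which is exactly the failure the depth-based control below addresses.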
- FIG. 9 is a schematic block diagram illustrating a configuration of the depth use prediction mode execution unit 201 according to the present embodiment.
- the depth use prediction mode execution unit 201 includes a boundary control prediction image generation unit 300, a boundary prediction control unit 301, and a subject boundary detection unit 302.
- the subject boundary detection unit 302 acquires a depth block signal representing a depth value of a pixel corresponding to the image block signal B to be processed from the depth map decoding unit 117, and detects a depth edge.
- Depth edge detection is performed by thresholding the difference between adjacent pixels in the depth map. Whether or not the depth edge exists in the horizontal direction is determined by whether or not the absolute value of the difference between pixels adjacent in the vertical direction is larger than the threshold value TV, as shown in Expression (3). Similarly, whether or not the depth edge exists in the vertical direction is determined by whether or not the absolute value of the difference between pixels adjacent in the horizontal direction is larger than the threshold value TH.
- D (i, j) represents a depth map value at the pixel position (i, j).
- TV and TH are threshold values used when determining whether or not edges exist in the horizontal direction and the vertical direction, respectively.
- each threshold is, for example, 10.
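The thresholding just described can be sketched as follows. Expressions (3) and (4) are not reproduced in this text, so the exact indexing is an assumption: here `depth[i][j]` is D(i, j) with i as the row, and the example thresholds of 10 are taken from the text.

```python
# Hedged sketch of depth edge detection: a horizontal edge is flagged when
# vertically adjacent depth values differ by more than TV, and a vertical edge
# when horizontally adjacent depth values differ by more than TH.

TV = 10  # threshold for horizontal edges (difference of vertical neighbors)
TH = 10  # threshold for vertical edges (difference of horizontal neighbors)

def horizontal_edge(depth, i, j, tv=TV):
    """True if a depth edge lies between rows i-1 and i at column j."""
    return abs(depth[i][j] - depth[i - 1][j]) > tv

def vertical_edge(depth, i, j, th=TH):
    """True if a depth edge lies between columns j-1 and j at row i."""
    return abs(depth[i][j] - depth[i][j - 1]) > th
```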
- as an example of the depth edge detection result obtained by the above method, a case where a depth edge is detected as shown by the thick dotted lines in FIGS. 7 and 8 will be described.
- since the position of a depth edge coincides with a subject boundary, when prediction is performed across this edge, the prediction accuracy is remarkably lowered at the pixel straddling the edge and at the subsequent pixels in the prediction direction.
- the boundary prediction control unit 301 controls the prediction performed by the boundary control predicted image generation unit 300 using the boundary information (depth edge) of the subject in the horizontal direction and the vertical direction input from the subject boundary detection unit 302. Specifically, when there is a depth edge perpendicular to the prediction direction, the boundary prediction control unit 301 performs control to suppress copying from pixels adjacent to the prediction direction.
- the control for suppressing copying of pixels in the prediction direction is, for example, controlling the processing in the boundary control predicted image generation unit 300 as follows.
- the boundary control predicted image generation unit 300 acquires the reference image block signal RB from the addition unit 108 and generates a predicted image block signal as follows.
- the prediction modes of the boundary control prediction image generation unit 300 include a prediction mode in which the prediction direction is vertical as shown in FIG. 7 and a prediction mode in which the prediction direction is horizontal as shown in FIG. 8 (two types of predicted image block signals are generated).
- when there is no depth edge perpendicular to the prediction direction, the boundary prediction control unit 301 causes the boundary control predicted image generation unit 300 to process in the same manner as the conventional prediction method; that is, the boundary prediction control unit 301 controls the boundary control prediction image generation unit 300 so as to copy the pixel value of the pixel immediately preceding the processing target pixel in the prediction direction.
- for example, the boundary prediction control unit 301 controls the boundary control prediction image generation unit 300 so that the pixel value of the pixel Pv1 is copied as the pixel value of the pixel Qv1.
- the boundary prediction control unit 301 controls the boundary control prediction image generation unit 300 to perform the following processing.
- the boundary control prediction image generation unit 300 generates a prediction pixel according to the following formula when there is a depth edge perpendicular to the prediction direction.
- Expression (5) is an expression for generating a prediction pixel when a depth edge exists in the horizontal direction in the prediction mode in the vertical direction.
- Expression (6) is an expression for generating a prediction pixel when there is a depth edge in the vertical direction in the horizontal prediction mode. Since the basic processing is the same in the horizontal and vertical directions, Expression (5), the case of a horizontal depth edge, will be described below.
- G[x] on the left side is the predicted pixel value of the pixel x.
- the evaluation formula is formed from two terms.
- the first term represents the difference between the depth value of the processing target pixel and the depth value of each candidate pixel in pre.
- the second term (Dis(Qvi, pre)) represents the distance between the position of the processing target pixel and the pixel position of each pixel in pre.
- the first term serves to select a pixel whose depth value is close to that of the processing target pixel, so that a pixel showing what is considered to be the same subject as the one appearing at the processing target pixel can be referred to as much as possible.
- this term also suppresses the use of a pixel that has a subject boundary between itself and the processing target pixel, as in the control by the boundary prediction control unit 301.
- the second term is a term for selecting pixels as close as possible to the pixel to be processed.
- α and β, by which the respective terms are multiplied, are constants for changing the weighting between the first term and the second term. Specifically, for example, α is 0.1 and β is 1.0.
- in the evaluation formula above, the sum of the first term and the second term is used, but a ratio may be used instead. Further, only the first term may be used.
- in the above, formulas (5) and (6) are used only when there is a depth edge between the processing target pixel and the previous pixel in the prediction direction; alternatively, formulas (5) and (6) may be used for all pixels.
- in this way, when there is a depth edge (subject boundary) between the processing target pixel and the previous pixel (adjacent pixel) in the prediction direction (predetermined direction), the boundary prediction control unit 301 causes the boundary control predicted image generation unit 300 to use the above equations (5) and (6), thereby suppressing the use of the pixel value of the previous pixel in the prediction direction.
- the first term is the difference in depth value between the processing target pixel and the pixel (peripheral pixel) in the previous column (or row) in the prediction direction. Therefore, it is possible to suppress the use of peripheral pixels that have a subject boundary between the processing target pixels and have a large difference in depth value.
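The two-term evaluation of expressions (5) and (6) can be sketched as below. Since the expressions themselves are not reproduced in this text, the form α·|depth difference| + β·Dis is assumed from the surrounding description; the city-block distance used for Dis() and the candidate representation are also assumptions, while α = 0.1 and β = 1.0 follow the example values in the text.

```python
# Hedged sketch: among the candidate reference pixels 'pre', choose the one
# minimizing alpha * |depth difference| + beta * spatial distance, and copy its
# pixel value as the prediction for the processing target pixel.

def select_reference(target_pos, target_depth, pre, alpha=0.1, beta=1.0):
    """pre: list of (position, depth, pixel_value) tuples for candidate pixels.
    Returns the pixel value of the candidate with the smallest evaluation score."""
    def dis(p, q):
        # city-block distance, an assumed form of Dis()
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def score(cand):
        pos, depth, _ = cand
        return alpha * abs(target_depth - depth) + beta * dis(target_pos, pos)

    return min(pre, key=score)[2]
```

With these weights, a nearby pixel on the far side of a large depth step loses to a slightly more distant pixel with a matching depth, which is the intended boundary-suppressing behavior.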
- the boundary control predicted image generation unit 300 generates a predicted image block predicted in the horizontal direction and the vertical direction.
- the boundary control predicted image generation unit 300 determines the correlation between the image block input from the image input unit 101 and the predicted image block predicted in each of the two prediction modes using, for example, the SAD value. As a result of this determination, the boundary control prediction image generation unit 300 selects the prediction image block having the higher correlation (i.e., the more similar one) and outputs it to the prediction mode selection unit 202.
- the boundary control prediction image generation unit 300 also outputs prediction encoding information indicating the prediction mode of the selected prediction image block to the prediction mode selection unit 202. In this way, since control is performed to suppress pixel prediction that continues across a boundary (subject boundary) in the depth map indicating the distance to the subject, the prediction accuracy can be improved.
- FIG. 10 is a flowchart showing an image encoding process performed by the image encoding apparatus 100 according to the present embodiment.
- Step S201 The image encoding apparatus 100 acquires an image for each frame and a depth map corresponding to the image from the outside. Thereafter, the process proceeds to step S202.
- Step S202 The image input unit 101 divides the input image signal for each frame acquired from the outside of the image encoding device 100 into blocks of a predetermined size (for example, 16 pixels in the vertical direction × 16 pixels in the horizontal direction).
- the depth input unit 118 divides the depth map synchronized with the image input to the image input unit 101 in the same manner as the image division performed by the image input unit 101, and sends the depth map to the depth map encoding unit 116. Output.
- the image coding apparatus 100 repeats the processing from step S203 to step S211 for each image block in the frame.
- Step S203 The depth map encoding unit 116 encodes the depth map input from the depth input unit 118, and outputs the depth map encoded data, whose data amount is further compressed, to the intra prediction unit 121 and to the outside of the image encoding device 100 (for example, to the image decoding device 800). Thereafter, the process of step S204 and the process of step S205 are performed in parallel.
- Step S204 The inter prediction unit 120 acquires an image block signal from the image input unit 101, and acquires the reference image block signal decoded by the addition unit 108. The inter prediction unit 120 performs inter prediction using these acquired signals. The inter prediction unit 120 outputs the inter prediction image block signal generated by the inter prediction to the prediction method control unit 109 and the selection unit 110, and outputs the inter prediction coding information IPE to the prediction method control unit 109.
- a reset image block: an image block signal in which all pixel values are 0
- thereafter, the process proceeds to step S206.
- Step S205 The intra prediction unit 121 acquires an image block signal from the image input unit 101, acquires the depth map encoded data from the depth map encoding unit 116, and acquires the reference image block signal decoded by the addition unit 108.
- the intra prediction unit 121 performs intra prediction using these acquired signals.
- the intra prediction unit 121 outputs the intra prediction image block signal generated by the intra prediction to the prediction scheme control unit 109 and the selection unit 110, and outputs the intra prediction coding information TPE to the prediction scheme control unit 109.
- a reset image block: an image block in which all pixel values are 0
- when the processing of the intra prediction unit 121 is completed, the process proceeds to step S206.
- Step S206 The prediction scheme control unit 109 receives the inter prediction image block signal and the inter prediction encoding information IPE from the inter prediction unit 120, and receives the intra prediction image block signal and the intra prediction encoding information TPE from the intra prediction unit 121.
- the prediction scheme control unit 109 selects a prediction mode with good coding efficiency based on the Lagrangian cost.
- the prediction method control unit 109 outputs information indicating the selected prediction mode to the selection unit 110.
- the prediction scheme control unit 109 outputs prediction encoding information corresponding to the selected prediction mode to the entropy encoding unit 105.
- the selection unit 110 selects the inter prediction image block signal input from the inter prediction unit 120 or the intra prediction image block signal input from the intra prediction unit 121 according to the prediction mode information input from the prediction method control unit 109, and outputs it to the subtraction unit 102 and the addition unit 108. Thereafter, the process proceeds to step S207.
- Step S207 The subtraction unit 102 subtracts the predicted image block signal output from the selection unit 110 from the image block signal output from the image input unit 101 to generate a difference image block signal.
- the subtraction unit 102 outputs the difference image block signal to the orthogonal transformation unit 103. Thereafter, the process proceeds to step S208.
- Step S208 The orthogonal transform unit 103 acquires the difference image block signal from the subtraction unit 102, and performs the orthogonal transform described above.
- the orthogonal transform unit 103 outputs the signal after the orthogonal transform to the quantization unit 104.
- the quantization unit 104 performs the above quantization process on the signal input from the orthogonal transform unit 103 to generate a difference image code.
- the quantization unit 104 outputs the difference image code to the entropy coding unit 105 and the inverse quantization unit 106.
- the entropy encoding unit 105 packs the difference image code input from the quantization unit 104 and the prediction encoding information input from the prediction scheme control unit 109, and performs variable length coding (entropy coding) to generate the encoded data E1, in which the amount of information is further compressed.
- the entropy encoding unit 105 outputs the encoded data E1 to the outside of the image encoding device 100 (for example, the image decoding device 800). Thereafter, the process proceeds to step S209.
- Step S209 The inverse quantization unit 106 acquires the difference image code ED from the quantization unit 104, and performs the inverse process of the quantization performed by the quantization unit 104.
- the inverse quantization unit 106 outputs the signal generated by this processing to the inverse orthogonal transform unit 107.
- the inverse orthogonal transform unit 107 acquires the inversely quantized signal from the inverse quantization unit 106, performs the inverse of the orthogonal transform process performed by the orthogonal transform unit 103, and obtains a difference image (decoded difference image block signal).
- the inverse orthogonal transform unit 107 outputs the decoded difference image block signal to the addition unit 108. Thereafter, the process proceeds to step S210.
- Step S210 The addition unit 108 adds the predicted image block signal output from the selection unit 110 to the decoded difference image block signal output from the inverse orthogonal transform unit 107 to decode the input image (reference image block signal).
- the addition unit 108 outputs the reference image block signal to the inter prediction unit 120 and the intra prediction unit 121. Thereafter, the process proceeds to step S211.
- Step S211 When the image encoding device 100 has not completed the processes of steps S203 to S210 for all the blocks in the frame, the block to be processed is changed and the process returns to step S203. When all the blocks have been processed, the process ends.
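The lossy core of steps S207 to S210 can be illustrated with a minimal round trip. This sketch omits the orthogonal transform for brevity and treats quantization of the residual as the only lossy step; the quantization step QP and the block representation are illustrative assumptions.

```python
# Hedged sketch of the encode/reconstruct loop shared by encoder and decoder:
# subtract the prediction (S207), quantize the residual (simplified S208),
# dequantize (S209), and add the prediction back (S210) to obtain the
# reference block both sides agree on.

QP = 4  # hypothetical quantization step

def encode_block(block, prediction, qp=QP):
    """S207 + simplified S208: quantized residual of block minus prediction."""
    return [[round((b - p) / qp) for b, p in zip(rb, rp)]
            for rb, rp in zip(block, prediction)]

def reconstruct_block(code, prediction, qp=QP):
    """S209 + S210: dequantize the residual and add the prediction back."""
    return [[c * qp + p for c, p in zip(rc, rp)]
            for rc, rp in zip(code, prediction)]
```

Because the reconstruction uses the quantized residual rather than the original, the encoder's reference blocks match what the decoder will later compute, which is why the addition unit 108 feeds the prediction units with reconstructed (not original) blocks.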
- FIG. 11 is a flowchart for explaining the processing of the inter prediction unit 120.
- Step S301 The deblocking filter unit 111 acquires the reference image block signal from the addition unit 108, which is outside the inter prediction unit 120, and performs the FIR filter processing described above.
- the deblocking filter unit 111 outputs the corrected block signal after the filtering process to the frame memory 112. Thereafter, the process proceeds to step S302.
- Step S302 The frame memory 112 acquires the correction block signal of the deblocking filter unit 111, and holds the correction block signal as a part of the image together with information that can identify the frame number. Thereafter, the process proceeds to step S303.
- Step S303 Upon receiving the image block signal from the image input unit 101, the motion vector detection unit 114 searches the images stored in the frame memory 112 for a block similar to the image block output by the image input unit 101 (block matching), and generates vector information (a motion vector) representing the found block. The motion vector detection unit 114 outputs information necessary for encoding, including the detected vector information, to the motion compensation unit 113. Thereafter, the process proceeds to step S304.
- Step S304 The motion compensation unit 113 acquires the information necessary for encoding from the motion vector detection unit 114, and extracts the corresponding prediction block from the frame memory 112.
- the motion compensation unit 113 outputs the prediction image block signal extracted from the frame memory to the prediction method control unit 109 and the selection unit 110 as an inter prediction image block signal.
- the motion compensation unit 113 outputs information necessary for prediction acquired from the motion vector detection unit 114 to the prediction method control unit 109. Thereafter, the inter prediction is terminated.
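The block matching of step S303 can be sketched as a naive full search. This is an illustrative reconstruction: the search range, the SAD criterion, and the frame representation are assumptions, and real encoders use much faster search strategies.

```python
# Hedged sketch of full-search block matching: within a small search range in
# the reference frame, find the displacement (dy, dx) minimizing the SAD against
# the current n x n block, and return it as the motion vector.

def block_match(current, reference, block_top, block_left, n, search=2):
    def sad_at(dy, dx):
        return sum(
            abs(current[block_top + i][block_left + j]
                - reference[block_top + dy + i][block_left + dx + j])
            for i in range(n) for j in range(n)
        )

    candidates = [
        (dy, dx)
        for dy in range(-search, search + 1)
        for dx in range(-search, search + 1)
        if 0 <= block_top + dy <= len(reference) - n
        and 0 <= block_left + dx <= len(reference[0]) - n
    ]
    return min(candidates, key=lambda v: sad_at(*v))  # motion vector (dy, dx)
```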
- FIG. 12 is a flowchart for explaining processing of the intra prediction unit 121.
- Step S401 The depth map decoding unit 117 acquires the depth map encoded data E2 from the depth map encoding unit 116, and decodes it using, for example, variable length decoding to restore the depth map, whose amount of information is larger.
- the depth map decoding unit 117 outputs the decoded depth map (depth block decoded signal) to the depth information use intra prediction unit 115. Thereafter, the process proceeds to step S402.
- Step S402 The first prediction mode execution unit 200-1 to the n-th prediction mode execution unit 200-n perform processing in each prediction mode (prediction image block generation method) from the reference image block signal acquired from the addition unit 108. In response, first to n-th predicted image block signals are generated.
- the first prediction mode execution unit 200-1 to the n-th prediction mode execution unit 200-n output the generated first to n-th prediction image block signals to the prediction mode selection unit 202.
- the depth use prediction mode execution unit 201 generates a prediction image block signal using depth from the reference image block signal acquired from the addition unit 108 and the depth block decoded signal acquired from the depth map decoding unit 117, and outputs it to the prediction mode selection unit 202. Thereafter, the process proceeds to step S403. Note that the predicted image generation processing performed by the depth use prediction mode execution unit 201 is as described above.
- Step S403 The prediction mode selection unit 202 receives the prediction image block signals and the information necessary for prediction from the first prediction mode execution unit 200-1 to the n-th prediction mode execution unit 200-n and from the depth use prediction mode execution unit 201.
- the prediction mode selection unit 202 selects, by the method described above, a prediction mode with high coding efficiency from the input prediction image block signals (including the prediction image block signal input from the depth use prediction mode execution unit 201), and generates the corresponding prediction mode information.
- the prediction mode selection unit 202 outputs the selected prediction image block signal (hereinafter referred to as the intra prediction image block signal) to the selection unit 110 and the prediction scheme control unit 109, and outputs the prediction mode information (hereinafter referred to as the intra prediction encoding information TPE) to the prediction scheme control unit 109. Then, the intra prediction is completed.
- FIG. 13 is a schematic block diagram showing the configuration of the image decoding device 800 according to this embodiment.
- the image decoding apparatus 800 includes an encoded data input unit 813, an entropy decoding unit 801, an inverse quantization unit 802, an inverse orthogonal transform unit 803, an addition unit 804, a prediction scheme control unit 805, a selection unit 806, and a deblocking filter unit 807.
- the deblocking filter unit 807, the frame memory 808, and the motion compensation unit 809 constitute an inter processing unit 820.
- the depth information utilization intra prediction unit 810 and the depth map decoding unit 811 constitute an intra processing unit 821.
- the encoded data input unit 813 divides the encoded data E1 acquired from the outside (for example, the image encoding device 100) into processing block units and outputs the result to the entropy decoding unit 801.
- the encoded data input unit 813 sequentially changes the block to be processed and repeats this output until all the blocks in the frame, and then all of the acquired encoded data, have been processed.
- the entropy decoding unit 801 performs, on the encoded data divided into processing units acquired from the encoded data input unit 813, the process inverse to the encoding method performed by the entropy encoding unit 105 (entropy decoding, for example variable length decoding of variable length encoded data) to generate the difference image block code and the prediction encoding information PE.
- the entropy decoding unit 801 outputs the difference image block code to the inverse quantization unit 802 and the prediction coding information PE to the prediction scheme control unit 805.
- the inverse quantization unit 802 performs inverse quantization on the difference image block code input from the entropy decoding unit 801 to generate a decoded frequency domain signal, and outputs the decoded frequency domain signal to the inverse orthogonal transform unit 803.
- the inverse orthogonal transform unit 803 generates a decoded difference image block signal that is a spatial domain signal by, for example, inverse DCT transforming the decoded frequency domain signal output from the inverse quantization unit 802.
- as long as the inverse orthogonal transform unit 803 can generate a spatial domain signal based on the decoded frequency domain signal, it is not limited to the inverse DCT transform; other methods (for example, an IFFT (Inverse Fast Fourier Transform)) may be used.
- the inverse orthogonal transform unit 803 outputs the generated decoded difference image block signal to the addition unit 804.
- the prediction method control unit 805 extracts the prediction method PM in units of macroblocks adopted by the image coding device 100 from the prediction coding information PE input from the entropy decoding unit 801.
- the prediction method PM is inter prediction or intra prediction.
- the prediction method control unit 805 outputs information regarding the extracted prediction method PM to the selection unit 806.
- the prediction scheme control unit 805 takes out, from the prediction encoding information PE output from the entropy decoding unit 801, the prediction encoding information corresponding to the extracted prediction scheme PM, and outputs that prediction encoding information to the processing unit corresponding to the extracted prediction scheme PM.
- the prediction method control unit 805 outputs the inter prediction coding information IPE to the inter processing unit 820 when the prediction method PM is inter prediction.
- the prediction method control unit 805 outputs the intra prediction encoding information TPE to the intra processing unit 821 when the prediction method PM is intra prediction.
- when the prediction scheme PM is inter prediction, the selection unit 806 selects the inter prediction image block signal input from the inter processing unit 820; when the prediction scheme PM is intra prediction, the intra prediction image block signal input from the intra processing unit 821 is selected.
- the selection unit 806 outputs the selected predicted image block signal to the addition unit 804.
- the addition unit 804 adds the predicted image block signal output from the selection unit 806 to the decoded difference image block signal output from the inverse orthogonal transform unit 803 to generate a decoded image block signal DB.
- the addition unit 804 outputs the decoded image block signal DB to the inter processing unit 820, the intra processing unit 821, and the image output unit 812.
- the inter processing unit 820 includes a deblocking filter unit 807, a frame memory 808, and a motion compensation unit 809.
- the deblocking filter unit 807 performs the same processing as the FIR filter performed by the deblocking filter unit 111 on the decoded image block signal DB input from the addition unit 804, and the processing result (correction block signal) is framed. Output to the memory 808.
- the frame memory 808 acquires the correction block signal from the deblocking filter unit 807, and holds the correction block signal as a part of the image together with information that can identify the frame number.
- the motion compensation unit 809 acquires inter prediction coding information IPE from the prediction method control unit 805, and extracts reference image information and prediction vector information (motion vector) from the inter prediction coding information IPE.
- the motion compensation unit 809 extracts a target image block signal (predicted image block signal) from the images stored in the frame memory 808 based on the extracted reference image information and predicted vector information.
- when there is one prediction vector (motion vector), the motion compensation unit 809 extracts the single corresponding image block from the frame memory 808 and outputs it to the selection unit 806.
- when there are two prediction vectors (motion vectors), the two corresponding image blocks are taken out from the frame memory 808, averaged, and the result is output to the selection unit 806.
- This signal output from the inter processing unit 820 (motion compensation unit 809) to the selection unit 806 is an inter prediction image block signal.
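The one-vector and two-vector cases handled by the motion compensation unit 809 can be sketched as follows. This is integer-pel only; real motion compensation also interpolates fractional-pel positions, and the round-to-nearest averaging and helper names are assumptions.

```python
import numpy as np

def motion_compensate(ref_frames, ref_idx, mv, top_left, block=(16, 16)):
    """Fetch the block addressed by an integer motion vector mv = (dy, dx)
    from one reference frame (hypothetical helper; real codecs also
    interpolate fractional-pel positions)."""
    y, x = top_left
    dy, dx = mv
    h, w = block
    return ref_frames[ref_idx][y + dy:y + dy + h, x + dx:x + dx + w]

def inter_prediction(ref_frames, refs, mvs, top_left):
    """One vector: return its block directly; two vectors: take out both
    blocks and average them with round-to-nearest."""
    blocks = [motion_compensate(ref_frames, r, mv, top_left)
              for r, mv in zip(refs, mvs)]
    if len(blocks) == 1:
        return blocks[0]
    return ((blocks[0].astype(np.uint16) + blocks[1] + 1) >> 1).astype(np.uint8)

refs = [np.full((32, 32), v, dtype=np.uint8) for v in (10, 20)]
uni = inter_prediction(refs, [0], [(0, 0)], (8, 8))            # one vector
bi = inter_prediction(refs, [0, 1], [(0, 0), (0, 0)], (8, 8))  # two, averaged
```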
- the intra processing unit 821 includes a depth information use intra prediction unit 810 and a depth map decoding unit 811.
- the depth map encoded data input unit 814 divides the depth map encoded data E2 input from the outside (for example, the image encoding device 100) into processing blocks, and outputs them to the intra processing unit 821.
- the depth map decoding unit 811 performs, on the block-unit depth map encoded data output from the depth map encoded data input unit 814, entropy decoding (for example, variable length decoding) that is the inverse of the encoding (for example, variable length encoding) performed by the depth map encoding unit 116, and generates a depth block decoded signal.
- the depth map decoding unit 811 outputs the depth block decoded signal to the depth information use intra prediction unit 810.
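The text identifies the entropy decoding only as, for example, variable length decoding. As one hedged illustration, an unsigned Exp-Golomb decoder (a common variable-length code, not necessarily the one used by the depth map encoding unit 116) might look like:

```python
def decode_exp_golomb(bits):
    """Decode unsigned Exp-Golomb codewords from a string of '0'/'1'
    characters. Shown purely for illustration; the document does not
    specify which variable-length code is actually used."""
    values, i = [], 0
    while i < len(bits):
        zeros = 0
        while i < len(bits) and bits[i] == '0':   # count leading zeros
            zeros += 1
            i += 1
        if i >= len(bits):
            break                                  # trailing zeros only
        info = bits[i + 1:i + 1 + zeros]           # info bits after the '1'
        if len(info) < zeros:
            break                                  # incomplete codeword
        values.append((1 << zeros) - 1 + (int(info, 2) if info else 0))
        i += 1 + zeros
    return values

codes = decode_exp_golomb("010011")   # decodes two codewords
```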
- FIG. 14 is a schematic block diagram illustrating a configuration of the depth information use intra prediction unit 810.
- the depth information use intra prediction unit 810 includes a first prediction mode execution unit 900-1, a second prediction mode execution unit 900-2, an nth prediction mode execution unit 900-n, a depth use prediction mode execution unit 901, and a prediction mode selection unit 902.
- from the intra prediction encoding information TPE output by the prediction method control unit 805, the prediction mode selection unit 902 extracts the index (prediction mode) indicating the prediction mode created by the prediction mode selection unit 202 of the image encoding device 100, together with any information necessary for prediction.
- the information necessary for prediction is extracted when the prediction mode indicated by the index is one for which such information exists, specifically when a prediction image is generated in units of sub-blocks (the first prediction mode and the second prediction mode) and in the depth use prediction mode.
- when the prediction mode selection unit 902 extracts information necessary for prediction, it outputs the information to the corresponding prediction mode execution unit among 900-1 to 900-n and 901.
- the prediction mode selection unit 902 selects, from the prediction image block signals generated by the prediction mode execution units, the signal of the prediction mode indicated by the index (prediction mode), and outputs it to the selection unit 806 as the intra prediction image block signal.
- the first prediction mode execution unit 900-1, the second prediction mode execution unit 900-2, and the nth prediction mode execution unit 900-n perform the same processing as the first prediction mode execution unit 200-1, the second prediction mode execution unit 200-2, and the nth prediction mode execution unit 200-n provided in the depth information use intra prediction unit 115 of the image encoding device 100.
- for the prediction modes that operate in units of sub-blocks, the prediction mode of each sub-block (the information necessary for prediction) is input from the prediction mode selection unit 902, and the corresponding prediction mode is executed in sub-block units.
- the content of each prediction mode is as shown in the figure referred to above.
- the depth use prediction mode execution unit 901 acquires information necessary for prediction (specifically, information indicating the direction of prediction) from the prediction mode selection unit 902, and acquires a depth block decoded signal from the depth map decoding unit 811.
- the depth use prediction mode execution unit 901 uses the acquired information and signal to generate a predicted image block signal as performed by the depth use prediction mode execution unit 201 of the image encoding device 100.
- the information necessary for prediction is information regarding the direction of prediction selected by the depth use prediction mode execution unit 201.
- the configuration of the depth usage prediction mode execution unit 901 is basically the same as the configuration of the depth usage prediction mode execution unit 201.
- the difference is that, whereas the boundary control predicted image generation unit 300 of the image encoding device 100 selects between the horizontal-direction prediction block and the vertical-direction prediction block as its final process based on the correlation with the input image, the boundary control predicted image generation unit 300 of the depth use prediction mode execution unit 901 performs this selection using the information necessary for prediction.
- the depth use prediction mode execution unit 901 generates the same predicted image block signal as the depth use prediction mode execution unit 201 at the time of encoding.
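The depth-use idea, suppressing prediction from reference pixels that lie across a subject boundary, can be sketched roughly as below. The nearest-depth selection rule, the `threshold` parameter, and the mean fallback are all illustrative assumptions, not the patented procedure itself.

```python
import numpy as np

def depth_guided_intra_pred(left_refs, left_depths, block_depths, threshold=8):
    """Rough sketch: predict each target pixel from the left reference
    column, choosing the reference pixel whose depth is closest to the
    target pixel's depth. If even the closest depth differs by more
    than `threshold` (assumed), a subject boundary is presumed and the
    reference is suppressed in favour of the mean of the references
    (an illustrative fallback)."""
    h, w = block_depths.shape
    pred = np.empty((h, w), dtype=left_refs.dtype)
    fallback = int(round(float(left_refs.mean())))
    for y in range(h):
        for x in range(w):
            diffs = np.abs(left_depths.astype(int) - int(block_depths[y, x]))
            k = int(np.argmin(diffs))
            pred[y, x] = left_refs[k] if diffs[k] <= threshold else fallback
    return pred

left_refs = np.array([100, 200], dtype=np.uint8)   # reference pixel values
left_depths = np.array([10, 50], dtype=np.uint8)   # their depths
same_side = depth_guided_intra_pred(
    left_refs, left_depths, np.full((2, 2), 50, dtype=np.uint8))
across = depth_guided_intra_pred(
    left_refs, left_depths, np.full((1, 1), 100, dtype=np.uint8))
```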
- FIG. 15 is a flowchart showing an image decoding process performed by the image decoding apparatus 800 according to this embodiment.
- (Step S601) The image decoding apparatus 800 acquires the encoded data E, including the encoded data E1 of the image and the encoded data E2 of the depth map, from the image encoding apparatus 100 via the communication network 500. Thereafter, the process proceeds to step S602.
- (Step S602) The encoded data input unit 813 divides the acquired encoded data E1 of the image into processing blocks of a predetermined size (for example, 16 pixels vertical × 16 pixels horizontal) and outputs them to the entropy decoding unit 801.
- the depth map encoded data input unit 814 receives, from outside the image decoding apparatus 800, depth map encoded data synchronized with the encoded data input to the encoded data input unit 813, divides it into the same processing units as the division performed by the encoded data input unit 813, and outputs the result to the intra processing unit 821.
- the image decoding apparatus 800 repeats the processing in steps S603 to S608 for each image block in the frame.
- (Step S603) The entropy decoding unit 801 performs entropy decoding on the encoded data output from the encoded data input unit 813, and generates a difference image block code and prediction coding information.
- the entropy decoding unit 801 outputs the difference image block code to the inverse quantization unit 802, and outputs the prediction coding information to the prediction scheme control unit 805.
- the prediction scheme control unit 805 acquires the prediction coding information from the entropy decoding unit 801, and extracts information regarding the prediction scheme PM and the prediction coding information corresponding to the prediction scheme PM.
- steps S604 and S605 may be processed in parallel, or only one of them may be performed in accordance with the prediction method PM.
- Step S604 The inter processing unit 820 acquires the inter prediction coding information IPE output from the prediction scheme control unit 805 and the decoded image block signal DB output from the adding unit 804, and performs inter processing.
- the inter processing unit 820 outputs the generated inter predicted image block signal to the selection unit 806. The contents of the inter processing will be described later.
- in the first iteration, when the processing of the adding unit 804 has not yet been completed, a reset image block signal (an image block signal in which all pixel values are 0) is input. When the inter processing is completed, the process proceeds to step S606.
- Step S605 The intra processing unit 821 acquires the intra prediction encoded information TPE output from the prediction scheme control unit 805 and the decoded image block signal DB output from the adding unit 804, and performs intra prediction.
- the intra processing unit 821 outputs the generated intra predicted image block signal to the selection unit 806.
- the intra prediction process will be described later. In the first iteration, when the process of the adding unit 804 has not been completed, a reset image block signal (an image block signal in which all pixel values are 0) is input. When the process of the intra processing unit 821 is completed, the process proceeds to step S606.
- (Step S606) The selection unit 806 acquires the information on the prediction method PM output from the prediction method control unit 805, selects either the inter prediction image block signal output from the inter processing unit 820 or the intra prediction image block signal output from the intra processing unit 821, and outputs the selected signal to the adding unit 804. Thereafter, the process proceeds to step S607.
- Step S607 The inverse quantization unit 802 performs the inverse process of the quantization performed by the quantization unit 104 of the image coding device 100 on the difference image block code input from the entropy decoding unit 801.
- the inverse quantization unit 802 outputs the generated decoded frequency domain signal to the inverse orthogonal transform unit 803.
- the inverse orthogonal transform unit 803 obtains the inversely quantized decoded frequency domain signal from the inverse quantization unit 802, and performs the inverse orthogonal transform process of the orthogonal transform process performed by the orthogonal transform unit 103 of the image encoding device 100. Then, the difference image (decoded difference image block signal) is decoded.
- the inverse orthogonal transform unit 803 outputs the decoded decoded difference image block signal to the adding unit 804.
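The inverse quantization and inverse orthogonal transform steps can be illustrated with a naive floating-point sketch. Real decoders use fast integer approximations, and the scalar `qstep` here is a stand-in for the actual quantization process, which the text does not detail.

```python
import numpy as np

def idct2(coeffs):
    """2-D inverse DCT-II built directly from the basis matrix
    (illustrative floating-point version of an inverse orthogonal
    transform; real decoders use fast integer approximations)."""
    n = coeffs.shape[0]
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    scale = np.full(n, np.sqrt(2.0 / n))
    scale[0] = np.sqrt(1.0 / n)
    c = basis * scale[:, None]        # orthonormal DCT matrix C
    return c.T @ coeffs @ c           # X = C^T F C

def decode_residual(levels, qstep):
    """Inverse quantization (scaling by the step size, a scalar stand-in
    for the actual quantization) followed by the inverse orthogonal
    transform, mirroring the processing of units 802 and 803."""
    return idct2(levels * qstep)

levels = np.zeros((4, 4))
levels[0, 0] = 8.0                            # DC-only coefficient block
residual = decode_residual(levels, qstep=1)   # a constant 4x4 block
```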
- the adding unit 804 adds the predicted image block signal output from the selection unit 806 to the decoded difference image block signal output from the inverse orthogonal transform unit 803 to generate a decoded image block signal DB.
- the adding unit 804 outputs the decoded image block signal DB to the image output unit 812, the inter processing unit 820, and the intra processing unit 821. Thereafter, the process proceeds to step S608.
- Step S608 The image output unit 812 generates the output image signal R ′ by arranging the decoded image block signal DB output by the adding unit 804 at a corresponding position in the image. If the processes in steps S603 to S607 have not been completed for all the blocks in the frame, the block to be processed is changed and the process returns to step S602.
- the image output unit 812 outputs the generated output image signal R′ to the outside of the image decoding device 800 (display device 600), for example, every 5 frames (I picture (I0), B picture (B3), B picture (B2), B picture (B4), and P picture (P1)).
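The 5-frame output sequence (I0, B3, B2, B4, P1) implies that the image output unit buffers pictures and reorders them from decoding order to display order. A sketch follows; the display indices are hypothetical values chosen to reproduce that sequence, not values given in the text.

```python
def to_display_order(decoded):
    """Sort decoded pictures by display index. The display index values
    are hypothetical stand-ins for the picture order information that
    would be carried in the bitstream."""
    return [name for _, name in sorted((poc, name) for name, poc in decoded)]

# decoding order: the I picture, then the P picture it anchors, then the
# B pictures in between; display indices are assumptions
gop = [("I0", 0), ("P1", 4), ("B2", 2), ("B3", 1), ("B4", 3)]
display = to_display_order(gop)
```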
- FIG. 16 is a flowchart for explaining the inter processing in step S604.
- (Step S701) The deblocking filter unit 807 acquires the decoded image block signal DB from the adding unit 804, which is external to the inter processing unit 820, and performs the same FIR filter processing as performed at the time of encoding.
- the deblocking filter unit 807 outputs the resulting correction block signal to the frame memory 808. Thereafter, the process proceeds to step S702.
- (Step S702) The frame memory 808 holds the correction block signal output from the deblocking filter unit 807 as part of the image, together with information that can identify the frame number. Thereafter, the process proceeds to step S703.
- (Step S703) The motion compensation unit 809 acquires the inter prediction coding information IPE from the prediction scheme control unit 805, and extracts the corresponding prediction block signal from the frame memory 808.
- the motion compensation unit 809 outputs the prediction image block signal extracted from the frame memory to the selection unit 806 as an inter prediction image block signal. Thereafter, the inter processing is terminated.
- FIG. 17 is a flowchart illustrating the intra processing in step S605.
- (Step S801) The depth map decoding unit 811 acquires the depth map encoded data divided into processing units from the depth map encoded data input unit 814, and decodes the depth map using, for example, variable length decoding.
- the depth map decoding unit 811 outputs the decoded depth map (depth block decoded signal) to the depth information use intra prediction unit 810. Thereafter, the process proceeds to step S802.
- Step S802 The first prediction mode execution unit 900-1 to the n-th prediction mode execution unit 900-n generate a prediction image block signal using the decoded image block signal DB output from the addition unit 804.
- the prediction mode execution units that perform processing in units of sub-blocks (specifically, the first prediction mode execution unit 900-1 and the second prediction mode execution unit 900-2) acquire from the prediction mode selection unit 902 the information indicating the prediction mode of each sub-block employed by the image coding apparatus 100, and generate a prediction image block signal.
- First prediction mode execution unit 900-1 to n-th prediction mode execution unit 900-n output the generated first to n-th prediction image block signals to prediction mode selection unit 902.
- the depth use prediction mode execution unit 901 uses the decoded image block signal DB output from the addition unit 804, the depth block decoded signal output from the depth map decoding unit 811, and the information necessary for prediction output from the prediction mode selection unit 902 (specifically, information indicating the direction of prediction) to perform the same processing as the depth use prediction mode execution unit 201 in FIG. 3, generating a depth use prediction image.
- the depth use prediction mode execution unit 901 outputs the generated prediction image signal to the prediction mode selection unit 902. Thereafter, the process proceeds to step S803.
- (Step S803) The prediction mode selection unit 902 extracts the information indicating the prediction mode employed by the image encoding device 100 from the intra prediction encoding information TPE input from the prediction method control unit 805, and outputs the prediction image block signal of the corresponding prediction mode to the selection unit 806 as the intra prediction image block signal.
- when the prediction mode is one performed in sub-block units, the prediction mode selection unit 902 further extracts the prediction mode of each sub-block and outputs the information to the corresponding prediction mode execution unit. The intra processing then ends.
- the prediction mode selection unit 902 extracts information regarding the prediction direction and outputs the information to the depth use prediction mode execution unit 901.
- the image encoding device 100 described above includes the depth input unit 118 and the depth map encoding unit 116, and the image decoding device 800 includes the depth map encoded data input unit 814 and the depth map decoding unit 811; however, the configuration is not limited to this.
- information regarding the depth map corresponding to the input image may be made available in the image decoding apparatus 800 by a separate means.
- for example, the image encoding device 100 and the image decoding device 800 may be configured to receive the depth map via a communication line from a server device, installed externally or offline, that stores depth maps in correspondence with video information. In that case, a video title indicating the video information can be searched for through the communication line, and when the video information is selected, the corresponding depth map can be received.
- the image encoding device 100 may also include a depth map generation unit that acquires an image of a viewpoint different from the input image and generates a depth map whose pixel values represent the parallax between the pixels of the input image and the pixels of the different-viewpoint image. In that case, the depth map generation unit outputs the generated depth map to the depth input unit 118.
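A depth map generation unit of this kind could, for instance, derive per-block disparities by block matching between the two viewpoint images. The SAD criterion, patch size, search range, and all parameter names below are assumptions for illustration, not the device's specified method.

```python
import numpy as np

def disparity_map(left, right, max_disp=16, patch=4):
    """Minimal block-matching sketch: for each patch of the left view,
    find the horizontal shift into the right view that minimizes the
    sum of absolute differences (SAD). Patch size, search range, and
    the SAD criterion are illustrative assumptions."""
    h, w = left.shape
    disp = np.zeros((h // patch, w // patch), dtype=np.int32)
    for by in range(h // patch):
        for bx in range(w // patch):
            y, x = by * patch, bx * patch
            block = left[y:y + patch, x:x + patch].astype(np.int32)
            best_sad, best_d = None, 0
            for d in range(min(max_disp, x) + 1):
                cand = right[y:y + patch, x - d:x - d + patch].astype(np.int32)
                sad = int(np.abs(block - cand).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            disp[by, bx] = best_d
    return disp

yy, xx = np.mgrid[0:8, 0:8]
right = ((3 * xx + 5 * yy) % 17).astype(np.uint8)   # textured test view
left = np.zeros_like(right)
left[:, 2:] = right[:, :-2]                          # shifted by 2 pixels
dmap = disparity_map(left, right, max_disp=4, patch=4)
```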
- the image decoding apparatus 800 may generate a second output image having a viewpoint different from the output image, based on the output image and the depth map of the same frame as the output image, and output the second output image to the outside.
- the image encoding apparatus 100 inputs the input image signal every 5 frames.
- the image encoding apparatus 100 is not limited to this and may input every arbitrary number of frames.
- the image decoding apparatus 800 outputs the output image signal every 5 frames.
- the image decoding apparatus 800 is not limited to this and may output every arbitrary number of frames.
- the image to be encoded is a moving image, but it may be a still image.
- when the image to be encoded is a multi-viewpoint image, the depth-use prediction mode may be used only for the viewpoint images that have a corresponding depth map, and the conventional prediction modes may be used for the viewpoint images without one.
- as described above, the present embodiment has two prediction modes that, when performing intra prediction, suppress continuous pixel prediction across boundaries in the depth map indicating the distance to the subject. Since only two prediction modes are added compared with the prior art, the accuracy of the predicted image can be improved while suppressing the increase in code amount caused by increasing the number of prediction modes. Because the residual between the predicted image and the input image is thereby reduced, highly efficient image encoding and decoding can be realized. Furthermore, if a depth-use prediction mode is used in place of a conventional prediction mode rather than in addition to it, the number of prediction modes does not increase, so the increase in code amount can be suppressed further.
- a part of the image coding apparatus 100 and the image decoding apparatus 800 in the above-described embodiment, for example, the subtraction unit 102, the orthogonal transformation unit 103, the quantization unit 104, the entropy coding unit 105, the inverse quantization unit 106, the inverse orthogonal transformation unit 107, the addition unit 108, the prediction scheme control unit 109, the selection unit 110, the deblocking filter unit 111, the motion compensation unit 113, the motion vector detection unit 114, the depth information use intra prediction unit 115, the depth map encoding unit 116, the depth map decoding unit 117, the entropy decoding unit 801, the inverse quantization unit 802, the inverse orthogonal transform unit 803, the addition unit 804, the prediction scheme control unit 805, the selection unit 806, the deblocking filter unit 807, the motion compensation unit 809, the depth information use intra prediction unit 810, and the depth map decoding unit 811, may be realized by a computer.
- the program for realizing the control function may be recorded on a computer-readable recording medium, and the program recorded on the recording medium may be read into a computer system and executed.
- the “computer system” here is a computer system built in the image encoding device 100 or the image decoding device 800, and includes an OS and hardware such as peripheral devices.
- the “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, and a storage device such as a hard disk incorporated in a computer system.
- the “computer-readable recording medium” may also include a medium that dynamically holds a program for a short time, such as a communication line used when transmitting a program via a network such as the Internet or a communication line such as a telephone line, and a medium that holds a program for a certain period of time, such as the volatile memory inside a computer system serving as a server or a client.
- the program may be one that realizes a part of the functions described above, or one that realizes those functions in combination with a program already recorded in the computer system.
- part or all of the image encoding device 100 and the image decoding device 800 in the above-described embodiment may be realized as an integrated circuit such as an LSI (Large Scale Integration).
- Each functional block of the image encoding device 100 and the image decoding device 800 may be individually made into a processor, or a part or all of them may be integrated into a processor.
- the method of circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor. Further, in the case where an integrated circuit technology that replaces LSI appears due to progress in semiconductor technology, an integrated circuit based on the technology may be used.
- One aspect of the present invention is an image encoding device that, when encoding an input image, performs intra-screen prediction that predicts the pixel value of a processing target pixel using the pixel values of peripheral pixels around the processing target pixel, the image encoding device including an intra-screen prediction unit that, when performing the intra-screen prediction, suppresses use of peripheral pixels that have a boundary of a subject represented by the input image between the processing target pixel and those peripheral pixels.
- Another aspect of the present invention is the image encoding device described above, wherein the intra-screen prediction unit includes a boundary detection unit that detects a boundary of the subject using information indicating the distance to the subject of the input image.
- Another aspect of the present invention is the above-described image encoding device, wherein the intra-screen prediction unit includes a predicted image generation unit that, when there is no boundary of the subject between the processing target pixel and a pixel adjacent to it in a predetermined direction among the peripheral pixels, predicts the pixel value of the processing target pixel using the pixel adjacent in the predetermined direction, and that, when the boundary of the subject is between the processing target pixel and the pixel adjacent in the predetermined direction, suppresses predicting the pixel value of the processing target pixel using the pixel adjacent in the predetermined direction.
- Another aspect of the present invention is the above-described image encoding device, wherein the intra-screen prediction unit includes a predicted image generation unit that determines the peripheral pixels used when predicting the pixel value of the processing target pixel based at least on the difference between the information indicating the distance to the subject represented by the peripheral pixels and the information indicating the distance to the subject represented by the processing target pixel.
- Another aspect of the present invention is the above-described image encoding device, wherein the intra-screen prediction unit includes a predicted image generation unit that determines the peripheral pixels used when predicting the pixel value of the processing target pixel based at least on the distance between the peripheral pixels and the processing target pixel.
- Another aspect of the present invention is an image decoding device that, when decoding an encoded image, performs intra-screen prediction that predicts the pixel value of a processing target pixel using the pixel values of peripheral pixels around the processing target pixel, the image decoding device including an intra-screen prediction unit that, when performing the intra-screen prediction, suppresses use of peripheral pixels that have a boundary of a subject represented by the encoded image between the processing target pixel and those peripheral pixels.
- Another aspect of the present invention is the above-described image decoding device, wherein the intra-screen prediction unit includes a boundary detection unit that detects a boundary of the subject using information indicating the distance to the subject of the encoded image.
- Another aspect of the present invention is the above-described image decoding device, wherein the intra-screen prediction unit includes a predicted image generation unit that, when there is no boundary of the subject between the processing target pixel and a pixel adjacent to it in a predetermined direction among the peripheral pixels, predicts the pixel value of the processing target pixel using the pixel adjacent in the predetermined direction, and that suppresses predicting the pixel value of the processing target pixel using the pixel adjacent in the predetermined direction when the boundary of the subject is between them.
- Another aspect of the present invention is the above-described image decoding device, wherein the intra-screen prediction unit includes a predicted image generation unit that determines the peripheral pixels used when predicting the pixel value of the processing target pixel based at least on the difference between the information indicating the distance to the subject represented by the peripheral pixels and the information indicating the distance to the subject represented by the processing target pixel.
- Another aspect of the present invention is the above-described image decoding device, wherein the intra-screen prediction unit includes a predicted image generation unit that determines the peripheral pixels used when predicting the pixel value of the processing target pixel based at least on the distance between the peripheral pixels and the processing target pixel.
- Another aspect of the present invention is an image encoding method that, when encoding an input image, performs intra-screen prediction that predicts the pixel value of a processing target pixel using the pixel values of peripheral pixels around the processing target pixel, the method having a step of, when performing the intra-screen prediction, suppressing use of peripheral pixels that have a boundary of a subject represented by the input image between the processing target pixel and those peripheral pixels.
- Another aspect of the present invention is an image decoding method that, when decoding an encoded image, performs intra-screen prediction that predicts the pixel value of a processing target pixel using the pixel values of peripheral pixels around the processing target pixel, the method having a step of, when performing the intra-screen prediction, suppressing use of peripheral pixels that have a boundary of a subject represented by the encoded image between the processing target pixel and those peripheral pixels.
- Another aspect of the present invention is a program for causing a computer of an image encoding device that, when encoding an input image, performs intra-screen prediction predicting the pixel value of a processing target pixel using the pixel values of peripheral pixels around the processing target pixel, to function as an intra-screen prediction unit that, when performing the intra-screen prediction, suppresses use of peripheral pixels that have a boundary of a subject represented by the input image between the processing target pixel and those peripheral pixels.
- Another aspect of the present invention is a program for causing a computer of an image decoding device that, when decoding an encoded image, performs intra-screen prediction predicting the pixel value of a processing target pixel using the pixel values of peripheral pixels around the processing target pixel, to function as an intra-screen prediction unit that, when performing the intra-screen prediction, suppresses use of peripheral pixels that have a boundary of a subject represented by the encoded image between the processing target pixel and those peripheral pixels.
- DESCRIPTION OF SYMBOLS: 10 Moving image transmission system, 100 Image coding apparatus, 101 Image input part, 102 Subtraction part, 103 Orthogonal transformation part, 104 Quantization part, 105 Entropy coding part, 106 Inverse quantization part, 107 Inverse orthogonal transformation part, 108 Addition unit, 109 Prediction method control unit, 110 Selection unit, 111 Deblocking filter unit, 112 Frame memory unit, 113 Motion compensation unit, 114 Motion vector detection unit, 115 Depth information utilization intra prediction unit, 116 Depth map encoding unit, 117 Depth map decoding unit, 118 Depth input unit, 120 Inter prediction unit, 121 Intra prediction unit, 200-1 First prediction mode execution unit, 200-2 Second prediction mode execution unit, 200-n nth prediction mode execution unit, 201 Depth use prediction mode execution unit, 202 Prediction mode selection unit, 300 Boundary control predicted image generation unit, 301 Boundary prediction control unit, 302 Subject boundary detection unit, 500 Communication network, 600 Display device, 800 Image decoding device, 801 Entropy decoding unit, 802 Inverse quantization unit, 803 Inverse orthogonal transformation unit, 804 Addition unit, 805 Prediction method control unit, 806 Selection unit, 807 Deblocking filter unit, 808 Frame memory, 809 Motion compensation unit, 810 Depth information use intra prediction unit, 811 Depth map decoding unit, 812 Image output unit, 813 Encoded data input unit, 814 Depth map encoded data input unit, 820 Inter processing unit, 821 Intra processing unit, 900-1 First prediction mode execution unit, 900-2 Second prediction mode execution unit, 900-n nth prediction mode execution unit, 901 Depth use prediction mode execution unit, 902 Prediction mode selection unit
Abstract
The present invention relates to an image encoding device that, when encoding an input image, uses the pixel values of peripheral pixels around a processing target pixel to perform intra-screen prediction predicting the pixel value of the processing target pixel, characterized in that it includes an intra-screen prediction unit that determines the predicted value of each pixel based on information indicating the distance to a subject for the processing target pixel and information indicating the distance to the subject for the peripheral pixels.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2011-117425 | 2011-05-25 | ||
| JP2011117425A JP2014150297A (ja) | 2011-05-25 | 2011-05-25 | 画像符号化装置、画像復号装置、画像符号化方法、画像復号方法およびプログラム |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2012161318A1 true WO2012161318A1 (fr) | 2012-11-29 |
Family
ID=47217383
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2012/063503 Ceased WO2012161318A1 (fr) | 2011-05-25 | 2012-05-25 | Dispositif de codage d'image, dispositif de décodage d'image, procédé de codage d'image, procédé de décodage d'image et programme |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JP2014150297A (fr) |
| WO (1) | WO2012161318A1 (fr) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116320395A (zh) * | 2022-12-27 | 2023-06-23 | 维沃移动通信有限公司 | 图像处理方法、装置、电子设备及可读存储介质 |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5711636B2 (ja) * | 2011-09-26 | 2015-05-07 | 日本電信電話株式会社 | Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, and image decoding program |
| JP5729825B2 (ja) * | 2011-09-26 | 2015-06-03 | 日本電信電話株式会社 | Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, and image decoding program |
| JP5759357B2 (ja) * | 2011-12-13 | 2015-08-05 | 日本電信電話株式会社 | Video encoding method, video decoding method, video encoding device, video decoding device, video encoding program, and video decoding program |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2009090884A1 (fr) * | 2008-01-18 | 2009-07-23 | Panasonic Corporation | Image encoding method and image decoding method |
| JP2010056701A (ja) * | 2008-08-27 | 2010-03-11 | Nippon Telegr & Teleph Corp <Ntt> | Intra-frame prediction encoding method, intra-frame prediction decoding method, apparatuses therefor, programs therefor, and recording medium recording the programs |
- 2011-05-25: Priority application JP2011117425A filed in Japan (published as JP2014150297A); status: withdrawn
- 2012-05-25: PCT application PCT/JP2012/063503 filed (published as WO2012161318A1); status: ceased
Non-Patent Citations (2)
| Title |
|---|
| FENG ZOU ET AL.: "EDGE-BASED ADAPTIVE DIRECTIONAL INTRA PREDICTION", 28TH PICTURE CODING SYMPOSIUM, IEEE, 8 December 2010 (2010-12-08), pages 366 - 369 * |
| MIN-KOO KANG ET AL.: "Geometry-based Block Partitioning for Efficient Intra Prediction in Depth Video Coding", VISUAL INFORMATION PROCESSING AND COMMUNICATION, PROC. OF THE SPIE-IS&T ELECTRONIC IMAGING, SPIE, vol. 7543, 17 January 2010 (2010-01-17), pages 75430A-1 - 75430A-11 * |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2014150297A (ja) | 2014-08-21 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111971960B (zh) | Method for processing image based on inter prediction mode and device therefor | |
| KR102658929B1 (ko) | Inter prediction mode-based image processing method and device therefor | |
| US9020030B2 (en) | Smoothing overlapped regions resulting from geometric motion partitioning | |
| CN102845062B (zh) | Fixed-point implementations for geometric motion partitioning | |
| US8204120B2 (en) | Method for intra prediction coding of image data | |
| JP6039178B2 (ja) | Image encoding device, image decoding device, and methods and programs thereof | |
| KR102711456B1 (ko) | Image decoding method for chroma component and device therefor | |
| KR102775626B1 (ko) | Image decoding method for chroma component and device therefor | |
| US20240171749A1 (en) | Image encoding/decoding method and device for performing prof, and method for transmitting bitstream | |
| WO2012161318A1 (fr) | Image encoding device, image decoding device, image encoding method, image decoding method, and program | |
| KR102702823B1 (ko) | Image decoding method for chroma quantization parameter data and device therefor | |
| US20240422323A1 (en) | Image encoding/decoding method and device for performing bdof, and method for transmitting bitstream | |
| US20250211776A1 (en) | Method and apparatus for encoding/decoding image and recording medium storing bitstream | |
| KR20250067941A (ko) | Image encoding/decoding method and device based on intra prediction mode using a multi reference line (MRL), and recording medium storing a bitstream | |
| KR20250055513A (ko) | Image encoding/decoding method and device based on intra prediction mode using a multi reference line (MRL), and recording medium storing a bitstream | |
| WO2013077304A1 (fr) | Image encoding device, image decoding device, and corresponding methods and programs | |
| JP6232117B2 (ja) | Image encoding method, image decoding method, and recording medium | |
| KR20250127338A (ko) | Intra prediction-based image encoding/decoding method and device, and recording medium storing a bitstream | |
| KR20250169243A (ko) | MIP-based image encoding/decoding method, method for transmitting a bitstream, and recording medium storing a bitstream | |
| KR20250111745A (ko) | Intra prediction-based image encoding/decoding method and device, and recording medium storing a bitstream | |
| WO2025038845A1 (fr) | Methods and devices for an extrapolation filter-based prediction mode | |
| KR20250143781A (ko) | Image encoding/decoding method and device, and recording medium storing a bitstream | |
| KR20250136871A (ko) | Image encoding/decoding method and device, and recording medium storing a bitstream | |
| KR20240052698A (ko) | Image encoding/decoding method and device, and recording medium storing a bitstream | |
| EP4651480A1 (fr) | Intra prediction-based image encoding/decoding method, bitstream transmission method, and recording medium storing a bitstream |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12788713 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 12788713 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: JP |