
WO2012008180A1 - Dispositif de codage d'image (Image encoding device) - Google Patents

Image encoding device

Info

Publication number
WO2012008180A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
thinned
pixels
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2011/056044
Other languages
English (en)
Japanese (ja)
Inventor
雅俊 近藤
真生 濱本
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kokusai Denki Electric Inc
Original Assignee
Hitachi Kokusai Electric Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Kokusai Electric Inc filed Critical Hitachi Kokusai Electric Inc
Publication of WO2012008180A1 publication Critical patent/WO2012008180A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/11Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • The present invention relates to an image encoding apparatus, and more particularly to an image encoding technique that compresses video data by exploiting pixel correlation.
  • It relates especially to an image coding apparatus suited to improving coding efficiency when performing intra prediction, which generates a predicted image using the correlation between adjacent pixels within a picture.
  • The H.264/AVC encoding scheme is defined in Non-Patent Document 1 below.
  • Intra prediction creates a predicted image of a processing target image using an already encoded image (reference image) in the encoding target picture, and transmits the difference between the predicted image and the processing target image; it is a prediction method for reducing the amount of information. Because intra prediction uses only the spatial correlation within an image, a video distorted by a stream transmission error or the like can be restored to a normal video.
  • a prediction image is generated from neighboring pixels of a target block for a block to be encoded, and a residual component obtained by comparing the target block with the prediction image is compressed and encoded.
  • In the intra prediction defined by the H.264/AVC encoding method, the prediction processing can be performed by selecting the block size from three sizes: 4×4, 8×8, and 16×16 pixels.
  • the optimum method can be selected from a plurality of prepared prediction methods for each block size.
  • Nine types of prediction methods are defined for 4×4 and 8×8 pixel block size intra prediction, and four types of prediction methods are defined for 16×16 pixel block size intra prediction.
  • FIG. 39 is a diagram illustrating the positional relationship between a target 4×4 block 11 and the reference pixels 12 used for generating a predicted image of the block.
  • FIG. 40 is a conceptual diagram of the nine types of prediction methods for 4×4 blocks.
  • FIG. 41 is a diagram showing the order in which macroblocks in a picture are processed in encoding.
  • FIG. 42 is a diagram showing the order in which 4×4 blocks in a macroblock are processed in encoding.
  • a pixel used for predictive image generation is referred to as a “predictive reference pixel” or simply a “reference pixel”.
  • a predicted image can be generated from the reference pixels 12 described above using nine types of prediction methods (hereinafter referred to as “prediction mode”).
  • a predicted image is generated by copying the upper four pixels downward.
  • a predicted image is generated by copying the left four pixels in the right direction.
  • In directional intra prediction, a predicted image is generated along a fixed direction from the reference pixels to the left, above, and to the upper right.
  • In DC prediction, a predicted image is generated from the average value of the eight neighboring pixels.
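The three basic modes just described (vertical copy, horizontal copy, and DC average) can be sketched as follows. This is an illustrative Python sketch, not part of the patent disclosure; the function name and the `top`/`left` reference-pixel arrays are assumptions.

```python
def predict_4x4(top, left, mode):
    """Build a 4x4 predicted block from neighboring reference pixels.

    top  -- the 4 reference pixels above the block
    left -- the 4 reference pixels to its left
    """
    if mode == "vertical":    # copy the 4 upper pixels downward
        return [list(top) for _ in range(4)]
    if mode == "horizontal":  # copy the 4 left pixels in the right direction
        return [[left[r]] * 4 for r in range(4)]
    if mode == "dc":          # average of the 8 neighboring pixels, rounded
        dc = (sum(top) + sum(left) + 4) // 8
        return [[dc] * 4 for _ in range(4)]
    raise ValueError(mode)

top = [10, 20, 30, 40]    # toy reference pixels above
left = [50, 60, 70, 80]   # toy reference pixels to the left
pred = predict_4x4(top, left, "dc")
```

The `+ 4` before the integer division rounds the 8-pixel average to the nearest integer, mirroring the rounding convention used in the formulas later in this document.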
  • An input picture is divided into units called macroblocks (MB) of 16×16 pixels, and encoding processing is performed in units of MBs.
  • the order in which MBs in a picture are processed is as shown in FIG. 41, and processing is performed in the raster scan order from the upper left of the picture.
  • the MB is divided into 4 ⁇ 4 blocks for processing, and the processing order is as shown in FIG.
  • the reference pixels used for predictive pixel generation are already processed block pixels.
  • block 4 in FIG. 42 can use the pixels of blocks 1, 2, and 3 that have already been processed, but cannot use blocks 5, 7, 9, 10, and 13 that have not yet been processed.
  • JVT: Joint Video Team
  • However, the conventional intra prediction method has the problem that reference pixels to the right, below, and so on cannot be used, owing to the block processing order restrictions shown in FIG. 41 and FIG. 42.
  • the processing order of macroblocks is from left to right on the picture and from top to bottom.
  • That is, the reference image that can be used for predicted image creation by intra prediction is limited to the neighboring pixels above and to the left of the encoding target macroblock.
  • Consequently, the prediction accuracy decreases toward the lower right of the encoding target macroblock as shown in FIG. 40, and there is the problem that the prediction error, the difference between the input image and the predicted image, becomes large.
  • In the latest video compression coding standard, H.264/AVC, a normal 16×16 pixel macroblock can be divided into 8×8 or 4×4 pixel sub-blocks for encoding, so prediction errors can be reduced through improved prediction accuracy.
  • However, sub-block division increases the amount of information, that is, the amount of information in the macroblock header.
  • If a predicted image could be generated bidirectionally, from both the upper and lower reference pixels, a more accurate predicted image could be generated.
  • However, such a prediction method cannot be realized with the conventional method because of its processing order restrictions.
  • The present invention has been made to solve the above-described problems, and its object is to improve prediction accuracy and thereby improve encoding efficiency.
  • the image encoding apparatus of the present invention creates N (N is a natural number of 2 or more) thinned images from an encoding target pixel block when encoding the encoding target pixel block using intra prediction.
  • the first thinned image to the Nth thinned image are processed in order.
  • When the i-th thinned image (2 ≤ i ≤ N) is processed, pixels of the already processed first through (i−1)-th thinned images located around the prediction target pixel are used as reference pixels, and the predicted image of each pixel of the i-th thinned image is generated from them.
  • According to the image encoding apparatus of the present invention that performs intra prediction, it is possible to prepare nine peripheral reference pixels for a prediction target pixel, so a well-suited intra prediction image can be obtained and encoding efficiency can be improved.
  • That is, in the intra prediction of an image coding apparatus according to the H.264/AVC standard, prediction accuracy can be improved and coding efficiency can be improved.
  • FIG. 20 is a diagram redrawn from FIG. to show the relationship between the first block and the reference pixels in an easy-to-understand manner. Further figures show: the prediction target block and reference pixels in the second block; the reference pixels around the pixel p in the second block; a conceptual diagram of each mode in which a reference pixel is generated from surrounding pixels; the prediction target block and reference pixels in the third block; and the reference pixels around the pixel o in the third block.
  • A first embodiment according to the present invention will be described below with reference to the drawings. (I) Outline of Intra Prediction Method: First, an outline of the intra prediction method according to the first embodiment of the present invention will be described using FIG. 1 and FIG. 2.
  • FIG. 1 is a diagram showing a comparison between a pixel having a high prediction accuracy and a pixel having a low prediction accuracy by an intra prediction method according to the related art.
  • FIG. 2 is a diagram showing a comparison between pixels with high prediction accuracy and pixels with low prediction accuracy according to the intra prediction method according to the first embodiment of the present invention.
  • FIG. 3 is a diagram for explaining the creation of a thinned image.
  • In the conventional method, a predicted image of the encoding target macroblock 201 is created using only reference pixels to the left of or above it. For this reason, as shown in FIG. 1, the prediction accuracy of pixels close to the reference pixels is high, but the prediction accuracy of pixels far from the reference pixels is low.
  • In contrast, the intra prediction method according to the present embodiment divides the encoding target pixel block 301 into thinned images and predicts the pixels of some of the thinned images using surrounding pixels as reference pixels. As shown in FIG. 2, the reference pixels and the prediction target pixels are close in distance, and high prediction accuracy can be realized using reference pixels in both the vertical and horizontal directions. In addition, since macroblock header information can be greatly reduced compared with the conventional intra prediction method, this contributes to improved coding efficiency. Furthermore, the intra prediction method according to the present embodiment can be used in combination with the conventional intra prediction method.
  • At the time of encoding, a slice is used as the encoding target pixel block 301, and thinned images are generated by dividing it into four.
  • A slice is an image area that is the basic unit of encoding processing and decoding processing, and each slice on a picture can be encoded and decoded independently of the other slices.
  • FIG. 4 is a flowchart showing an image encoding process using the intra prediction method according to the first embodiment of the present invention.
  • FIG. 5 is a diagram illustrating a reconstructed image of the first thinned image and a predicted image of the second thinned image.
  • FIG. 6 is a diagram illustrating a reference relationship between the first thinned image and the second thinned image.
  • FIG. 7 is a diagram illustrating a reconstructed image of the first thinned image and a predicted image of the third thinned image.
  • FIG. 8 is a diagram showing a reference relationship between the first thinned image and the third thinned image.
  • FIG. 9 is a diagram showing the reconstructed image of the second thinned image, the reconstructed image of the third thinned image, and the predicted image of the fourth thinned image.
  • FIG. 10 is a diagram illustrating a reference relationship between the second thinned image, the third thinned image, and the fourth thinned image.
  • an input image for each slice is acquired (step 401). This is because the intra prediction method according to the present embodiment creates a predicted image in units of slices.
  • Next, thinned images are created based on the image obtained in step 401 of acquiring the slice unit input image (step 402). As shown in FIG. 3, the thinned images are created by sampling pixels at equal intervals, producing a first thinned image 511, a second thinned image 512, a third thinned image 513, and a fourth thinned image 514.
  • the horizontal pixel number of the slice unit input image 501 is H (H is an even number), the vertical pixel number is V (V is an even number), and the pixel at the coordinates (x, y) of the slice unit input image 501 is Pi (x, y).
  • the pixel at the coordinate (x, y) of the first thinned image 511 is Pi1 (x, y)
  • the pixel at the coordinate (x, y) of the second thinned image 512 is Pi2 (x, y)
  • the pixel at the coordinate (x, y) of the third thinned image 513 is Pi3(x, y), and the pixel at the coordinate (x, y) of the fourth thinned image 514 is Pi4(x, y).
  • Then Pi1(i, j), Pi2(i, j), Pi3(i, j), and Pi4(i, j) at the coordinate (i, j) can be expressed as follows: Pi1(i, j) = Pi(2i, 2j) (Formula 1), Pi2(i, j) = Pi(2i+1, 2j) (Formula 2), Pi3(i, j) = Pi(2i, 2j+1) (Formula 3), and Pi4(i, j) = Pi(2i+1, 2j+1) (Formula 4). However, i is an integer that satisfies 0 ≤ i ≤ (H/2−1), and j is an integer that satisfies 0 ≤ j ≤ (V/2−1).
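The equal-interval sampling of step 402 can be illustrated with a short Python sketch. This is not from the patent itself; the function name is an assumption, the image is represented as a list of rows `img[y][x]`, and the dimensions are assumed even as stated above.

```python
def make_thinned_images(img):
    """Split an even-sized image into four half-resolution thinned images:
    p1[j][i] = img[2j][2i]       (first thinned image)
    p2[j][i] = img[2j][2i+1]     (second: one pixel to the right)
    p3[j][i] = img[2j+1][2i]     (third: one pixel down)
    p4[j][i] = img[2j+1][2i+1]   (fourth: right and down)"""
    p1 = [row[0::2] for row in img[0::2]]
    p2 = [row[1::2] for row in img[0::2]]
    p3 = [row[0::2] for row in img[1::2]]
    p4 = [row[1::2] for row in img[1::2]]
    return p1, p2, p3, p4

img = [[4 * y + x for x in range(4)] for y in range(4)]  # toy 4x4 "slice"
p1, p2, p3, p4 = make_thinned_images(img)
```

Each thinned image has half the width and half the height of the input slice, matching the index ranges 0 ≤ i ≤ H/2−1 and 0 ≤ j ≤ V/2−1.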
  • the first thinned image 511 created by the thinned image creation processing 402 is subjected to coding processing by the conventional intra prediction method (step 403).
  • The conventional intra prediction method here is a method such as H.264/AVC that, for each macroblock (16×16 pixels) in the first thinned image 511, refers to adjacent pixels to the left or above and adaptively creates a predicted image in units of 16×16, 8×8, or 4×4 pixels.
  • a prediction image is created by an intra prediction method or the like (step 411).
  • the difference (prediction error) between the predicted image created in step 411 and the input image is calculated (step 412).
  • orthogonal transformation processing is performed on the prediction error calculated in step 412 to obtain data after orthogonal transformation (step 413).
  • quantization processing is performed on the orthogonally transformed data obtained in step 413 to obtain quantized data (step 414).
  • Entropy coding processing is then performed on the quantized data obtained in step 414 (step 415).
  • Since the encoding of the first thinned image 511 requires particularly high image quality, it is preferable to set the quantization step of the quantization processing 414 small.
  • Following step 403, the encoding of the first thinned image 511, a predicted image is created for the second thinned image 512 created by the thinned image creation processing 402, and its encoding processing is performed (step 404).
  • the reconstructed image 601 of the first thinned image that has already been encoded is used as a reference image for creating a predicted image of the second thinned image 512.
  • the reconstructed image is created by performing inverse quantization processing and inverse orthogonal transform processing on the quantized data obtained by the quantization processing 414 and adding the predicted image created by the predicted image creation processing 411. This is the same image as the image obtained by the decoding process.
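The reconstruction arithmetic just described can be shown with a toy Python sketch. The quantization step and the pixel values are illustrative assumptions, and the orthogonal transform and its inverse are omitted for brevity, so only the quantize/dequantize/add path is shown.

```python
Q = 4  # hypothetical quantization step (not a value from the patent)

pred = [[100, 102], [104, 106]]   # predicted image (step 411)
orig = [[103, 101], [109, 104]]   # input thinned image

# prediction error (step 412)
residual = [[o - p for o, p in zip(ro, rp)] for ro, rp in zip(orig, pred)]
# quantization (step 414) and inverse quantization
quantized = [[round(r / Q) for r in row] for row in residual]
dequant = [[q * Q for q in row] for row in quantized]
# reconstructed image = predicted image + dequantized residual
recon = [[p + d for p, d in zip(rp, rd)] for rp, rd in zip(pred, dequant)]
```

Because rounding is to the nearest step, each reconstructed pixel differs from the input by at most Q/2; a smaller quantization step for the first thinned image therefore yields a higher-quality reference, as noted above.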
  • the second thinned image 512 can be considered as an interpolated pixel in the horizontal direction with respect to the first thinned image 511, as shown in FIG. Therefore, it is effective to apply an image with interpolation pixels created by applying a horizontal filter to the reconstructed image 601 of the first thinned image as the predicted image of the second thinned image 512.
  • a predicted image of the second thinned image 512 (hereinafter, a predicted image 611 of the second thinned image) is created by creating an interpolation pixel using a horizontal 2-tap filter.
  • the pixel of the coordinate (x, y) of the predicted image 611 of the second thinned image is set as Pp2 (x, y), and the pixel of the coordinate (x, y) of the reconstructed image 601 of the first thinned image is Pr1 (x, y).
  • the pixel Pp2 (n, m) 612 at the coordinates (n, m) of the predicted image 611 of the second thinned image can be expressed by the following (formula 5). That is, the average of the reconstructed image 601 of the first thinned image on both the left and right sides of the pixel of the predicted image 611 of the second thinned image is taken.
  • the reason why the numerator is +1 is to round off the last bit.
  • Here, n is an integer that satisfies 0 ≤ n ≤ (H/2−1), and m is an integer that satisfies 0 ≤ m ≤ (V/2−1).
  • H and V are respectively the number of horizontal pixels and the number of vertical pixels of the slice unit input image 501, both of which are even numbers.
  • When n = H/2−1, the pixel Pr1(H/2, y), that is, a pixel outside the screen of the reconstructed image 601 of the first thinned image, is required.
  • In that case, Pr1(H/2−1, y), that is, the screen-edge pixel, may be copied and used as the off-screen pixel.
  • Pp2(n, m) = (Pr1(n, m) + Pr1(n+1, m) + 1) / 2 … (Formula 5)
  • In addition, the predicted image 611 of the second thinned image may be created by pixel interpolation using a multi-tap filter or a two-dimensional filter that can express higher frequency components.
  • Following step 404, the encoding of the second thinned image 512, a predicted image is created for the third thinned image 513 created by the thinned image creation processing 402, and its encoding processing is performed (step 405).
  • the reconstructed image 601 of the first thinned image that has already been encoded is used as a reference image.
  • The third thinned image 513 can be considered as interpolation pixels in the vertical direction with respect to the first thinned image 511, as shown in FIG. 8. Therefore, it is effective to apply, as the predicted image of the third thinned image 513, an image with interpolation pixels created by applying a vertical filter to the reconstructed image 601 of the first thinned image.
  • A predicted image of the third thinned image 513 (hereinafter, predicted image 811 of the third thinned image) is created by interpolation-pixel creation using a vertical 2-tap filter. That is, the average is taken of the pixels of the reconstructed image 601 of the first thinned image immediately above and below each pixel of the predicted image 811 of the third thinned image.
  • If the pixel at the coordinate (x, y) of the predicted image 811 of the third thinned image is Pp3(x, y), and the pixel at the coordinate (x, y) of the reconstructed image 601 of the first thinned image is Pr1(x, y), the pixel Pp3(n, m) 812 at the coordinates (n, m) of the predicted image 811 of the third thinned image can be expressed by the following (Formula 6): Pp3(n, m) = (Pr1(n, m) + Pr1(n, m+1) + 1) / 2 … (Formula 6)
  • Here, n is an integer that satisfies 0 ≤ n ≤ (H/2−1), and m is an integer that satisfies 0 ≤ m ≤ (V/2−1).
  • H and V are respectively the number of horizontal pixels and the number of vertical pixels of the slice unit input image 501, both of which are even numbers.
  • When m = V/2−1, the pixel Pr1(x, V/2), that is, an off-screen pixel of the reconstructed image 601 of the first thinned image, is required.
  • In that case, as in the horizontal direction, the screen-edge pixel Pr1(x, V/2−1) may be copied and used as the off-screen pixel.
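The vertical counterpart, Formula 6, differs only in the direction of the neighbor. As before, this is an illustrative Python sketch with an assumed function name and `pr1[m][n]` row-major indexing, not the patent's implementation.

```python
def predict_third(pr1):
    """Pp3(n, m) = (Pr1(n, m) + Pr1(n, m+1) + 1) // 2.

    The out-of-screen row Pr1(n, V/2) is replaced by the
    screen-edge pixel Pr1(n, V/2-1), as described above."""
    v = len(pr1)
    h = len(pr1[0])
    return [[(pr1[m][n] + pr1[min(m + 1, v - 1)][n] + 1) // 2
             for n in range(h)] for m in range(v)]

pr1 = [[10, 20], [30, 41], [50, 60]]  # toy reconstructed first thinned image
pp3 = predict_third(pr1)
```

Note that the last row simply reproduces the reconstructed pixels, since both filter taps land on the clamped screen-edge row.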
  • Following step 405, the encoding of the third thinned image 513, a predicted image is created for the fourth thinned image 514 created by the thinned image creation processing 402, and its encoding processing is performed (step 406).
  • The predicted image of the fourth thinned image 514 is generated using, as reference images, the reconstructed image 1001 of the second thinned image and the reconstructed image 1002 of the third thinned image, both of which have already been encoded.
  • The fourth thinned image 514 can be considered as interpolation pixels in the vertical direction with respect to the second thinned image 512, and as interpolation pixels in the horizontal direction with respect to the third thinned image 513. Therefore, it is effective to apply, as the predicted image of the fourth thinned image 514, an image with interpolation pixels created by applying a two-dimensional filter to the reconstructed image 1001 of the second thinned image and the reconstructed image 1002 of the third thinned image.
  • A predicted image of the fourth thinned image 514 (hereinafter, predicted image 1011 of the fourth thinned image) is created by creating interpolation pixels using a two-dimensional filter with two taps each in the horizontal and vertical directions.
  • The pixel at the coordinate (x, y) of the predicted image 1011 of the fourth thinned image is Pp4(x, y), the pixel at the coordinate (x, y) of the reconstructed image 1001 of the second thinned image is Pr2(x, y), and the pixel at the coordinate (x, y) of the reconstructed image 1002 of the third thinned image is Pr3(x, y).
  • Then the pixel Pp4(n, m) 1012 at the coordinates (n, m) of the predicted image 1011 of the fourth thinned image can be expressed by the following (Formula 7): Pp4(n, m) = (Pr2(n, m) + Pr2(n, m+1) + Pr3(n, m) + Pr3(n+1, m) + 2) / 4 … (Formula 7)
  • That is, the average is taken of the pixels of the reconstructed image 1001 of the second thinned image immediately above and below each pixel of the predicted image 1011 of the fourth thinned image, and the pixels of the reconstructed image 1002 of the third thinned image immediately to its left and right.
  • The reason why +2 is used in the numerator is to round off the last bits.
  • Here, n is an integer that satisfies 0 ≤ n ≤ (H/2−1), and m is an integer that satisfies 0 ≤ m ≤ (V/2−1).
  • H and V are respectively the number of horizontal pixels and the number of vertical pixels of the slice unit input image 501, both of which are even numbers.
  • When n = H/2−1 or m = V/2−1, the pixel Pr3(H/2, y) or Pr2(x, V/2), that is, an off-screen pixel, is required; in that case, as before, the screen-edge pixel may be copied and used as the off-screen pixel.
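The two-dimensional prediction of Formula 7 can be sketched the same way. This is an illustrative Python sketch (assumed function name, row-major `pr[m][n]` indexing), not the patent's implementation.

```python
def predict_fourth(pr2, pr3):
    """Pp4(n, m) = (Pr2(n, m) + Pr2(n, m+1)
                    + Pr3(n, m) + Pr3(n+1, m) + 2) // 4.

    Out-of-screen references are clamped to the screen edge,
    as described for the horizontal and vertical filters."""
    v, h = len(pr2), len(pr2[0])
    return [[(pr2[m][n] + pr2[min(m + 1, v - 1)][n]      # above / below
              + pr3[m][n] + pr3[m][min(n + 1, h - 1)]    # left / right
              + 2) // 4
             for n in range(h)] for m in range(v)]

pr2 = [[8, 12], [16, 20]]   # toy reconstructed second thinned image
pr3 = [[4, 8], [12, 16]]    # toy reconstructed third thinned image
pp4 = predict_fourth(pr2, pr3)
```

With four taps, the rounding offset becomes `+ 2` before the division by 4, matching the remark about rounding off the last bits.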
  • FIG. 11 is a configuration diagram of the image encoding device according to the first embodiment of the present invention.
  • The image coding apparatus 100 includes a thinned image creation unit 101, a thinned image buffer 102, a first intra prediction processing unit 103, a prediction error calculation processing unit 104, an orthogonal transformation processing unit 105, a quantization processing unit 106, an inverse quantization processing unit 107, an inverse orthogonal transform processing unit 108, a reconstructed image creation unit 109, a reconstructed image buffer 110, a second intra prediction processing unit 111, and an entropy encoding processing unit 112.
  • the thinned image creating unit 101 is a processing unit that performs the thinned image creating process 402 and creates four thinned images from the first thinned image 511 to the fourth thinned image 514 based on the input image 151.
  • The thinned image buffer 102 is a buffer that temporarily stores the first thinned image 511, the second thinned image 512, the third thinned image 513, and the fourth thinned image 514, and adjusts the output timing of each thinned image.
  • the first thinned image 511, the second thinned image 512, the third thinned image 513, and the fourth thinned image 514 are output in this order.
  • the first intra prediction processing unit 103 is a processing unit that creates a predicted image by the conventional intra prediction method.
  • The prediction error calculation processing unit 104 is a processing unit that calculates the difference (that is, the prediction error) between the thinned image of the input image and the predicted image; it executes the prediction error calculation processing 412.
  • the orthogonal transformation processing unit 105 is a processing unit that executes the orthogonal transformation processing 413.
  • the quantization processing unit 106 is a processing unit that executes the quantization processing 414.
  • the inverse quantization processing unit 107 is a processing unit that performs inverse quantization processing on the quantized data output from the quantization processing unit 106.
  • the inverse orthogonal transform processing unit 108 is a processing unit that performs an inverse orthogonal transform process on the dequantized data output from the inverse quantization processing unit 107.
  • the reconstructed image creation unit 109 is a processing unit that creates the reconstructed image by adding the data after inverse orthogonal transform output from the inverse orthogonal transform processing unit 108 and the predicted image input to the prediction error calculation processing unit 104.
  • the reconstructed image buffer 110 is a buffer for storing the reconstructed image created by the reconstructed image creating unit 109.
  • The second intra prediction processing unit 111 acquires the reconstructed images of the necessary thinned images from the reconstructed image buffer 110 and creates the predicted images of the second thinned image 512, the third thinned image 513, and the fourth thinned image 514.
  • That is, the second intra prediction processing unit 111 is a processing unit that performs the predicted image creation processing of step 404 (encoding of the second thinned image), step 405 (encoding of the third thinned image), and step 406 (encoding of the fourth thinned image).
  • the entropy encoding processing unit 112 is a processing unit that performs entropy encoding processing 415 and outputs a bit stream 152.
  • The image coding apparatus 100 differs greatly from the conventional image coding apparatus in that it includes the above-described thinned image creation unit 101, thinned image buffer 102, reconstructed image buffer 110, and second intra prediction processing unit 111.
  • (IV) Configuration of Bitstream Next, the configuration of a bitstream obtained as an output of image encoding processing in the intra prediction method of the present embodiment will be described using FIG.
  • FIG. 12 is a configuration diagram of a bitstream obtained as an output of the image encoding process in the intra prediction method according to the first embodiment of the present invention.
  • The bit stream obtained as the output of the image encoding process consists of slice header information 1201, code information 1211 for the first thinned image, code information 1212 for the second thinned image, code information 1213 for the third thinned image, and code information 1214 for the fourth thinned image, and these pieces of information are output in that order.
  • The code information 1214 for the fourth thinned image is followed by the next slice header information 1202. Since the first thinned image is encoded by the same method as the conventional intra prediction method, its code information 1211 includes macroblock header information.
  • FIG. 13 is a flowchart showing an image decoding process according to the first embodiment of the present invention.
  • FIG. 14 is a diagram for explaining how a combined image is created from a decoded image of a thinned image.
  • the first thinned image is decoded (step 1301).
  • Decoding processing is performed according to the following procedure.
  • Post-quantization data is created from the bit stream by entropy decoding (step 1311), and inverse quantization is performed on it to obtain the data after orthogonal transform.
  • A prediction error is generated from the data after the orthogonal transform by inverse orthogonal transform (step 1313).
  • A predicted image is created by the conventional intra prediction method.
  • The prediction error and the predicted image are then added to create a decoded image (step 1315).
  • step 1302 the decoding process of the second thinned image is performed.
  • step 1303 the decoding process of the third thinned image is performed.
  • the same predicted image creation process as the predicted image creation process in step 411 in the third thinned image encoding process in step 405 is performed.
  • the decoding process of the fourth thinned image is performed (step 1304).
  • the same predicted image creation process as the predicted image creation process in step 411 in the fourth thinned image encoding process in step 406 is performed.
  • The decoded image 1401 of the first thinned image, the decoded image 1402 of the second thinned image, the decoded image 1403 of the third thinned image, and the decoded image 1404 of the fourth thinned image obtained by the respective decoding processes are combined to create a combined image 1411 (step 1305).
  • This process is equivalent to performing the reverse process to the thinned image creation process 402. Since the intra prediction method according to the present invention uses slices as the basic unit, the combined image 1411 is an image in units of slices.
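The combining of step 1305, the inverse of the thinning of step 402, amounts to re-interleaving the four half-resolution images. A minimal Python sketch (assumed function name; images as lists of rows) follows.

```python
def combine_images(d1, d2, d3, d4):
    """Interleave four half-resolution decoded images back into the
    full-resolution slice: the inverse of the thinning of step 402."""
    out = []
    for j in range(len(d1)):
        # even output row 2j: pixels of d1 and d2 alternate
        out.append([px for pair in zip(d1[j], d2[j]) for px in pair])
        # odd output row 2j+1: pixels of d3 and d4 alternate
        out.append([px for pair in zip(d3[j], d4[j]) for px in pair])
    return out

# round-trip demo: thin a toy 4x4 slice, then recombine
img = [[4 * y + x for x in range(4)] for y in range(4)]
d1 = [row[0::2] for row in img[0::2]]
d2 = [row[1::2] for row in img[0::2]]
d3 = [row[0::2] for row in img[1::2]]
d4 = [row[1::2] for row in img[1::2]]
combined = combine_images(d1, d2, d3, d4)
```

Because the thinning and combining are exact inverses, the round trip recovers the original slice pixel for pixel (any loss in the real system comes from quantization, not from this rearrangement).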
  • FIG. 15 is a block diagram of an image decoding apparatus according to the first embodiment of the present invention.
  • The image decoding apparatus 1500 includes an entropy decoding processing unit 1501, an inverse quantization processing unit 107, an inverse orthogonal transform processing unit 108, a reconstructed image creation unit 109, a reconstructed image buffer 110, a first intra prediction processing unit 103, a second intra prediction processing unit 111, a thinned image buffer 1502, and a combined image creation unit 1503.
  • The inverse quantization processing unit 107, the inverse orthogonal transform processing unit 108, the reconstructed image creation unit 109, the reconstructed image buffer 110, and the first intra prediction processing unit 103 are the same as those described for the image coding apparatus 100 in FIG. 11.
  • the entropy decoding processing unit 1501 is a processing unit that decodes a bitstream and creates post-quantization data and the like, and performs processing corresponding to the entropy decoding processing in step 1311.
  • the thinned image buffer 1502 is a buffer that temporarily stores a decoded image of the thinned image.
  • The combined image creation unit 1503 is a processing unit that performs the combined image creation shown in step 1305.
  • The image decoding apparatus 1500 first converts the code information 1211 related to the first thinned image shown in FIG. 12 into quantized data and the like in the entropy decoding processing unit 1501, performs inverse quantization and inverse orthogonal transform in the inverse quantization processing unit 107 and the inverse orthogonal transform processing unit 108, and adds the predicted image created by the first intra prediction processing unit 103 to obtain the decoded image of the first thinned image, which is stored in the thinned image buffer 1502.
  • The code information 1212 related to the second thinned image shown in FIG. 12 is likewise converted into quantized data by the entropy decoding processing unit 1501 and put through inverse quantization and inverse orthogonal transform in the inverse quantization processing unit 107 and the inverse orthogonal transform processing unit 108; the decoded image of the second thinned image is obtained by adding the predicted image created by the second intra prediction processing unit 111 and is stored in the thinned image buffer 1502.
  • The code information 1213 related to the third thinned image and the code information 1214 related to the fourth thinned image shown in FIG. 12 are processed in the same manner as the code information 1212 related to the second thinned image, and the resulting decoded images of the third and fourth thinned images are stored in the thinned image buffer 1502.
  • the combined image creation unit 1503 reads the decoded images of all the thinned images from the thinned image buffer 1502, creates a combined image, and outputs it as a decoded image 1551.
  • The image decoding device 1500 differs greatly from a conventional image decoding device in that it includes the reconstructed image buffer 110, the second intra prediction processing unit 111, the thinned image buffer 1502, and the combined image creation unit 1503 described above.
  • In the first embodiment, an image encoding process has been described in which the encoding target pixel block is divided to generate several thinned images that are used as predicted images in intra prediction.
  • This embodiment describes an image encoding device and an image decoding device that can improve coding efficiency by using the intra prediction method according to the present invention in combination with a conventional intra prediction method.
  • FIG. 16 is a configuration diagram of an image encoding device according to the second embodiment of the present invention.
  • FIG. 17 is a block diagram of an image decoding apparatus according to the second embodiment of the present invention.
  • The image encoding device 1600 includes a first image encoding device 1601, a second image encoding device 1602, an encoding efficiency statistical processing unit 1606, a stream buffer 1603, a stream buffer 1604, and an output selection processing unit 1605.
  • The first image encoding device 1601 is an image encoding device using the intra prediction method according to the first embodiment.
  • The second image encoding device 1602 is an image encoding device using a conventional intra prediction method such as H.264/AVC.
  • Based on statistical data serving as indices of code amount and image quality in units of slices, the encoding efficiency statistical processing unit 1606 outputs a control signal for selecting whichever of the bit stream obtained from the first image encoding device 1601 and the bit stream obtained from the second image encoding device 1602 gives the better encoding efficiency.
  • the stream buffer 1603 is a buffer that temporarily stores the bit stream output from the first image encoding device 1601, and has a function of managing and outputting the bit stream data for each slice unit.
  • the stream buffer 1604 is a buffer for temporarily storing the bit stream output from the second image encoding device 1602 and has a function of managing and outputting the bit stream data for each slice unit.
  • In accordance with the control signal output from the encoding efficiency statistical processing unit 1606, the output selection processing unit 1605 switches between the bit streams output from the stream buffer 1603 and the stream buffer 1604 in units of slices. It also has a function of adding, as slice header information of the bit stream, a 1-bit code identifying whether the slice uses the intra prediction method according to the first embodiment or the conventional intra prediction method.
  • The image encoding device 1600 performs image encoding on the input image 1651 in both the first image encoding device 1601 and the second image encoding device 1602, and outputs the resulting bit streams to the stream buffer 1603 and the stream buffer 1604, respectively. The first image encoding device 1601 also outputs statistical data 1631, serving as indices of generated code amount and image quality in units of slices, to the encoding efficiency statistical processing unit 1606; the second image encoding device 1602 likewise outputs statistical data 1632.
  • Based on the input statistical data, the encoding efficiency statistical processing unit 1606 outputs control information 1633 for selecting the bit stream with the better encoding efficiency to the output selection processing unit 1605, which, following the control information 1633, switches between the bit streams output from the stream buffer 1603 and the stream buffer 1604 in units of slices and outputs the result as a bit stream 1652. At that time, a 1-bit code identifying the intra prediction method of the first embodiment or the conventional intra prediction method is added to the slice header of the adopted bit stream.
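A minimal sketch of this per-slice selection follows. The scalar per-slice cost (combining code amount and image quality) and the one stand-in byte for the 1-bit method-identification code are illustrative assumptions; the text specifies only that a method flag is carried in the slice header.

```python
# Per-slice bitstream selection, as performed jointly by the encoding
# efficiency statistical processing unit (1606) and the output selection
# processing unit (1605). The cost metric is an assumed stand-in for the
# statistical data 1631/1632.

def select_slices(slices_a, slices_b):
    """slices_a / slices_b: per-slice (bitstream_bytes, cost) pairs from
    the two encoders. Returns one chosen bitstream per slice, prefixed
    with a 1-byte stand-in for the 1-bit method-identification code."""
    out = []
    for (bs_a, cost_a), (bs_b, cost_b) in zip(slices_a, slices_b):
        if cost_a <= cost_b:
            out.append(b"\x00" + bs_a)  # intra prediction of the first embodiment
        else:
            out.append(b"\x01" + bs_b)  # conventional intra prediction
    return out
```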
  • the image decoding apparatus 1700 includes an entropy decoding processing unit 1701, an inverse quantization processing unit 107, an inverse orthogonal transform processing unit 108, a reconstructed image creating unit 109, a reconstructed image buffer. 110, a first intra prediction processing unit 103, a second intra prediction processing unit 111, a thinned image buffer 1502, a combined image creation unit 1503, an output selection processing unit 1702, and a delay buffer 1703.
  • The inverse quantization processing unit 107, the inverse orthogonal transform processing unit 108, the reconstructed image creation unit 109, the reconstructed image buffer 110, the first intra prediction processing unit 103, and the second intra prediction processing unit 111 are the same as those in the first embodiment; the thinned image buffer 1502 and the combined image creation unit 1503 are the same as those described with reference to FIG. 15.
  • The entropy decoding processing unit 1701 determines, from the slice header information of the input bit stream 1652, whether the bit stream below that slice header uses the intra prediction method according to the first embodiment or the conventional intra prediction method, outputs the determination result to the output selection processing unit 1702 as determination result information 1731, and decodes the bit stream to output quantized data and the like.
  • The output selection processing unit 1702 selects between the output of the delay buffer 1703 and the output of the combined image creation unit 1503 based on the determination result information 1731, and outputs the result as a decoded image 1751.
  • The delay buffer 1703 absorbs the difference between the processing delay in obtaining a decoded image from a bit stream of the intra prediction method according to the first embodiment and that of the conventional intra prediction method; it requires enough capacity to store about one slice of decoded image data.
  • When the entropy decoding processing unit 1701 of the image decoding device 1700 determines that the bit stream uses the intra prediction method according to the first embodiment, the quantized data obtained by decoding is processed by the inverse quantization processing unit 107, the inverse orthogonal transform processing unit 108, the reconstructed image creation unit 109, the reconstructed image buffer 110, the first intra prediction processing unit 103, the second intra prediction processing unit 111, the thinned image buffer 1502, and the combined image creation unit 1503, in the same manner as in the image decoding device 1500, to obtain a decoded image of the intra prediction method according to the first embodiment.
  • When the entropy decoding processing unit 1701 determines that the bit stream uses the conventional intra prediction method, the quantized data obtained by decoding is processed by the inverse quantization processing unit 107, the inverse orthogonal transform processing unit 108, the reconstructed image creation unit 109, and the first intra prediction processing unit 103 to create a decoded image of the conventional intra prediction method, which is stored in the delay buffer 1703.
  • If the determination result information 1731 indicates the intra prediction method according to the first embodiment, the output selection processing unit 1702 outputs the decoded image from the combined image creation unit 1503; if it indicates the conventional intra prediction method, it outputs the decoded image from the delay buffer 1703.
  • In this embodiment, the MB is divided into a plurality of basic blocks in a manner different from that of the first embodiment, and a method of creating prediction pixels for the intra prediction of the pixels of one basic block by referring to the pixels of the other basic blocks will be described.
  • FIG. 18 is a diagram showing how to divide an MB into 4 ⁇ 4 blocks and the processing order of the divided blocks.
  • The input 16×16 pixel MB is divided into 8×8 pixel blocks, and from each 8×8 block the corresponding pixels are extracted as follows; each extracted pixel group forms a 4×4 block.
  • Each of these blocks, from the first block to the fourth block, is a basic block on which prediction is performed.
  • First block: p(2x, 2y), x, y = 1, 2, 3, 4
  • Second block: p(2x−1, 2y−1), x, y = 1, 2, 3, 4
  • Third block: p(2x, 2y−1), x, y = 1, 2, 3, 4
  • Fourth block: p(2x−1, 2y), x, y = 1, 2, 3, 4
  • p(x, y) indicates a pixel position in the 8×8 block, where x and y can each take positions 1 to 8.
  • Processing within the 8×8 block proceeds in the order of the first block, the second block, the third block, and the fourth block.
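The pixel extraction above can be sketched in code. The third block's phase is not explicit in the source text; p(2x, 2y−1) is inferred here from the remaining pixel positions and is an assumption.

```python
# Split an 8x8 block into the four 4x4 basic blocks defined above.
# p(x, y) maps to block8[y-1][x-1] with x, y in 1..8.

def split_basic_blocks(block8):
    """Return (first, second, third, fourth) 4x4 basic blocks in processing order."""
    def pick(fx, fy):
        return [[block8[fy(y) - 1][fx(x) - 1] for x in range(1, 5)]
                for y in range(1, 5)]
    first = pick(lambda x: 2 * x, lambda y: 2 * y)            # p(2x, 2y)
    second = pick(lambda x: 2 * x - 1, lambda y: 2 * y - 1)   # p(2x-1, 2y-1)
    third = pick(lambda x: 2 * x, lambda y: 2 * y - 1)        # p(2x, 2y-1), assumed phase
    fourth = pick(lambda x: 2 * x - 1, lambda y: 2 * y)       # p(2x-1, 2y)
    return first, second, third, fourth
```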
  • FIG. 19 is a diagram showing a prediction target block and a reference pixel in the first block.
  • FIG. 20 is a diagram showing the relationship between the first block and the reference pixel in an easy-to-understand manner by rewriting FIG.
  • The prediction target pixels in the first block are the pixels a to p in the figure.
  • Pixels A to H, I to L, M, N to U, and V to Y can be used as reference pixels for the first block.
  • The white pixels in FIG. 19 cannot be used as reference pixels.
  • For the first block, every other pixel is taken as a reference pixel, as shown in the figure. This is based on the idea of using reference pixels that are closer to the prediction target pixel.
  • Intra prediction is then performed on the 4×4 pixel block in the nine modes used in the conventional H.264/AVC encoding scheme.
  • FIG. 21 is a diagram illustrating a prediction target block and a reference pixel in the second block.
  • FIG. 22 is a diagram illustrating reference pixels around the pixel p in the second block.
  • FIG. 23 is a conceptual diagram of each mode when a reference pixel is generated from surrounding pixels.
  • For the second block, the pixels of the first block can be used as reference pixels.
  • In FIG. 22, A, C, F, and H are pixels of the first block and can be used as prediction reference pixels.
  • The B, D, E, and G positions, which cannot be used as reference pixels, are predicted from the A, C, F, and H pixels of the first block before the pixel p is predicted, and the results are used as reference pixels.
  • Each prediction reference pixel is the average of the surrounding reference pixels, calculated by the following equations:
  • B = (A + C + 1) >> 1
  • D = (A + F + 1) >> 1
  • E = (C + H + 1) >> 1
  • G = (F + H + 1) >> 1
  • Here A, B, C, … denote both the pixels and their pixel values. “+” indicates addition of pixel values, “>>” indicates a right bit shift, so “>> 1” divides by two; the “+1” inside the parentheses provides rounding.
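The four interpolation formulas above can be written directly as integer arithmetic:

```python
# Second-block prediction reference pixels. A, C, F, H are the available
# first-block pixels; the >> 1 right shift divides by two and the +1 rounds.

def second_block_refs(A, C, F, H):
    B = (A + C + 1) >> 1  # between A and C
    D = (A + F + 1) >> 1  # between A and F
    E = (C + H + 1) >> 1  # between C and H
    G = (F + H + 1) >> 1  # between F and H
    return B, D, E, G
```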
  • the prediction target pixel is predicted using nine reference pixels around the prediction target pixel.
  • the prediction reference pixels around a to o are similarly generated according to the relative position to the target pixel.
  • the number of modes is set to nine.
  • The method of generating the prediction target pixels is the same for the third and fourth blocks.
  • FIG. 24 is a diagram illustrating a prediction target block and a reference pixel in the third block.
  • FIG. 25 is a diagram illustrating reference pixels around the pixel o in the third block.
  • FIG. 26 is a diagram illustrating reference pixels around the pixel p in the third block.
  • For the third block, the pixels of the first and second blocks can be used as reference pixels.
  • a prediction pixel generation method will be described with reference to FIGS. 25 and 26 by taking the vicinity of the prediction target pixels o and p as an example.
  • In FIG. 25, K, B, L, M, G, and N are pixels of the first block,
  • and I, J, D, and E are pixels of the second block.
  • The A, C, F, and H positions, which cannot be used as reference pixels, are generated as prediction reference pixels.
  • Each prediction reference pixel is an average of the surrounding reference pixels:
  • A = (I + K + B + D + 2) >> 2
  • C = (J + B + L + E + 2) >> 2
  • F = (2D + M + G + 2) >> 2
  • H = (2E + G + N + 2) >> 2
  • In the formula for F, symmetry is used: D is added twice and the sum is divided by four. The same applies to the formula for H.
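The third-block interpolation can likewise be sketched as code. A and C average four surrounding pixels; F and H count D (respectively E) twice by symmetry, so the sum is still divided by four (>> 2), with +2 for rounding.

```python
# Third-block prediction reference pixels around pixel o.
# I, J are second-block pixels; K, L, M, N, B, G are first-block pixels;
# D, E are second-block pixels (left/right of o).

def third_block_refs(I, J, K, L, M, N, B, D, E, G):
    A = (I + K + B + D + 2) >> 2       # average of four neighbours
    C = (J + B + L + E + 2) >> 2       # average of four neighbours
    F = (2 * D + M + G + 2) >> 2       # D doubled by symmetry
    H = (2 * E + G + N + 2) >> 2       # E doubled by symmetry
    return A, C, F, H
```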
  • For the periphery of the pixel p, the peripheral pixels C, E, and H that cannot be used as reference pixels are generated in the same way (A and F are generated as shown for the periphery of the pixel o).
  • A weight of 3 is assigned to the nearer pixel B and a weight of 1 to A, so that nearer pixels are reflected more strongly. The same applies to the formula for H.
  • FIG. 27 is a diagram illustrating prediction target blocks and reference pixels in the fourth block.
  • FIG. 28 is a diagram showing reference pixels around the pixel p in the fourth block.
  • the pixels corresponding to the first, second, and third blocks can be used as reference pixels.
  • the prediction pixel generation method will be described with reference to FIG. 28 by taking the vicinity of the prediction target pixel p as an example.
  • In FIG. 28, D and E are pixels of the first block,
  • B is a pixel of the second block,
  • and A and C are pixels of the third block.
  • The reference pixels are generated by the following equations:
  • F = (A + 3D + 2) >> 2
  • H = (C + 3E + 2) >> 2
  • G = (F + H + 1) >> 1
  • Reference pixels whose relative positions correspond to F, G, and H are generated in the same manner for the pixels other than p.
  • When the target block is located at the upper end of the picture, there is no upper MB, so the upper, upper-right, and upper-left reference pixels cannot be used.
  • When it is located at the left end, there is no left MB, so the left and upper-left reference pixels cannot be used.
  • In these cases, the same process as in H.264/AVC is performed: the only usable mode is mode 2 (DC), and the DC prediction value is the intermediate value (128 when the input pixels have 8 bits).
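The boundary fallback above can be sketched as follows. The source specifies only the mid-level fallback (128 for 8-bit pixels); the rounded average used when reference pixels do exist is an assumed stand-in for ordinary DC prediction.

```python
# DC (mode 2) prediction with the picture-boundary fallback described above:
# with no available reference pixels, predict the mid-level 1 << (bitdepth - 1).

def dc_prediction(refs, bitdepth=8):
    """refs: list of available reference pixel values (possibly empty)."""
    if not refs:
        return 1 << (bitdepth - 1)      # 128 for 8-bit input
    # Rounded average of available reference pixels (assumed rounding rule).
    return (sum(refs) + len(refs) // 2) // len(refs)
```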
  • FIG. 29 is a diagram illustrating a prediction target block and a reference pixel in the second block of the MB at the slice boundary.
  • FIG. 30 is a diagram illustrating reference pixels around the pixel a in the second block.
  • FIG. 31 is a diagram showing reference pixels around the pixel b in the second block.
  • FIG. 32 is a diagram showing reference pixels around the pixel e in the second block.
  • the reference pixel outside the thick frame in FIG. 29 cannot be used.
  • a method of predicting these reference pixels will be described with reference to FIGS. 30, 31, and 32 by taking the vicinity of the prediction target pixels a, b, and e as an example.
  • The reference pixels A to G around the prediction target pixel a are all given the same value as H.
  • The reference pixels A, B, and C around the prediction target pixel b are predicted as follows.
  • For D, E, and G, the average of the pixels at both ends may be taken, as described above.
  • A is obtained by copying a pixel value using symmetry; the same applies to C.
  • FIG. 33 is a diagram illustrating prediction target blocks and reference pixels in the third block of the MB at the slice boundary.
  • FIG. 34 is a diagram showing reference pixels around the pixel a in the third block.
  • FIG. 35 is a diagram showing reference pixels around the pixel d in the third block.
  • the reference pixel outside the thick frame in FIG. 33 cannot be used.
  • a method of predicting these reference pixels will be described with reference to FIGS. 34 and 35, taking the vicinity of the prediction target pixels a and d as an example.
  • the prediction method of the reference pixels A, B, and C around the prediction target pixel a is performed as follows.
  • the prediction method of the reference pixels A, B, C, E, and H around the prediction target pixel d is performed as follows.
  • For F, the average of the surrounding pixels may be taken, as described above.
  • a predicted image generation method in the fourth block of the MB at the slice boundary will be described with reference to FIGS.
  • FIG. 36 is a diagram illustrating a prediction target block and reference pixels in the fourth block of the MB at the slice boundary.
  • FIG. 37 is a diagram showing reference pixels around the pixel a in the fourth block.
  • FIG. 38 is a diagram illustrating reference pixels around the pixel d in the fourth block.
  • the reference pixel outside the thick frame in FIG. 36 cannot be used.
  • a method of predicting these reference pixels will be described with reference to FIGS. 37 and 38 by taking the vicinity of the prediction target pixels a and m as an example.
  • the prediction method of the reference pixels A, D, and F around the prediction target pixel a is performed as follows.
  • the prediction method of the reference pixels A and D around the prediction target pixel m is performed as follows.
  • F may be an average of surrounding pixels as described above.
  • 100 … Image encoding device
  • 101 … Thinned image creation unit
  • 102 … Thinned image buffer
  • 103 … First intra prediction processing unit
  • 104 … Prediction error calculation processing unit
  • 105 … Orthogonal transform processing unit
  • 106 … Quantization processing unit
  • 107 … Inverse quantization processing unit
  • 108 … Inverse orthogonal transform processing unit
  • 109 … Reconstructed image creation unit
  • 110 … Reconstructed image buffer
  • 111 … Second intra prediction processing unit
  • 112 … Entropy encoding processing unit
  • 151 … Input image
  • 152 … Bit stream
  • 201 … Encoding target macroblock
  • 211 … Upper neighboring pixels
  • 212 … Left neighboring pixels
  • 301 …
  • 1201 … Slice header information
  • 1202 … Slice header information
  • 1211 … Code information related to the first thinned image
  • 1212 … Code information related to the second thinned image
  • 1213 … Code information related to the third thinned image
  • 1214 … Code information related to the fourth thinned image
  • 1401 … Decoded image of the first thinned image
  • 1402 … Decoded image of the second thinned image
  • 1403 … Decoded image of the third thinned image
  • 1404 … Decoded image of the fourth thinned image
  • 1411 … Combined image
  • 1500 … Image decoding device
  • 1501 … Entropy decoding processing unit
  • 1502 … Thinned image buffer
  • 1503 … Combined image creation unit
  • 1551 … Decoded image
  • 1600 … Image encoding device
  • 1601 … First image encoding device
  • 1602 … Second image encoding device
  • 1603 … Stream buffer
  • 1604 … Stream buffer
  • 1605 … Output selection processing unit
  • 1606 … Encoding efficiency statistical processing unit
  • 1631 … Statistical data serving as indices of generated code amount and image quality in units of slices
  • 1632 … Statistical data serving as indices of generated code amount and image quality in units of slices
  • 1633 … Control information
  • 1651 … Input image
  • 1652 … Bit stream
  • 1700 … Image decoding device
  • 1701 … Entropy decoding processing unit
  • 1702 … Output selection processing unit
  • 1703 … Delay buffer
  • 1731 … Determination result information
  • 1751 … Decoded image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

In intra-picture prediction (intra prediction) in an image encoding device based on the H.264/AVC standard, N thinned images (N being a natural number of 2 or more) are created, and the first through N-th thinned images are processed in order. When creating the predicted pixels for the intra prediction of the pixels of the x-th thinned image (x = 2, …, N), pixels of the first through (x−1)-th thinned images are used as reference pixels. In particular, when pixels of the first through (x−1)-th thinned images lie close to a prediction target pixel, the predicted pixel of the x-th thinned image is created using those nearby pixels as reference pixels. The invention thereby improves prediction accuracy and coding efficiency in intra-picture prediction (intra prediction).
PCT/JP2011/056044 2010-07-16 2011-03-15 Dispositif de codage d'image Ceased WO2012008180A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010161899A JP5578974B2 (ja) 2010-07-16 2010-07-16 画像符号化装置
JP2010-161899 2010-07-16

Publications (1)

Publication Number Publication Date
WO2012008180A1 true WO2012008180A1 (fr) 2012-01-19

Family

ID=45469194

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/056044 Ceased WO2012008180A1 (fr) 2010-07-16 2011-03-15 Dispositif de codage d'image

Country Status (2)

Country Link
JP (1) JP5578974B2 (fr)
WO (1) WO2012008180A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62269488A (ja) * 1986-05-16 1987-11-21 Mitsubishi Electric Corp 画像符号化伝送装置
JPH0393377A (ja) * 1989-09-06 1991-04-18 Hitachi Ltd 高能率符号化装置
JP2007096672A (ja) * 2005-09-28 2007-04-12 Hitachi Kokusai Electric Inc 画像処理装置

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04219074A (ja) * 1990-08-31 1992-08-10 Toshiba Corp 画像符号化装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62269488A (ja) * 1986-05-16 1987-11-21 Mitsubishi Electric Corp 画像符号化伝送装置
JPH0393377A (ja) * 1989-09-06 1991-04-18 Hitachi Ltd 高能率符号化装置
JP2007096672A (ja) * 2005-09-28 2007-04-12 Hitachi Kokusai Electric Inc 画像処理装置

Also Published As

Publication number Publication date
JP5578974B2 (ja) 2014-08-27
JP2012023671A (ja) 2012-02-02

Similar Documents

Publication Publication Date Title
KR101774392B1 (ko) 화면 내 예측 방법 및 이러한 방법을 사용하는 장치
RU2603543C2 (ru) Способ и устройство для кодирования видео и способ и устройство для декодирования видео
JP6005087B2 (ja) 画像復号装置、画像復号方法、画像符号化装置、画像符号化方法及び符号化データのデータ構造
JP6807987B2 (ja) 画像符号化装置、動画像復号装置、動画像符号化データ及び記録媒体
KR102187246B1 (ko) 영상 정보 부호화 방법 및 복호화 방법
US20100118945A1 (en) Method and apparatus for video encoding and decoding
KR101614828B1 (ko) 화상 부호화 및 복호 방법, 장치, 프로그램
JP2020510374A (ja) 映像コーディングシステムにおける変換方法及びその装置
JP2017118571A (ja) 符号化データ
KR20170108367A (ko) 인트라 예측 기반의 비디오 신호 처리 방법 및 장치
MX2015003512A (es) Dispositivo de codificacion de prediccion de video, metodo de codificacion de prediccion de video, dispositivo de decodificacion de prediccion de video y metodo de decodificacion de prediccion de video.
WO2013001730A1 (fr) Appareil de codage d'images, appareil de décodage d'images, procédé de codage d'images et procédé de décodage d'images
KR20130037422A (ko) 두 개의 후보 인트라 예측 모드를 이용한 화면 내 예측 모드의 부/복호화 방법 및 이러한 방법을 사용하는 장치
KR20200139116A (ko) 영상 정보 부호화 방법 및 복호화 방법
WO2016194380A1 (fr) Dispositif de codage d'images animées, procédé de codage d'images animées et support d'enregistrement pour mémoriser un programme de codage d'images animées
KR20170122351A (ko) 화면 내 예측 방향성에 따른 적응적 부호화 순서를 사용하는 비디오 코딩 방법 및 장치
JP5578974B2 (ja) 画像符号化装置
KR102038818B1 (ko) 화면 내 예측 방법 및 이러한 방법을 사용하는 장치
JP2011049816A (ja) 動画像符号化装置、動画像復号装置、動画像符号化方法、動画像復号方法、およびプログラム
KR20110087871A (ko) 인트라 모드를 이용한 쿼터 픽셀 해상도를 갖는 영상 보간 방법 및 장치
KR20170125724A (ko) 인트라 예측 부호화 및 복호화 방법과 상기 방법을 수행하는 장치
KR20120008321A (ko) 서브샘플링을 이용한 적응적 스캐닝 및 확장된 템플릿 매칭 방법 및 장치
WO2011064932A1 (fr) Dispositifs de codage et de décodage d'image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11806513

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11806513

Country of ref document: EP

Kind code of ref document: A1