WO2018107777A1 - Method and system for annotating video images - Google Patents
Method and system for annotating video images
- Publication number
- WO2018107777A1 (PCT/CN2017/096481)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- canvas
- annotation
- video image
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
- H04N21/2353—Processing of additional data, e.g. scrambling of additional data or processing content descriptors specifically adapted to content descriptors, e.g. coding, compressing or processing of metadata
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8126—Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
- H04N21/8133—Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8455—Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
Definitions
- The present invention relates to the field of video image processing technologies, and in particular to a method and system for annotating video images.
- Video annotation refers to superimposing lines, text, and the like onto an image during video playback in order to express the meaning of the image content more clearly. For example, when analyzing a video, key characters and objects sometimes need to be circled for emphasis, or accompanied by text descriptions.
- When a video is annotated, each frame image generally needs to be extracted first. Since images extracted from video are often in YUV format, each YUV image must be converted into an RGB image, the annotation information superimposed on the RGB image, and the RGB image then converted back into a YUV image and encoded into a video.
- In this approach the video image undergoes multiple format conversions, and the lines or text must be drawn again for every video image whose RGB version is converted into YUV, resulting in high consumption of system CPU resources.
- A method for annotating video images comprises the following steps: creating a canvas, and determining a correspondence between pixels on the canvas and pixels of the video image to be annotated; receiving an annotation command, and drawing the annotation content at a specified position on the canvas according to the command and the correspondence; converting the canvas into a first annotation image in a first format, the first format being the image format of the video image to be annotated; and assigning the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated according to the correspondence, obtaining the annotated video image.
- An annotation system for video images comprises:
- a canvas creation unit configured to create a canvas and determine a correspondence between pixels on the canvas and pixels of the video image to be annotated;
- an annotation drawing unit configured to receive an annotation command and draw the annotation content at a specified position on the canvas according to the command and the correspondence;
- an image conversion unit configured to convert the canvas into a first annotation image in a first format, where the first format is the image format of the video image to be annotated; and
- a pixel assignment unit configured to assign the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated according to the correspondence, obtaining the annotated video image.
- According to the above scheme, a canvas is created whose pixels correspond to the pixels of the video image to be annotated; the annotation content is drawn at a specified position on the canvas according to the received annotation command and the pixel correspondence; the canvas is format-converted to obtain a first annotation image; and the pixel values of the annotation content on the first annotation image are assigned to the corresponding positions on the video image to be annotated according to the pixel correspondence, yielding the annotated video image and completing the annotation.
- Because the annotation content is drawn on the canvas, the operation of drawing it in every frame of the video is avoided, and the video image does not need to undergo multiple format conversions, which greatly reduces the system CPU resources consumed when annotating video images.
- FIG. 1 is a schematic flowchart of a method for annotating a video image according to an embodiment of the present invention;
- FIG. 2 is a schematic structural diagram of an annotation system for video images according to an embodiment;
- FIG. 3 is a schematic structural diagram of an annotation system for video images according to an embodiment;
- FIG. 4 is a schematic structural diagram of an annotation system for video images according to an embodiment.
- As shown in FIG. 1, which is a schematic flowchart of a method for annotating a video image according to the present invention, the method of this embodiment includes the following steps:
- Step S101: Create a canvas, and determine a correspondence between pixels on the canvas and pixels of the video image to be annotated;
- Step S102: Receive an annotation command, and draw the annotation content at a specified position on the canvas according to the command and the correspondence;
- The annotation command is a command for annotating a specified position in the video image. Since the pixels on the canvas correspond to the pixels of the video image to be annotated, the specified position on the canvas can be determined from the command and the correspondence;
- Step S103: Convert the canvas into a first annotation image in a first format, where the first format is the image format of the video image to be annotated;
- The canvas is converted into a first annotation image whose image format is the same as that of the video image to be annotated, which facilitates the subsequent annotation processing;
- Step S104: Assign the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated according to the correspondence, obtaining the annotated video image.
- Assigning the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated is equivalent to annotating those positions on the image;
- In this embodiment, a canvas is created whose pixels correspond to the pixels of the video image to be annotated; the annotation content is drawn at a specified position on the canvas according to the received annotation command and the pixel correspondence; the canvas is format-converted to obtain the first annotation image; and the pixel values of the annotation content on the first annotation image are assigned to the corresponding positions on the video image to be annotated according to the pixel correspondence, obtaining the annotated video image and completing the annotation of the video image.
- Because the annotation content is drawn on the canvas, the operation of drawing it in every frame of the video is avoided, and the video image does not need to undergo multiple format conversions, greatly reducing the system CPU resource consumption of video image annotation.
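As a concrete illustration, the scheme above can be sketched end to end with NumPy. The function names, the BT.601 full-range RGB-to-YUV coefficients, and the choice of 0 as the background pixel value are assumptions for the sketch, not details fixed by the patent.

```python
import numpy as np

# Assumed BT.601 full-range RGB -> YUV coefficients (not fixed by the patent).
RGB_TO_YUV = np.array([[ 0.299,  0.587,  0.114],
                       [-0.169, -0.331,  0.500],
                       [ 0.500, -0.419, -0.081]])

def rgb_to_yuv(rgb):
    yuv = rgb.astype(float) @ RGB_TO_YUV.T
    yuv[..., 1:] += 128.0                              # offset chroma into [0, 255]
    return yuv

def annotate_frame(yuv_frame, canvas_rgb, background=0):
    """S103: convert the canvas into the frame's (YUV) format.
    S104: copy only the annotated (non-background) pixels onto the frame."""
    canvas_yuv = rgb_to_yuv(canvas_rgb)
    mask = np.any(canvas_rgb != background, axis=-1)   # pixels of the annotation
    out = yuv_frame.copy()
    out[mask] = canvas_yuv[mask]
    return out

# S101: canvas the same size as the frame, so pixel (i, j) maps to (i, j)
frame = np.full((4, 4, 3), 100.0)                      # a flat YUV444 test frame
canvas = np.zeros((4, 4, 3), dtype=np.uint8)           # background value 0
canvas[1, 2] = (255, 255, 255)                         # S102: draw one white pixel
annotated = annotate_frame(frame, canvas)
```

Only the drawn pixel changes in the output; every background position keeps the original frame values, so the frame itself never goes through an RGB round trip.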
- In one embodiment, the step of converting the canvas into the first annotation image in the first format comprises the following steps: converting the canvas into a second annotation image in a second format, the second format being the image format directly associated with the canvas; and converting the second annotation image into the first annotation image, there being a conversion relationship between images of the first format and images of the second format.
- In general, the image format into which the canvas converts directly is not the image format of the video image. Therefore, after the canvas is converted into the second annotation image, the second annotation image must be converted into the first annotation image, whose format is the same as that of the video image, so that the video image to be annotated can be annotated conveniently.
- Although the canvas is converted twice, the only effective pixels on the canvas are those of the annotation content, so the conversion consumes far fewer system CPU resources than converting the pixels of a full video image.
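For example, if the canvas's native format is RGBA (typical of many drawing surfaces, though the patent does not specify it), the first conversion step simply produces the second-format (RGB) image. This is a hypothetical sketch; the function name is illustrative:

```python
import numpy as np

def canvas_to_second_format(canvas_rgba):
    """First conversion step: turn the canvas from its directly associated
    format (assumed RGBA here) into the second format (RGB) by dropping the
    alpha channel. The second step (RGB -> first/YUV format) would follow."""
    return canvas_rgba[..., :3].copy()

canvas = np.zeros((2, 3, 4), dtype=np.uint8)   # empty RGBA canvas, background 0
canvas[0, 1] = (255, 0, 0, 255)                # one red annotation pixel
second = canvas_to_second_format(canvas)
```

Because only the annotation pixels carry information, the cost of both conversion steps is dominated by the annotation, not by the frame size.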
- In one embodiment, the size of the canvas is the same as the size of the video image to be annotated.
- With the canvas and the video image the same size, the correspondence between pixels on the canvas and pixels of the video image to be annotated can be determined quickly, as can the specified position of the annotation content on the canvas.
- In one embodiment, the step of converting the canvas into the second annotation image in the second format comprises the following steps: capturing a rectangular image on the canvas, the rectangular image being the smallest rectangle covering the annotation content, and converting the rectangular image into the second annotation image.
- In general, the annotation does not occupy the entire video image, and the annotation content occupies only part of the canvas. Therefore, when obtaining the second annotation image, only the annotation content need be processed: the smallest rectangular image covering it is cropped from the canvas, and only that rectangular image is converted into the second annotation image, which reduces the amount of conversion computation and further reduces system CPU resource consumption.
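The smallest covering rectangle can be computed directly from the coordinates of the non-background pixels. A sketch assuming a background value of 0 (the function name and the half-open return convention are illustrative):

```python
import numpy as np

def min_annotation_rect(canvas, background=0):
    """Return (top, left, bottom, right) of the smallest rectangle covering
    all annotated (non-background) pixels, half-open bounds; None if empty."""
    mask = canvas != background
    if mask.ndim == 3:                          # collapse channels for color canvases
        mask = mask.any(axis=-1)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1

canvas = np.zeros((8, 8), dtype=np.uint8)
canvas[2:5, 3:6] = 255                          # a small drawn annotation
rect = min_annotation_rect(canvas)
crop = canvas[rect[0]:rect[2], rect[1]:rect[3]] # only this region is converted
```

Only `crop` then goes through the second-format conversion, so the conversion cost scales with the annotation, not with the frame.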
- In one embodiment, the step of creating the canvas further comprises the following step: setting a background pixel value for the canvas, the background pixel value being different from the pixel value of the annotation content.
- Setting a background pixel value different from the pixel value of the annotation content helps distinguish the background from the annotation, and also helps identify the annotation content during the subsequent assignment processing.
- In one embodiment, the step of assigning the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated comprises the following step: when the pixel value of the current pixel on the first annotation image is different from the background pixel value of the first annotation image, assigning the pixel value of the current pixel to the corresponding position on the video image to be annotated.
- Comparing against the background pixel value identifies the pixels of the annotation content; the determination is simple and improves the efficiency of the assignment operation.
- The comparison with the background pixel value of the first annotation image may be performed over all pixels of the first annotation image corresponding to the full canvas, or only over the pixels of the first annotation image corresponding to the smallest rectangular image covering the annotation content on the canvas.
- In one embodiment, the step of assigning the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated further comprises the following step: when the pixel value of the current pixel on the first annotation image is the same as the background pixel value, leaving the corresponding position on the video image to be annotated unprocessed.
- In this way the background pixels on the first annotation image are excluded quickly, improving the efficiency of the assignment operation.
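The per-pixel check described above can be written as a plain loop. This is a minimal sketch over single-channel images; the names are illustrative, not from the patent:

```python
def assign_annotation(frame, annotation_img, background):
    """Walk the first annotation image; wherever a pixel differs from the
    background value, assign it to the same position in the frame. Pixels
    equal to the background are skipped without further processing."""
    for y in range(len(annotation_img)):
        for x in range(len(annotation_img[0])):
            if annotation_img[y][x] != background:   # annotation pixel
                frame[y][x] = annotation_img[y][x]
            # else: background pixel -> leave the frame untouched
    return frame

frame = [[10, 10, 10], [10, 10, 10]]
annotation = [[0, 0, 99], [0, 99, 0]]   # 0 is the background value
result = assign_annotation(frame, annotation, background=0)
```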
- In one embodiment, the method for annotating video images further comprises the following steps: before annotation, acquiring the video to be annotated and decoding it to obtain the video images to be annotated; and, after the step of obtaining the annotated video images, encoding the annotated video images into an annotated video.
- Decoding the video to be annotated yields the video images to be annotated, and encoding the annotated video images into an annotated video completes the annotation of the video; the whole process is coherent and orderly, which improves the efficiency of video annotation. Note that a decoded video contains multiple frames of video images.
- When the annotation content in multiple consecutive frames over a period of time is the same, the annotation does not need to be drawn again for each of those frames, because the canvas and the video images are independent objects linked only by the correspondence; avoiding repeated drawing of the annotation content further reduces system CPU resource consumption.
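Reuse across consecutive frames can be sketched as follows: the annotation image is converted once, outside the loop, and each frame then needs only a masked copy. The decode/encode stages are omitted; in practice a codec library would supply the YUV frames. All names and values here are illustrative:

```python
import numpy as np

def annotate_video(frames_yuv, annotation_yuv, mask):
    """Stamp one pre-converted annotation image onto every frame.
    No per-frame drawing and no per-frame format conversion is needed."""
    out = []
    for frame in frames_yuv:
        f = frame.copy()
        f[mask] = annotation_yuv[mask]   # copy only the annotated pixels
        out.append(f)
    return out

# three flat YUV444 test frames and one annotation image in the same format
frames = [np.full((4, 4, 3), float(v)) for v in (50, 60, 70)]
annotation = np.zeros((4, 4, 3))
annotation[0, 0] = (255.0, 128.0, 128.0)   # one white annotated pixel (YUV)
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True

annotated = annotate_video(frames, annotation, mask)
```

The per-frame cost is a single masked copy, which is why identical annotations across many frames add almost no CPU work.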
- In one embodiment, the first format is the YUV image format and the second format is the RGB image format.
- The content of the canvas is image data that can be converted directly into the RGB image format, while the video images obtained after decoding are generally in the YUV image format; the conversion between these two formats is simple and consumes few system CPU resources.
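A common concrete choice for these conversions is the BT.601 full-range transform; the coefficients below are one standard option, not mandated by the patent, and the matrix inverse gives the YUV-to-RGB direction:

```python
import numpy as np

# BT.601 full-range RGB -> YUV coefficients (one common choice)
RGB_TO_YUV = np.array([[ 0.299,  0.587,  0.114],
                       [-0.169, -0.331,  0.500],
                       [ 0.500, -0.419, -0.081]])
YUV_TO_RGB = np.linalg.inv(RGB_TO_YUV)      # inverse transform

def rgb_to_yuv(rgb):
    yuv = np.asarray(rgb, dtype=float) @ RGB_TO_YUV.T
    yuv[..., 1:] += 128.0                   # chroma offset
    return yuv

def yuv_to_rgb(yuv):
    yuv = np.asarray(yuv, dtype=float).copy()
    yuv[..., 1:] -= 128.0
    return yuv @ YUV_TO_RGB.T

pixel = np.array([[200.0, 30.0, 30.0]])     # a reddish RGB pixel
round_trip = yuv_to_rgb(rgb_to_yuv(pixel))
```

Both directions are a single matrix multiplication per pixel, which is consistent with the patent's claim that the RGB/YUV conversion itself is cheap.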
- Correspondingly, the present invention also provides an annotation system for video images; an embodiment of the system is described in detail below.
- As shown in FIG. 2, which is a schematic structural diagram of an annotation system for video images according to the present invention, the annotation system of this embodiment includes the following units:
- a canvas creation unit 210 configured to create a canvas and determine a correspondence between pixels on the canvas and pixels of the video image to be annotated;
- an annotation drawing unit 220 configured to receive an annotation command and draw the annotation content at a specified position on the canvas according to the command and the correspondence;
- an image conversion unit 230 configured to convert the canvas into a first annotation image in a first format, where the first format is the image format of the video image to be annotated; and
- a pixel assignment unit 240 configured to assign the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated according to the correspondence, obtaining the annotated video image.
- In one embodiment, the image conversion unit 230 converts the canvas into a second annotation image in a second format, the second format being the image format directly associated with the canvas, and then converts the second annotation image into the first annotation image, there being a conversion relationship between images of the first format and images of the second format.
- the size of the canvas is the same as the size of the video image to be labeled.
- In one embodiment, the image conversion unit 230 crops a rectangular image from the canvas, the rectangular image being the smallest rectangle covering the annotation content, and converts the rectangular image into the second annotation image.
- In one embodiment, the annotation system further includes a background setting unit 250 configured to set a background pixel value for the canvas, the background pixel value being different from the pixel value of the annotation content.
- In one embodiment, when the pixel value of the current pixel on the first annotation image is different from the background pixel value of the first annotation image, the pixel assignment unit 240 assigns the pixel value of the current pixel to the corresponding position on the video image to be annotated.
- When the pixel value of the current pixel on the first annotation image is the same as the background pixel value of the first annotation image, the pixel assignment unit 240 leaves the corresponding position on the video image to be annotated unprocessed.
- In one embodiment, the annotation system further includes a video decoding unit 260 and a video encoding unit 270;
- the video decoding unit 260 is configured to acquire the video to be annotated and decode it to obtain the video images to be annotated;
- the video encoding unit 270 is configured to encode the annotated video images into an annotated video.
- the first format is a YUV image format and the second format is an RGB image format.
- The annotation system of the present invention corresponds one-to-one with the annotation method of the present invention; the technical features and beneficial effects described in the embodiments of the annotation method apply equally to the embodiments of the annotation system.
- Ordinal terms such as "first" and "second" are used merely to distinguish the objects involved and are not intended to limit the objects themselves.
- In a specific embodiment, the annotation method may proceed as follows: the video to be annotated is decoded into YUV video images; the YUV video images are processed according to the steps described above; and the annotated YUV video images are encoded into a video, so that the annotation content is displayed on the video picture.
- This specific embodiment adopts the canvas-based annotation method, which avoids annotating each changed video image individually, does not require the YUV→RGB→YUV format conversion of the video images, and, when the annotation content does not change, does not require the canvas to be redrawn; the above method greatly reduces system CPU resource consumption.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Library & Information Science (AREA)
- Editing Of Facsimile Originals (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a method and system for annotating video images. The method comprises the following steps: creating a canvas, the pixels of the canvas corresponding to the pixels of a video image to be annotated; drawing annotation content at a designated position on the canvas according to a received annotation command and the pixel correspondence, and performing a format conversion on the canvas to obtain a first annotated image; and assigning the pixel values of the annotation content on the first annotated image to the corresponding positions on the video image to be annotated according to the pixel correspondence, obtaining an annotated video image and thereby annotating the video image. According to the present solution, the annotation content is drawn using the canvas, the operation of drawing the annotation content in each video frame is avoided, and repeated format conversion of the video images is not required, so that system CPU resource consumption is greatly reduced when video images are annotated.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201611161505.1 | 2016-12-15 | ||
| CN201611161505.1A CN106791937B (zh) | 2016-12-15 | 2016-12-15 | 视频图像的标注方法和系统 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018107777A1 (fr) | 2018-06-21 |
Family
ID=58891413
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/096481 Ceased WO2018107777A1 (fr) | 2016-12-15 | 2017-08-08 | Procédé et système d'annotation d'image de vidéo |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN106791937B (fr) |
| WO (1) | WO2018107777A1 (fr) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109360253A (zh) * | 2018-09-28 | 2019-02-19 | 共享智能铸造产业创新中心有限公司 | 一种大像素bmp格式图像的绘制方法 |
| CN110851630A (zh) * | 2019-10-14 | 2020-02-28 | 武汉市慧润天成信息科技有限公司 | 一种深度学习标注样本的管理系统及方法 |
| CN110991296A (zh) * | 2019-11-26 | 2020-04-10 | 腾讯科技(深圳)有限公司 | 视频标注方法、装置、电子设备及计算机可读存储介质 |
| CN111191708A (zh) * | 2019-12-25 | 2020-05-22 | 浙江省北大信息技术高等研究院 | 自动化样本关键点标注方法、装置及系统 |
| CN111489283A (zh) * | 2019-01-25 | 2020-08-04 | 鸿富锦精密工业(武汉)有限公司 | 图片格式转换方法、装置及计算机存储介质 |
| CN112346807A (zh) * | 2020-11-06 | 2021-02-09 | 广州小鹏自动驾驶科技有限公司 | 一种图像标注方法和装置 |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106791937B (zh) * | 2016-12-15 | 2020-08-11 | 广东威创视讯科技股份有限公司 | 视频图像的标注方法和系统 |
| CN107333087B (zh) * | 2017-06-27 | 2020-05-08 | 京东方科技集团股份有限公司 | 一种基于视频会话的信息共享方法和装置 |
| CN107995538B (zh) * | 2017-12-18 | 2020-02-28 | 威创集团股份有限公司 | 视频批注方法及系统 |
| CN110706228B (zh) * | 2019-10-16 | 2022-08-05 | 京东方科技集团股份有限公司 | 图像的标记方法和系统、及存储介质 |
| CN113014960B (zh) * | 2019-12-19 | 2023-04-11 | 腾讯科技(深圳)有限公司 | 一种在线制作视频的方法、装置及存储介质 |
| CN117915022A (zh) * | 2022-10-11 | 2024-04-19 | 中兴通讯股份有限公司 | 图像处理方法、装置和终端 |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CA2716434A1 (fr) * | 2010-03-01 | 2011-09-01 | Dundas Data Visualization, Inc. | Systemes et procedes pour determiner le positionnement et la mise a l'echelle d'elements graphiques |
| CN105872679A (zh) * | 2015-12-31 | 2016-08-17 | 乐视网信息技术(北京)股份有限公司 | 弹幕显示方法和装置 |
| CN106162301A (zh) * | 2015-04-14 | 2016-11-23 | 北京奔流网络信息技术有限公司 | 一种信息推送方法 |
| CN106791937A (zh) * | 2016-12-15 | 2017-05-31 | 广东威创视讯科技股份有限公司 | 视频图像的标注方法和系统 |
Family Cites Families (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8564590B2 (en) * | 2007-06-29 | 2013-10-22 | Microsoft Corporation | Imparting three-dimensional characteristics in a two-dimensional space |
| CN101499172A (zh) * | 2009-03-06 | 2009-08-05 | 深圳华为通信技术有限公司 | 控件绘制方法及装置 |
| CN102419743A (zh) * | 2011-07-06 | 2012-04-18 | 北京汇冠新技术股份有限公司 | 一种批注方法及系统 |
| CN102968809B (zh) * | 2012-12-07 | 2015-12-09 | 成都理想境界科技有限公司 | 在增强现实领域实现虚拟信息标注及绘制标注线的方法 |
| JP2015150865A (ja) * | 2014-02-19 | 2015-08-24 | セイコーエプソン株式会社 | 印刷装置およびその印刷制御方法 |
| CN109388329A (zh) * | 2015-12-16 | 2019-02-26 | 广州视睿电子科技有限公司 | 远程批注同步的方法与系统 |
- 2016-12-15: CN CN201611161505.1A patent/CN106791937B/zh active Active
- 2017-08-08: WO PCT/CN2017/096481 patent/WO2018107777A1/fr not_active Ceased
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CA2716434A1 (fr) * | 2010-03-01 | 2011-09-01 | Dundas Data Visualization, Inc. | Systemes et procedes pour determiner le positionnement et la mise a l'echelle d'elements graphiques |
| CN106162301A (zh) * | 2015-04-14 | 2016-11-23 | 北京奔流网络信息技术有限公司 | 一种信息推送方法 |
| CN105872679A (zh) * | 2015-12-31 | 2016-08-17 | 乐视网信息技术(北京)股份有限公司 | 弹幕显示方法和装置 |
| CN106791937A (zh) * | 2016-12-15 | 2017-05-31 | 广东威创视讯科技股份有限公司 | 视频图像的标注方法和系统 |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109360253A (zh) * | 2018-09-28 | 2019-02-19 | 共享智能铸造产业创新中心有限公司 | 一种大像素bmp格式图像的绘制方法 |
| CN109360253B (zh) * | 2018-09-28 | 2023-08-11 | 共享智能装备有限公司 | 一种大像素bmp格式图像的绘制方法 |
| CN111489283A (zh) * | 2019-01-25 | 2020-08-04 | 鸿富锦精密工业(武汉)有限公司 | 图片格式转换方法、装置及计算机存储介质 |
| CN111489283B (zh) * | 2019-01-25 | 2023-08-11 | 鸿富锦精密工业(武汉)有限公司 | 图片格式转换方法、装置及计算机存储介质 |
| CN110851630A (zh) * | 2019-10-14 | 2020-02-28 | 武汉市慧润天成信息科技有限公司 | 一种深度学习标注样本的管理系统及方法 |
| CN110991296A (zh) * | 2019-11-26 | 2020-04-10 | 腾讯科技(深圳)有限公司 | 视频标注方法、装置、电子设备及计算机可读存储介质 |
| CN110991296B (zh) * | 2019-11-26 | 2023-04-07 | 腾讯科技(深圳)有限公司 | 视频标注方法、装置、电子设备及计算机可读存储介质 |
| CN111191708A (zh) * | 2019-12-25 | 2020-05-22 | 浙江省北大信息技术高等研究院 | 自动化样本关键点标注方法、装置及系统 |
| CN112346807A (zh) * | 2020-11-06 | 2021-02-09 | 广州小鹏自动驾驶科技有限公司 | 一种图像标注方法和装置 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN106791937A (zh) | 2017-05-31 |
| CN106791937B (zh) | 2020-08-11 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN106791937B (zh) | 视频图像的标注方法和系统 | |
| CN107622504B (zh) | 用于处理图片的方法和装置 | |
| CN103269416A (zh) | 采用并行处理方式实现视频图像拼接显示的装置及方法 | |
| CN110263301B (zh) | 用于确定文字的颜色的方法和装置 | |
| CN110662080B (zh) | 面向机器的通用编码方法 | |
| CN111862110A (zh) | 一种绿幕抠像方法、系统、设备和可读存储介质 | |
| US20180005387A1 (en) | Detection and location of active display regions in videos with static borders | |
| CN102930537A (zh) | 一种图像检测方法及系统 | |
| US20150030312A1 (en) | Method of sparse representation of contents of high-resolution video images supporting content editing and propagation | |
| CN110782387A (zh) | 图像处理方法、装置、图像处理器及电子设备 | |
| WO2020108010A1 (fr) | Procédé et appareil de traitement vidéo, dispositif électronique et support d'enregistrement | |
| CN103186780A (zh) | 视频字幕识别方法及装置 | |
| WO2020108060A1 (fr) | Procédé et appareil de traitement vidéo, dispositif électronique et support de stockage | |
| US10290110B2 (en) | Video overlay modification for enhanced readability | |
| CN110189384A (zh) | 基于Unity3D的图像压缩方法、装置、计算机设备和存储介质 | |
| WO2020034981A1 (fr) | Procédé permettant de générer des informations codées et procédé permettant de reconnaître des informations codées | |
| US8798391B2 (en) | Method for pre-processing an image in facial recognition system | |
| CN107113464A (zh) | 内容提供装置、显示装置及其控制方法 | |
| CN105430299B (zh) | 拼接墙信号源标注方法和系统 | |
| US20200294246A1 (en) | Selectively identifying data based on motion data from a digital video to provide as input to an image processing model | |
| US20160343148A1 (en) | Methods and systems for identifying background in video data using geometric primitives | |
| CN106611406A (zh) | 图像校正方法和图像校正设备 | |
| CN105451008A (zh) | 图像处理系统及色彩饱和度补偿方法 | |
| CN110570441B (zh) | 一种超高清低延时视频控制方法及系统 | |
| CN103730097B (zh) | 超高分辨率图像的显示方法与系统 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17882163; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28/10/2019) |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17882163; Country of ref document: EP; Kind code of ref document: A1 |