
WO2018107777A1 - Method and system for annotating video image - Google Patents

Method and system for annotating video image

Info

Publication number
WO2018107777A1
WO2018107777A1 (PCT/CN2017/096481)
Authority
WO
WIPO (PCT)
Prior art keywords
image
canvas
annotation
video image
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/096481
Other languages
French (fr)
Chinese (zh)
Inventor
董友球
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vtron Group Co Ltd
Original Assignee
Vtron Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vtron Group Co Ltd filed Critical Vtron Group Co Ltd
Publication of WO2018107777A1 publication Critical patent/WO2018107777A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2353Processing of additional data, e.g. scrambling of additional data or processing content descriptors specifically adapted to content descriptors, e.g. coding, compressing or processing of metadata
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8126Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
    • H04N21/8133Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream

Definitions

  • The present invention relates to the field of video image processing technologies, and in particular, to a method and a system for annotating video images.
  • Video annotation refers to superimposing lines, text, and the like on an image during video playback in order to express the meaning of the image content more clearly. For example, when analyzing a video, key persons and objects in the video may need to be circled for emphasis, and text descriptions may even be attached.
  • In the conventional technology, when a video is annotated, each frame image generally has to be extracted first. Since the image format extracted from a video is usually YUV, the YUV image must be converted into an RGB image, the annotation information is superimposed on the RGB image, the RGB image is then converted back into a YUV image, and the result is encoded into a video.
  • In this process the video images undergo multiple format conversions, and the annotation lines or text have to be drawn again for every video image whose RGB image is converted into a YUV image, resulting in high consumption of system CPU resources.
  • A method for annotating video images comprises the following steps: creating a canvas and determining a correspondence between pixels on the canvas and pixels of a video image to be annotated; receiving an annotation command and drawing annotation content at a specified position on the canvas according to the annotation command and the correspondence; converting the canvas into a first annotation image in a first format, the first format being the image format of the video image to be annotated; and assigning, according to the correspondence, the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated, to obtain an annotated video image.
  • An annotation system for video images comprises:
  • a canvas creation unit configured to create a canvas and determine a correspondence between pixels on the canvas and pixels of the video image to be annotated;
  • an annotation drawing unit configured to receive an annotation command and draw annotation content at a specified position on the canvas according to the annotation command and the correspondence;
  • an image conversion unit configured to convert the canvas into a first annotation image in a first format, the first format being the image format of the video image to be annotated;
  • a pixel assignment unit configured to assign, according to the correspondence, the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated, to obtain the annotated video image.
  • According to the above method and system, a canvas is created whose pixels correspond to the pixels of the video image to be annotated; annotation content is drawn at a specified position on the canvas according to the received annotation command and the pixel correspondence; the canvas is format-converted to obtain a first annotation image; and the pixel values of the annotation content on the first annotation image are assigned, according to the pixel correspondence, to the corresponding positions on the video image to be annotated, yielding the annotated video image and thereby completing the annotation of the video image.
  • Drawing the annotation content on the canvas avoids drawing it separately in every frame of the video image, and the video images do not need to undergo repeated format conversions, which greatly reduces the system CPU resources consumed when annotating video images.
  • FIG. 1 is a schematic flowchart of a method for annotating a video image according to one embodiment of the present invention;
  • FIG. 2 is a schematic structural diagram of an annotation system for video images according to one embodiment;
  • FIG. 3 is a schematic structural diagram of an annotation system for video images according to one embodiment;
  • FIG. 4 is a schematic structural diagram of an annotation system for video images according to one embodiment.
  • Referring to FIG. 1, which is a schematic flowchart of the method for annotating a video image according to the present invention, the method in this embodiment includes the following steps:
  • Step S101: Create a canvas, and determine a correspondence between pixels on the canvas and pixels of the video image to be annotated;
  • Step S102: Receive an annotation command, and draw annotation content at a specified position on the canvas according to the annotation command and the correspondence;
  • The annotation command is a command for annotating a specified position in the video image to be annotated; since the pixels on the canvas correspond to the pixels of the video image to be annotated, the specified position on the canvas can be determined from the annotation command and the correspondence;
  • Step S103: Convert the canvas into a first annotation image in a first format, the first format being the image format of the video image to be annotated;
  • The canvas is converted into the first annotation image, whose image format is the same as that of the video image to be annotated, which facilitates the subsequent annotation processing;
  • Step S104: Assign, according to the correspondence, the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated, to obtain an annotated video image.
  • Assigning the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated is equivalent to annotating those positions on the video image;
  • In this embodiment, a canvas is created whose pixels correspond to the pixels of the video image to be annotated; annotation content is drawn at a specified position on the canvas according to the received annotation command and the pixel correspondence; the canvas is format-converted to obtain the first annotation image; and the pixel values of the annotation content on the first annotation image are assigned, according to the pixel correspondence, to the corresponding positions on the video image to be annotated, yielding the annotated video image and completing the annotation of the video image.
  • Drawing the annotation content on the canvas avoids drawing it in every frame of the video image and removes the need for repeated format conversions of the video images, greatly reducing the system CPU resources consumed during annotation.
  • In one embodiment, the step of converting the canvas into the first annotation image in the first format comprises: converting the canvas into a second annotation image in a second format, the second format being the image format directly associated with the canvas; and converting the second annotation image into the first annotation image, the image in the first format and the image in the second format having a conversion relationship.
  • Because the canvas and the video image have different properties, the image format obtained by converting the canvas directly is not the image format of the video image; therefore, after the canvas is converted into the second annotation image, the second annotation image must be converted into the first annotation image, whose format matches that of the video image, so that the video image to be annotated can be annotated conveniently.
  • Although the canvas is converted twice, the effective pixels on the canvas are only the pixels of the annotation content, so compared with converting all the pixels of a video image, the system CPU resource consumption is low.
  • In one embodiment, the size of the canvas is the same as the size of the video image to be annotated.
  • Making the canvas the same size as the video image to be annotated makes it easy to determine the correspondence between pixels on the canvas and pixels of the video image quickly, and also makes it easy to determine the specified position of the annotation content on the canvas quickly.
  • In one embodiment, the step of converting the canvas into the second annotation image in the second format comprises:
  • capturing a rectangular image on the canvas, the rectangular image being the smallest rectangle covering the annotation content, and converting the rectangular image into the second annotation image.
  • In general the annotation content does not fill the entire video image, and on the canvas it occupies only part of the canvas; therefore, when obtaining the second annotation image, only the annotation content needs to be processed: a rectangular image is cropped from the canvas, the rectangle being the smallest rectangle covering the annotation content, and only this rectangle is converted into the second annotation image, which reduces the amount of conversion computation and further reduces the system CPU resource consumption.
  • In one embodiment, after the step of creating the canvas, the method further comprises: setting a background pixel value of the canvas, the background pixel value being different from the pixel values of the annotation content.
  • Setting a background pixel value different from the pixel values of the annotation content helps distinguish the background from the annotation content and also makes it easy to identify the annotation content during the subsequent assignment processing.
  • In one embodiment, the step of assigning the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated comprises the following step:
  • when the pixel value of the current pixel on the first annotation image differs from the background pixel value of the first annotation image, the pixel value of the current pixel is assigned to the corresponding position on the video image to be annotated.
  • Identifying the pixels of the annotation content in this way is a simple test and improves the efficiency of the assignment operation.
  • Optionally, the pixel values of all the pixels of the first annotation image corresponding to the whole canvas may be compared with the background pixel value of the first annotation image, or only the pixels of the first annotation image corresponding to the smallest rectangle covering the annotation content on the canvas may be compared with the background pixel value of the first annotation image.
  • In one embodiment, the step of assigning the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated comprises the following step: when the pixel value of the current pixel on the first annotation image is the same as the background pixel value of the first annotation image, the corresponding position on the video image to be annotated is left unprocessed.
  • Because such pixels can be excluded immediately, the background pixels on the first annotation image are skipped quickly, which improves the efficiency of the assignment operation.
  • In one embodiment, the method for annotating video images further comprises: acquiring a video to be annotated, and decoding the video to be annotated to obtain the video images to be annotated.
  • After the step of obtaining the annotated video image, the method further comprises the following step:
  • encoding the annotated video images into an annotated video.
  • In this embodiment, the video to be annotated is decoded to obtain the video images to be annotated; after the annotation of the video images is completed, the annotated video images are encoded into an annotated video, completing the annotation of the video. The whole process is coherent and orderly and improves the efficiency of video annotation.
  • Moreover, a decoded video contains many frames, and the annotation content of consecutive frames within a given period is the same; because the canvas and the video images are independent objects linked by the correspondence, the annotation content does not have to be drawn repeatedly when annotating consecutive frames, which further reduces the system CPU resource consumption.
  • In one embodiment, the first format is a YUV image format and the second format is an RGB image format.
  • The content of the canvas is drawn image data, which can generally be converted directly into the RGB image format, while the video images obtained after decoding are generally in the YUV image format; the conversion between these two image formats is simple, so the system CPU resource consumption is small.
  • Based on the above annotation method, the present invention also provides an annotation system for video images.
  • An embodiment of the annotation system for video images of the present invention is described in detail below.
  • Referring to FIG. 2, which is a schematic structural diagram of the annotation system for video images according to the present invention, the system in this embodiment includes the following units:
  • a canvas creation unit 210 configured to create a canvas and determine a correspondence between pixels on the canvas and pixels of the video image to be annotated;
  • an annotation drawing unit 220 configured to receive an annotation command and draw annotation content at a specified position on the canvas according to the annotation command and the correspondence;
  • an image conversion unit 230 configured to convert the canvas into a first annotation image in a first format, the first format being the image format of the video image to be annotated;
  • a pixel assignment unit 240 configured to assign, according to the correspondence, the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated, to obtain the annotated video image.
  • In one embodiment, the image conversion unit 230 converts the canvas into a second annotation image in a second format, the second format being the image format directly associated with the canvas, and then converts the second annotation image into the first annotation image, the image in the first format and the image in the second format having a conversion relationship.
  • In one embodiment, the size of the canvas is the same as the size of the video image to be annotated.
  • In one embodiment, the image conversion unit 230 crops a rectangular image from the canvas, the rectangular image being the smallest rectangle covering the annotation content, and converts the rectangular image into the second annotation image.
  • In one embodiment, the annotation system further includes a background setting unit 250 configured to set a background pixel value of the canvas, the background pixel value being different from the pixel values of the annotation content.
  • In one embodiment, when the pixel value of the current pixel on the first annotation image differs from the background pixel value of the first annotation image, the pixel assignment unit 240 assigns the pixel value of the current pixel to the corresponding position on the video image to be annotated.
  • In one embodiment, when the pixel value of the current pixel on the first annotation image is the same as the background pixel value of the first annotation image, the pixel assignment unit 240 leaves the corresponding position on the video image to be annotated unprocessed.
  • In one embodiment, the annotation system further includes a video decoding unit 260 and a video encoding unit 270;
  • the video decoding unit 260 is configured to acquire a video to be annotated and decode it to obtain the video images to be annotated;
  • the video encoding unit 270 is configured to encode the annotated video images into an annotated video.
  • In one embodiment, the first format is a YUV image format and the second format is an RGB image format.
  • The annotation system for video images of the present invention corresponds one-to-one with the annotation method for video images of the present invention, so the technical features and beneficial effects described in the embodiments of the annotation method also apply to the embodiments of the annotation system.
  • Ordinal terms such as "first" and "second" are used only to distinguish the objects concerned and are not intended to limit the objects themselves.
  • In a specific embodiment, the method for annotating video images may proceed as follows: a video is acquired and decoded into YUV video images; an annotation canvas is created, a background color is set for it, and the correspondence between pixels on the canvas and pixels of the YUV video images is determined; annotation commands are received and the annotations are drawn on the canvas, after which any point on the canvas that is not the background color is an annotated point; an annotation RGB image is extracted from the canvas and converted into an annotation YUV image.
  • Each YUV video image is then processed as follows: for every pixel of the annotation YUV image, if its pixel value equals the background pixel value it is skipped; otherwise its pixel value is assigned to the corresponding pixel of the YUV video image.
  • The annotated YUV video images are encoded into a video, on whose picture the annotation content can then be displayed.
  • This specific embodiment uses an annotation canvas, which avoids annotating each changing video image directly and eliminates the YUV→RGB→YUV format conversion of the video images; when the annotation content does not change, the annotation canvas does not need to be redrawn either. Through the above approach, the system CPU resource consumption is greatly reduced.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Library & Information Science (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method and system for annotating a video image. The method comprises: creating a canvas, the pixels of the canvas corresponding to the pixels of a video image to be annotated; drawing annotation content at a specified position on the canvas according to a received annotation command and the pixel correspondence, and performing format conversion on the canvas to obtain a first annotation image; and assigning the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated according to the pixel correspondence, to obtain an annotated video image and thereby complete the annotation of the video image. In this solution, the annotation content is drawn on a canvas, so the operation of drawing the annotation content in each video image is avoided and repeated format conversions of the video images are not needed, which greatly reduces the system CPU resource consumption when annotating video images.

Description

Video image annotation method and system

Technical Field

The present invention relates to the field of video image processing technologies, and in particular, to a method and a system for annotating video images.

Background

Video annotation refers to superimposing lines, text, and the like on an image during video playback in order to express the meaning of the image content more clearly. For example, when analyzing a video, key persons and objects in the video may need to be circled for emphasis, and text descriptions may even be attached.

In the conventional technology, when a video is annotated, each frame image generally has to be extracted first. Since the image format extracted from a video is usually YUV, the YUV image must be converted into an RGB image, the annotation information is superimposed on the RGB image, the RGB image is then converted back into a YUV image, and the result is encoded into a video.

In this process the video images undergo multiple format conversions, and the annotation lines or text have to be drawn again for every video image whose RGB image is converted into a YUV image, resulting in high consumption of system CPU resources.

Summary of the Invention

Based on this, it is necessary to provide a method and a system for annotating video images that address the high system CPU resource consumption of the conventional approach to video annotation.

A method for annotating video images comprises the following steps:

creating a canvas, and determining a correspondence between pixels on the canvas and pixels of a video image to be annotated;

receiving an annotation command, and drawing annotation content at a specified position on the canvas according to the annotation command and the correspondence;

converting the canvas into a first annotation image in a first format, the first format being the image format of the video image to be annotated; and

assigning, according to the correspondence, the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated, to obtain an annotated video image.

An annotation system for video images comprises:

a canvas creation unit configured to create a canvas and determine a correspondence between pixels on the canvas and pixels of a video image to be annotated;

an annotation drawing unit configured to receive an annotation command and draw annotation content at a specified position on the canvas according to the annotation command and the correspondence;

an image conversion unit configured to convert the canvas into a first annotation image in a first format, the first format being the image format of the video image to be annotated; and

a pixel assignment unit configured to assign, according to the correspondence, the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated, to obtain the annotated video image.

According to the above annotation method and system, a canvas is created whose pixels correspond to the pixels of the video image to be annotated; annotation content is drawn at a specified position on the canvas according to the received annotation command and the pixel correspondence; the canvas is format-converted to obtain a first annotation image; and the pixel values of the annotation content on the first annotation image are assigned, according to the pixel correspondence, to the corresponding positions on the video image to be annotated to obtain an annotated video image, thereby completing the annotation of the video image. In this solution, drawing the annotation content on the canvas avoids drawing it in every frame of the video image and removes the need for repeated format conversions of the video images, which greatly reduces the system CPU resources consumed when annotating video images.
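The flow above can be illustrated with a short sketch. The snippet below is only an illustrative example, not the patented implementation: it assumes Pillow and NumPy are available, and the canvas size, background color, shapes and function names are invented for the example. It creates a canvas the same size as the video frame (so the pixel correspondence is the identity), fills it with a background value that the annotations never use, and draws the annotation content once.

    # Illustrative sketch only (Pillow/NumPy), not the patented implementation.
    import numpy as np
    from PIL import Image, ImageDraw

    BACKGROUND_RGB = (255, 0, 255)  # a color the annotations never use (the "background pixel value")

    def create_annotation_canvas(width, height):
        """Steps S101/S102: create a canvas the same size as the video frame
        and draw the annotation content on it once."""
        canvas = Image.new("RGB", (width, height), BACKGROUND_RGB)
        draw = ImageDraw.Draw(canvas)
        # Example annotation: outline a region of interest and attach a short label.
        draw.rectangle((100, 80, 300, 200), outline=(255, 0, 0), width=3)
        draw.text((100, 210), "key object", fill=(255, 255, 0))
        return np.asarray(canvas)  # an RGB array, i.e. the image directly associated with the canvas

    canvas_rgb = create_annotation_canvas(1280, 720)
    print(canvas_rgb.shape)  # (720, 1280, 3): one canvas pixel per frame pixel

Because the canvas lives outside the video frames, this drawing happens only when the annotation changes, not once per frame.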

Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a method for annotating a video image according to one embodiment;

FIG. 2 is a schematic structural diagram of an annotation system for video images according to one embodiment;

FIG. 3 is a schematic structural diagram of an annotation system for video images according to one embodiment;

FIG. 4 is a schematic structural diagram of an annotation system for video images according to one embodiment.

Detailed Description

To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present invention and do not limit its scope of protection.

Referring to FIG. 1, which is a schematic flowchart of the method for annotating a video image according to the present invention, the method in this embodiment includes the following steps.

Step S101: Create a canvas, and determine a correspondence between pixels on the canvas and pixels of the video image to be annotated.

Step S102: Receive an annotation command, and draw annotation content at a specified position on the canvas according to the annotation command and the correspondence.

In this step, the annotation command is a command for annotating a specified position in the video image to be annotated. Since the pixels on the canvas correspond to the pixels of the video image to be annotated, the specified position on the canvas can be determined from the annotation command and the correspondence.

Step S103: Convert the canvas into a first annotation image in a first format, the first format being the image format of the video image to be annotated.

In this step, the canvas is converted into the first annotation image, whose image format is the same as that of the video image to be annotated, which facilitates the subsequent annotation processing.

Step S104: Assign, according to the correspondence, the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated, to obtain an annotated video image.

In this step, assigning the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated is equivalent to annotating those positions on the video image.

In this embodiment, a canvas is created whose pixels correspond to the pixels of the video image to be annotated; annotation content is drawn at a specified position on the canvas according to the received annotation command and the pixel correspondence; the canvas is format-converted to obtain the first annotation image; and the pixel values of the annotation content on the first annotation image are assigned, according to the pixel correspondence, to the corresponding positions on the video image to be annotated to obtain the annotated video image, thereby completing the annotation of the video image. In this solution, drawing the annotation content on the canvas avoids drawing it in every frame of the video image and removes the need for repeated format conversions of the video images, which greatly reduces the system CPU resources consumed when annotating video images.

In one embodiment, the step of converting the canvas into the first annotation image in the first format comprises the following steps:

converting the canvas into a second annotation image in a second format, the second format being the image format directly associated with the canvas; and

converting the second annotation image into the first annotation image, the image in the first format and the image in the second format having a conversion relationship.

In this embodiment, because the canvas and the video image have different properties, the image format obtained by converting the canvas directly is not the image format of the video image. Therefore, after the canvas is converted into the second annotation image, the second annotation image must be converted into the first annotation image, whose format matches that of the video image, so that the video image to be annotated can be annotated conveniently. Although the canvas is converted twice, the effective pixels on the canvas are only the pixels of the annotation content, so compared with converting all the pixels of a video image, the system CPU resource consumption is low.

In one embodiment, the size of the canvas is the same as the size of the video image to be annotated.

In this embodiment, making the canvas the same size as the video image to be annotated makes it easy to determine the correspondence between pixels on the canvas and pixels of the video image quickly, and also makes it easy to determine the specified position of the annotation content on the canvas quickly.

In one embodiment, the step of converting the canvas into the second annotation image in the second format comprises the following step:

capturing a rectangular image on the canvas, the rectangular image being the smallest rectangle covering the annotation content, and converting the rectangular image into the second annotation image.

In this embodiment, in general the annotation content does not fill the entire video image, and on the canvas it occupies only part of the canvas. Therefore, when obtaining the second annotation image, only the annotation content needs to be processed: a rectangular image is cropped from the canvas, the rectangle being the smallest rectangle covering the annotation content, and only this rectangle is converted into the second annotation image. This reduces the amount of conversion computation and further reduces the system CPU resource consumption.
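A minimal sketch of this cropping step is shown below, assuming the NumPy canvas array and background color from the earlier sketch; the helper name and return convention are ours, not the patent's.

    import numpy as np

    def smallest_annotation_rect(canvas_rgb, background_rgb):
        """Return (top, bottom, left, right) of the smallest rectangle covering
        every non-background (annotated) pixel on the canvas, or None if empty."""
        mask = np.any(canvas_rgb != np.asarray(background_rgb), axis=-1)
        if not mask.any():
            return None  # nothing has been drawn yet
        rows = np.where(mask.any(axis=1))[0]
        cols = np.where(mask.any(axis=0))[0]
        return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1

    # Only this sub-image then needs the format conversion, not the whole canvas:
    # top, bottom, left, right = smallest_annotation_rect(canvas_rgb, (255, 0, 255))
    # rect_rgb = canvas_rgb[top:bottom, left:right]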

In one embodiment, after the step of creating the canvas, the method further comprises the following step:

setting a background pixel value of the canvas, the background pixel value being different from the pixel values of the annotation content.

In this embodiment, setting a background pixel value that differs from the pixel values of the annotation content helps distinguish the background from the annotation content and also makes it easy to identify the annotation content during the subsequent assignment processing.

In one embodiment, the step of assigning the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated comprises the following step:

when the pixel value of the current pixel on the first annotation image differs from the background pixel value of the first annotation image, assigning the pixel value of the current pixel to the corresponding position on the video image to be annotated.

In this embodiment, the pixels of the annotation content are identified by comparing the pixel value of each current pixel on the first annotation image with the background pixel value of the first annotation image. The test is simple and improves the efficiency of the assignment operation.

Optionally, the pixel values of all the pixels of the first annotation image corresponding to the whole canvas may be compared with the background pixel value of the first annotation image, or only the pixels of the first annotation image corresponding to the smallest rectangle covering the annotation content on the canvas may be compared with the background pixel value of the first annotation image.

In one embodiment, the step of assigning the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated comprises the following step:

when the pixel value of the current pixel on the first annotation image is the same as the background pixel value of the first annotation image, leaving the corresponding position on the video image to be annotated unprocessed.

In this embodiment, because pixels whose values equal the background pixel value of the first annotation image can be excluded immediately, the background pixels on the first annotation image are skipped quickly, which improves the efficiency of the assignment operation.
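A sketch of this assignment step is given below under a simplifying assumption: both the first annotation image and the decoded frame are held as H x W x 3 arrays (a packed YUV layout). With planar or chroma-subsampled layouts such as YUV420, the same comparison would be applied per plane with the appropriate indexing. The function and variable names are illustrative, not taken from the patent.

    import numpy as np

    def assign_annotation(frame_yuv, annotation_yuv, background_yuv):
        """Step S104: copy only the non-background pixels of the first annotation
        image into the frame; positions whose annotation pixel equals the
        background value are left untouched."""
        mask = np.any(annotation_yuv != np.asarray(background_yuv), axis=-1)
        out = frame_yuv.copy()
        out[mask] = annotation_yuv[mask]
        return out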

In one embodiment, the method for annotating video images further comprises the following step:

acquiring a video to be annotated, and decoding the video to be annotated to obtain the video images to be annotated.

After the step of obtaining the annotated video image, the method further comprises the following step:

encoding the annotated video images into an annotated video.

In this embodiment, the video to be annotated is decoded to obtain the video images to be annotated, and after the annotation of the video images is completed, the annotated video images are encoded into an annotated video, completing the annotation of the video. The whole process is coherent and orderly and improves the efficiency of video annotation. Moreover, a decoded video contains many frames, and the annotation content of consecutive frames within a given period is the same; because the canvas and the video images are independent objects linked by the correspondence, the annotation content does not have to be drawn repeatedly when annotating consecutive frames, which further reduces the system CPU resource consumption.

In one embodiment, the first format is a YUV image format and the second format is an RGB image format.

In this embodiment, the content of the canvas is drawn image data, which can generally be converted directly into the RGB image format, while the video images obtained after decoding are generally in the YUV image format. The conversion between these two image formats is simple, so the system CPU resource consumption is small.
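For illustration, the RGB-to-YUV step could look like the sketch below, which uses the common BT.601 full-range equations; the exact coefficients and range handling are an assumption here, since the document does not fix them.

    import numpy as np

    def rgb_to_yuv(rgb):
        """Convert an H x W x 3 uint8 RGB image to YUV (BT.601, full range)."""
        rgb = rgb.astype(np.float32)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b
        u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
        v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
        return np.clip(np.stack([y, u, v], axis=-1), 0, 255).astype(np.uint8)

In practice only the cropped rectangle covering the annotation content needs to pass through this conversion, which is what keeps the per-annotation cost small.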

Based on the above method for annotating video images, the present invention also provides an annotation system for video images. An embodiment of the annotation system for video images of the present invention is described in detail below.

Referring to FIG. 2, which is a schematic structural diagram of the annotation system for video images according to the present invention, the system in this embodiment includes the following units:

a canvas creation unit 210 configured to create a canvas and determine a correspondence between pixels on the canvas and pixels of the video image to be annotated;

an annotation drawing unit 220 configured to receive an annotation command and draw annotation content at a specified position on the canvas according to the annotation command and the correspondence;

an image conversion unit 230 configured to convert the canvas into a first annotation image in a first format, the first format being the image format of the video image to be annotated; and

a pixel assignment unit 240 configured to assign, according to the correspondence, the pixel values of the annotation content on the first annotation image to the corresponding positions on the video image to be annotated, to obtain the annotated video image.

In one embodiment, the image conversion unit 230 converts the canvas into a second annotation image in a second format, the second format being the image format directly associated with the canvas, and then converts the second annotation image into the first annotation image, the image in the first format and the image in the second format having a conversion relationship.

In one embodiment, the size of the canvas is the same as the size of the video image to be annotated.

In one embodiment, the image conversion unit 230 crops a rectangular image from the canvas, the rectangular image being the smallest rectangle covering the annotation content, and converts the rectangular image into the second annotation image.

In one embodiment, as shown in FIG. 3, the annotation system further includes a background setting unit 250, which is configured to set a background pixel value of the canvas, the background pixel value being different from the pixel values of the annotation content.

In one embodiment, when the pixel value of the current pixel on the first annotation image differs from the background pixel value of the first annotation image, the pixel assignment unit 240 assigns the pixel value of the current pixel to the corresponding position on the video image to be annotated.

In one embodiment, when the pixel value of the current pixel on the first annotation image is the same as the background pixel value of the first annotation image, the pixel assignment unit 240 leaves the corresponding position on the video image to be annotated unprocessed.

In one embodiment, as shown in FIG. 4, the annotation system further includes a video decoding unit 260 and a video encoding unit 270;

the video decoding unit 260 is configured to acquire a video to be annotated and decode it to obtain the video images to be annotated;

the video encoding unit 270 is configured to encode the annotated video images into an annotated video.

In one embodiment, the first format is a YUV image format and the second format is an RGB image format.

The annotation system for video images of the present invention corresponds one-to-one with the annotation method for video images of the present invention, so the technical features and beneficial effects described in the embodiments of the annotation method also apply to the embodiments of the annotation system.

In the present invention, ordinal terms such as "first" and "second" are used only to distinguish the objects concerned and are not intended to limit the objects themselves.

In a specific embodiment, the method for annotating video images may comprise the following steps:

acquiring a video and decoding it to obtain decoded YUV video images;

creating an annotation canvas, setting a background color for the canvas, and determining the correspondence between pixels on the canvas and pixels of the YUV video images to be annotated;

taking a Windows system as an example, creating the canvas first requires creating a memory device context and then creating a device-compatible bitmap and a drawing object based on that context; to distinguish the annotation lines from the background, a color that is never used for annotation is set as the background color;

receiving annotation commands and drawing the annotations on the canvas; after drawing, any point on the canvas that is not the background color is an annotated point;

extracting an annotation RGB image from the canvas;

converting the annotation RGB image into an annotation YUV image;

processing each YUV video image as follows:

determining whether the pixel value of each pixel in the annotation YUV image equals the background pixel value of the annotation YUV image; if it does, the corresponding pixel in the YUV video image is left unprocessed; if it does not, the pixel value of that pixel in the annotation YUV image is assigned to the corresponding pixel in the YUV video image;

encoding the annotated YUV video images into a video, on whose picture the annotation content can then be displayed.
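The concrete flow above can be sketched end to end as follows. This is an illustrative example only: it assumes OpenCV, NumPy and Pillow are available, works with packed YUV arrays rather than the planar YUV a real decoder typically produces, and the file names, codec and drawing parameters are made up. What it is meant to show is that the canvas is drawn and converted once, while each decoded frame only receives a cheap masked copy.

    # Illustrative end-to-end sketch (OpenCV/NumPy/Pillow), not the patented implementation.
    import cv2
    import numpy as np
    from PIL import Image, ImageDraw

    BACKGROUND = (255, 0, 255)  # background color never used by annotations

    def build_annotation_yuv(width, height):
        """Draw the annotation once on a canvas and convert it RGB -> YUV once."""
        canvas = Image.new("RGB", (width, height), BACKGROUND)
        draw = ImageDraw.Draw(canvas)
        draw.rectangle((100, 80, 300, 200), outline=(255, 0, 0), width=3)
        draw.text((100, 210), "key object", fill=(255, 255, 0))
        rgb = np.asarray(canvas)
        yuv = cv2.cvtColor(rgb, cv2.COLOR_RGB2YUV)
        bg_yuv = cv2.cvtColor(np.uint8([[BACKGROUND]]), cv2.COLOR_RGB2YUV)[0, 0]
        return yuv, np.any(yuv != bg_yuv, axis=-1)  # annotation image + non-background mask

    cap = cv2.VideoCapture("input.mp4")             # decode the video to be annotated
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    writer = cv2.VideoWriter("annotated.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    annotation_yuv, mask = build_annotation_yuv(w, h)
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        frame_yuv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV)    # stand-in for the decoder's YUV frame
        frame_yuv[mask] = annotation_yuv[mask]                    # per-frame work is only this masked copy
        writer.write(cv2.cvtColor(frame_yuv, cv2.COLOR_YUV2BGR))  # encode the annotated frame
    cap.release()
    writer.release()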

This specific embodiment uses an annotation canvas, which avoids annotating each changing video image directly and eliminates the YUV→RGB→YUV format conversion of the video images; furthermore, when the annotation content does not change, the annotation canvas does not need to be redrawn. Through the above method, the system CPU resource consumption is greatly reduced.

The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to fall within the scope of this specification.

The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (6)

1. A method for annotating video images, comprising the following steps:
creating a canvas, and determining a correspondence between pixels on the canvas and pixels of a video image to be annotated;
receiving an annotation command, and drawing annotation content at a specified position on the canvas according to the annotation command and the correspondence;
converting the canvas into a first annotation image in a first format, the first format being the image format of the video image to be annotated; and
assigning, according to the correspondence, pixel values of the annotation content on the first annotation image to corresponding positions on the video image to be annotated, to obtain an annotated video image.

2. The method for annotating video images according to claim 1, wherein the step of converting the canvas into the first annotation image in the first format comprises the following steps:
converting the canvas into a second annotation image in a second format, the second format being an image format directly associated with the canvas; and
converting the second annotation image into the first annotation image, the image in the first format and the image in the second format having a conversion relationship.

3. The method for annotating video images according to claim 1, wherein the size of the canvas is the same as the size of the video image to be annotated.

4. The method for annotating video images according to claim 2, wherein the step of converting the canvas into the second annotation image in the second format comprises the following step:
capturing a rectangular image on the canvas, the rectangular image being the smallest rectangle covering the annotation content, and converting the rectangular image into the second annotation image.

5. The method for annotating video images according to claim 1, wherein after the step of creating the canvas the method further comprises the following step:
setting a background pixel value of the canvas, the background pixel value being different from pixel values of the annotation content.

6. The method for annotating video images according to claim 5, wherein the …
PCT/CN2017/096481 2016-12-15 2017-08-08 Method and system for annotating video image Ceased WO2018107777A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611161505.1 2016-12-15
CN201611161505.1A CN106791937B (en) 2016-12-15 2016-12-15 Video image annotation method and system

Publications (1)

Publication Number Publication Date
WO2018107777A1 true WO2018107777A1 (en) 2018-06-21

Family

ID=58891413

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/096481 Ceased WO2018107777A1 (en) 2016-12-15 2017-08-08 Method and system for annotating video image

Country Status (2)

Country Link
CN (1) CN106791937B (en)
WO (1) WO2018107777A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109360253A (en) * 2018-09-28 2019-02-19 共享智能铸造产业创新中心有限公司 Drawing method for large-pixel BMP format images
CN110851630A (en) * 2019-10-14 2020-02-28 武汉市慧润天成信息科技有限公司 Management system and method for deep learning labeled samples
CN110991296A (en) * 2019-11-26 2020-04-10 腾讯科技(深圳)有限公司 Video annotation method and device, electronic equipment and computer-readable storage medium
CN111191708A (en) * 2019-12-25 2020-05-22 浙江省北大信息技术高等研究院 Automatic sample key point labeling method, device and system
CN111489283A (en) * 2019-01-25 2020-08-04 鸿富锦精密工业(武汉)有限公司 Picture format conversion method and device and computer storage medium
CN112346807A (en) * 2020-11-06 2021-02-09 广州小鹏自动驾驶科技有限公司 Image annotation method and device

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791937B (en) * 2016-12-15 2020-08-11 广东威创视讯科技股份有限公司 Video image annotation method and system
CN107333087B (en) * 2017-06-27 2020-05-08 京东方科技集团股份有限公司 Information sharing method and device based on video session
CN107995538B (en) * 2017-12-18 2020-02-28 威创集团股份有限公司 Video annotation method and system
CN110706228B (en) * 2019-10-16 2022-08-05 京东方科技集团股份有限公司 Image marking method and system, and storage medium
CN113014960B (en) * 2019-12-19 2023-04-11 腾讯科技(深圳)有限公司 Method, device and storage medium for online video production
CN117915022A (en) * 2022-10-11 2024-04-19 中兴通讯股份有限公司 Image processing method, device and terminal

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2716434A1 (en) * 2010-03-01 2011-09-01 Dundas Data Visualization, Inc. Systems and methods for determining positioning and sizing of graphical elements
CN105872679A (en) * 2015-12-31 2016-08-17 乐视网信息技术(北京)股份有限公司 Barrage display method and device
CN106162301A (en) * 2015-04-14 2016-11-23 北京奔流网络信息技术有限公司 A kind of information-pushing method
CN106791937A (en) * 2016-12-15 2017-05-31 广东威创视讯科技股份有限公司 Method and system for annotating video images

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8564590B2 (en) * 2007-06-29 2013-10-22 Microsoft Corporation Imparting three-dimensional characteristics in a two-dimensional space
CN101499172A (en) * 2009-03-06 2009-08-05 深圳华为通信技术有限公司 ActiveX drafting method and device
CN102419743A (en) * 2011-07-06 2012-04-18 北京汇冠新技术股份有限公司 Annotating method and system
CN102968809B (en) * 2012-12-07 2015-12-09 成都理想境界科技有限公司 The method of virtual information mark and drafting marking line is realized in augmented reality field
JP2015150865A (en) * 2014-02-19 2015-08-24 セイコーエプソン株式会社 Printer and printing control method for the same
CN109388329A (en) * 2015-12-16 2019-02-26 广州视睿电子科技有限公司 Method and system for remote annotation synchronization

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2716434A1 (en) * 2010-03-01 2011-09-01 Dundas Data Visualization, Inc. Systems and methods for determining positioning and sizing of graphical elements
CN106162301A (en) * 2015-04-14 2016-11-23 北京奔流网络信息技术有限公司 A kind of information-pushing method
CN105872679A (en) * 2015-12-31 2016-08-17 乐视网信息技术(北京)股份有限公司 Barrage display method and device
CN106791937A (en) * 2016-12-15 2017-05-31 广东威创视讯科技股份有限公司 Method and system for annotating video images

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109360253A (en) * 2018-09-28 2019-02-19 共享智能铸造产业创新中心有限公司 Drawing method for large-pixel BMP format images
CN109360253B (en) * 2018-09-28 2023-08-11 共享智能装备有限公司 Drawing method of large-pixel BMP format image
CN111489283A (en) * 2019-01-25 2020-08-04 鸿富锦精密工业(武汉)有限公司 Picture format conversion method and device and computer storage medium
CN111489283B (en) * 2019-01-25 2023-08-11 鸿富锦精密工业(武汉)有限公司 Picture format conversion method and device and computer storage medium
CN110851630A (en) * 2019-10-14 2020-02-28 武汉市慧润天成信息科技有限公司 Management system and method for deep learning labeled samples
CN110991296A (en) * 2019-11-26 2020-04-10 腾讯科技(深圳)有限公司 Video annotation method and device, electronic equipment and computer-readable storage medium
CN110991296B (en) * 2019-11-26 2023-04-07 腾讯科技(深圳)有限公司 Video annotation method and device, electronic equipment and computer-readable storage medium
CN111191708A (en) * 2019-12-25 2020-05-22 浙江省北大信息技术高等研究院 Automatic sample key point labeling method, device and system
CN112346807A (en) * 2020-11-06 2021-02-09 广州小鹏自动驾驶科技有限公司 Image annotation method and device

Also Published As

Publication number Publication date
CN106791937A (en) 2017-05-31
CN106791937B (en) 2020-08-11

Similar Documents

Publication Publication Date Title
CN106791937B (en) Video image annotation method and system
CN107622504B (en) Method and device for processing pictures
CN103269416A (en) Device and method for achieving video image tiled display by adoption of parallel processing mode
CN110263301B (en) Method and apparatus for determining the color of text
CN110662080B (en) Machine-Oriented Universal Coding Methods
CN111862110A (en) A green screen keying method, system, device and readable storage medium
US20180005387A1 (en) Detection and location of active display regions in videos with static borders
CN102930537A (en) Image detection method and system
US20150030312A1 (en) Method of sparse representation of contents of high-resolution video images supporting content editing and propagation
CN110782387A (en) Image processing method and device, image processor and electronic equipment
WO2020108010A1 (en) Video processing method and apparatus, electronic device and storage medium
CN103186780A (en) Video caption identifying method and device
WO2020108060A1 (en) Video processing method and apparatus, and electronic device and storage medium
US10290110B2 (en) Video overlay modification for enhanced readability
CN110189384A (en) Image compression method, device, computer equipment and storage medium based on Unity3D
WO2020034981A1 (en) Method for generating encoded information and method for recognizing encoded information
US8798391B2 (en) Method for pre-processing an image in facial recognition system
CN107113464A (en) Content providing device, display device and control method thereof
CN105430299B (en) Splicing wall signal source mask method and system
US20200294246A1 (en) Selectively identifying data based on motion data from a digital video to provide as input to an image processing model
US20160343148A1 (en) Methods and systems for identifying background in video data using geometric primitives
CN106611406A (en) Image correction method and image correction device
CN105451008A (en) Image processing system and color saturation compensation method
CN110570441B (en) Ultra-high definition low-delay video control method and system
CN103730097B (en) The display packing of ultrahigh resolution image and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17882163

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28/10/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17882163

Country of ref document: EP

Kind code of ref document: A1