WO2013113191A1 - Method and system for rapid video image transmission - Google Patents
Method and system for rapid video image transmission
- Publication number
- WO2013113191A1 PCT/CN2012/073736
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- video
- screen
- module
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
- H04N21/43615—Interfacing a Home Network, e.g. for connecting the client to a plurality of peripherals
Definitions
- the present invention relates to the field of video image processing, and in particular, to a method and system for fast transmission of video images.
Background technique
- server video-stream screen sharing solves the wiring-length problem, but one server device serves only one client device, so the playback content of one server device cannot be displayed on multiple client devices simultaneously;
- the object of the present invention is to provide a method and system for fast transmission of video images, which saves bandwidth and realizes fast image transmission during multi-screen sharing among terminals in a smart home.
- the method for fast image transmission comprises the steps of:
- the video sending end performs an image screen capture on its screen and caches it;
- the video receiving end receives the area image and combines it with the buffered image as the received image, and buffers the combined image.
- the determining step of the pixel in step B includes:
- the judgment of the pixel difference is realized by judging the pixel attribute deviation.
- the determining step of the area in step B includes:
- Rows and columns different from the image pixels of the previous frame are determined in units of rows and columns, respectively, and the regions formed by the intersection of the rows and columns are the different regions.
- the step of determining a row and a column different from the pixel of the previous frame image includes:
- rows and columns whose sums of gray-level differences exceed a threshold are determined as the differing rows and columns, respectively.
- determining that the sum of the gray level differences of each column and each row is greater than a threshold value includes:
- the positions of the ending row and ending column at which the image pixels differ are determined, thereby determining the regions in which the two adjacent frame images differ.
- it also includes:
- the screen capture in step A is: a screen capture of the statistical area.
- the screen cut rate is reduced, thereby further reducing the bandwidth occupation.
- the method further includes:
- the full screen image is taken as the screen shot of the step A.
- the application being activated includes: determining that a mouse, keyboard, or other application window is activated.
- the attribute value includes a color attribute and a color depth attribute of the pixel.
- the invention also provides a system for fast transmission of video images, comprising a video transmitting end and a video receiving end,
- the video sending end includes:
- a screen capture module configured to perform an image screen capture on a screen of the video sending end
- a video sending end storage module configured to cache the image of each frame cut by the screen capture module
- a comparison module configured to compare the current screen capture image with a previous frame image buffered by the storage module to determine a different area of the image pixel
- the sending module is configured to output different areas of the image pixels in a wireless form.
- the video receiver includes: a receiving module, configured to receive a different area of an image pixel sent by a sending module of the video sending end;
- An integration module configured to combine different regions of the received image pixels with the cached image as the received image
- the video receiving end storage module is configured to cache the integrated image of the integrated module.
- FIG. 1 is a flowchart of a method for quickly transmitting video images according to the present invention
- FIG. 2 is a flowchart of a method for establishing a wireless shared connection between a video sending end and a video receiving end in step S10 of the present invention
- FIG. 3 is a flowchart of a method for the video sending end to capture the shared video in step S20 of the present invention
- FIG. 4 is a flow chart of a method for obtaining a grayscale deviation region by comparing a current frame image with a previous frame image in step S40 of the present invention
- FIG. 5 is a schematic structural diagram of a system for fast transmission of video images according to the present invention.
Detailed description
- the main principle of the present invention is as follows: a wireless sharing connection is established between the video sending end and the video receiving end. When the video sending end captures its screen to send video images to the video receiving end, it simultaneously caches each captured image and applies filtering to the cached image. The current frame image is compared with the cached previous frame image to obtain the grayscale deviation region (i.e., the region where the image pixels differ) between the two frames, and only the image portion corresponding to the grayscale deviation region is encoded and sent. The video receiving end integrates the grayscale deviation region into the previous frame image and caches the result. Bandwidth consumption is thereby reduced, encoding is accelerated, and fast image transmission is realized for multi-screen sharing of terminals in a smart home.
- the method for fast image transmission includes the following steps:
- Step S10 Establish a wireless shared connection between the video sending end and the video receiving end. As shown in FIG. 2, the step S10 includes:
- Step S101 The video sending end sends the video sharing request information to the video receiving end through a wireless communication network, such as a WIFI network.
- Step S102 The video receiving end receives the video sharing request information, determines whether the video sharing is acceptable according to the value of the shared identification bit stored by the video receiving end, and feeds back the corresponding information according to the information.
- When the shared identifier bit is detected to be set to 1, the video receiving end is already sharing video with another video sending end and can accept no further sharing, and the process proceeds to step S103; when the shared identifier bit is set to 0, the video receiving end is not sharing video with any video sending end and is idle, so video sharing with it may be performed, and the process proceeds to step S104.
- Step S103 The video receiving end feeds back a message rejecting the sharing request to the video sending end, and the process ends.
- Step S104 The video receiving end feeds back the information that allows the sharing request, and establishes a video sharing connection.
- After the wireless sharing connection is established between the video sending end and the video receiving end, the video receiving end stores the address information of the video sending end and sets the shared identifier bit to 1.
- the video transmitting end and the video receiving end complete the establishment of the video sharing connection.
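The handshake of steps S101 to S104 can be sketched as follows. This is a minimal, hypothetical illustration of the shared-identifier-bit logic; the class and method names are not from the patent.

```python
# Hypothetical sketch of the sharing handshake (steps S101-S104): the receiver
# accepts or rejects a request based on its shared-identifier bit
# (1 = already sharing with another sender, 0 = idle).

class VideoReceiver:
    def __init__(self):
        self.shared_flag = 0          # 0 = idle, 1 = already sharing
        self.sender_address = None

    def handle_share_request(self, sender_address):
        """Return feedback for an incoming sharing request (steps S102-S104)."""
        if self.shared_flag == 1:
            return "reject"           # step S103: busy with another sender
        # step S104: accept, store the sender's address, mark ourselves busy
        self.sender_address = sender_address
        self.shared_flag = 1
        return "accept"

receiver = VideoReceiver()
assert receiver.handle_share_request("sender-A") == "accept"
assert receiver.handle_share_request("sender-B") == "reject"
```

Once the accept message is fed back, both ends regard the video sharing connection as established.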
- Step S20 The video sending end performs a frame-by-frame screen capture on its entire screen, that is, the desktop.
- Common screen-capture approaches include GDI functions, DirectX functions, and Windows Media API functions.
- the GDI function is used to perform screen capture on the desktop.
- GDI screen capture is based on the fact that the desktop is itself a window and therefore has a window handle (HWND); the time for one screen capture is only 4 sec.
- GDI is an executable program that accepts access requests from Windows applications; since an application cannot access the output device (the screen) directly, operations on the output device are performed through a device context (DC, Device Context).
- A DC is a Windows data structure that holds the information needed by the GDI functions.
- the image captured by the GDI function is not directly output by the output device, but the image is copied to the DC.
- Each window on the screen corresponds to a DC, and the operation of the DC is reflected on its corresponding screen window.
- the screen capture step in step S20 includes:
- Step S201 Acquire a window of the current screen desktop.
- the window handle of the desktop is obtained by calling the GetDesktopWindow function.
- Step S202 Obtain a DC of a current screen desktop window.
- This step calls the GetDC function to get the DC of the desktop window, which is used to get the contents of the desktop window.
- Step S203 Create a DC and a bitmap compatible with the window DC, and select the bitmap into a compatible DC.
- The compatible DC is first created by calling CreateCompatibleDC, and the compatible bitmap by calling CreateCompatibleBitmap.
- the DC is used to get the pixel value of the entire desktop window.
- the bitmap is created based on the size of the current screen, the same size as the entire desktop window.
- The bitmap is a readable and writable block of memory that stores the full-screen pixels. Then the SelectObject function is called to select the created bitmap into the compatible DC.
- Step S204 Copy the contents of the desktop window DC to the compatible DC.
- This step calls the BitBlt function to copy the contents of the desktop window DC obtained in step S202 into the compatible DC created in step S203.
- the bitmap on the compatible DC is the image at the time of the screen capture, thereby completing the screenshot of the current screen desktop.
- Step S205 Release the created DC.
- This step calls the Release function to release the created DC, which frees the memory to ensure that other programs run smoothly.
- Step S30 Filtering and buffering the screen image.
- Because the brightness of the screen image does not change uniformly, the captured image contains a lot of noise.
- Filtering eliminates this image noise and thereby overcomes image interference.
- Step S40 Comparing the current frame image with the pixel attribute of the previous frame image to obtain a grayscale deviation region.
- Step S40 includes:
- Step S401 Convert the acquired image from its RGB color matrix into a grayscale image, perform a difference operation between the current frame image and the previous frame image to obtain the gray-level difference values, and store the results in a single array I(xi, yj).
- the color attribute of the pixel is first changed, and the screen image is converted from the color image to the gray image.
- In the converted grayscale image, each pixel's gray value is represented by one byte, with values between 0 and 255. The larger the value, the whiter (brighter) the pixel; the smaller the value, the darker it is. In this embodiment, the comparison is performed on the pixel color-depth attribute.
- The gray value of each pixel of the current frame image is compared with that of the previous frame image, and the operation results are stored in an array I(xi, yj).
- In the array, x represents the abscissa and y the ordinate; i and j represent the ordinal numbers of the pixel along the abscissa and ordinate, respectively.
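Step S401 can be sketched as follows, assuming frames are given as nested lists of (R, G, B) tuples. The standard luminance weights (0.299, 0.587, 0.114) are an assumption for illustration; the patent only states that the color image is converted to grayscale.

```python
# Minimal sketch of step S401: RGB-to-gray conversion followed by a per-pixel
# absolute gray-level difference, stored in a single array I(xi, yj).

def to_gray(frame):
    """Convert an RGB frame to one-byte-per-pixel grayscale (0-255)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in frame]

def gray_diff(cur, prev):
    """Per-pixel absolute gray-level difference I(xi, yj) of two gray frames."""
    return [[abs(c - p) for c, p in zip(crow, prow)]
            for crow, prow in zip(cur, prev)]

prev = [[(0, 0, 0), (255, 255, 255)]]
cur  = [[(0, 0, 0), (0, 0, 0)]]
I = gray_diff(to_gray(cur), to_gray(prev))
assert I == [[0, 255]]
```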
- Step S402 The array obtained in step S401 is projected onto the X-axis and the Y-axis, respectively, and integrated.
- The image captured from the screen is an M*N pixel matrix, and the above array is projected onto the axes.
- I(xi, yj) represents the above-mentioned gray-level difference value of the pixel at (xi, yj).
- F_yj = Σi I(xi, yj) represents the sum of the gray-level difference values of all pixels in row j (i.e., the integral of the row); analogously, F_xi = Σj I(xi, yj) is the integral of column i.
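The row and column integration of step S402 can be sketched as follows, assuming the difference array is a nested list (rows correspond to screen lines); the function names are illustrative.

```python
# Sketch of step S402: project the gray-difference array I(xi, yj) onto the
# axes by summing each row (F_yj) and each column (F_xi).

def row_sums(I):
    """F_yj: gray-difference integral of each row j."""
    return [sum(row) for row in I]

def col_sums(I):
    """F_xi: gray-difference integral of each column i."""
    return [sum(col) for col in zip(*I)]

I = [[0, 5, 0],
     [0, 9, 1]]
assert row_sums(I) == [5, 10]
assert col_sums(I) == [0, 14, 1]
```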
- Steps S403 to S414 form the judgment process based on these integrals. F_yj and F_xi are scanned from bottom up and from left to right, respectively, and compared with a preset integral threshold F0. When F_yj (or F_xi) of the current row (or column) is greater than F0, that row (or column) shows a grayscale deviation, and the next row (or column) is checked against F0 in the same way; once F_yj (or F_xi) of a certain number of consecutive rows (or columns) is greater than F0, a grayscale deviation region has appeared.
- Continuing the same judgment, when F_yj (or F_xi) of a certain number of consecutive rows (or columns) is smaller than F0, the grayscale deviation region ends.
- In this way, the rectangular grayscale deviation region between the current frame image and the previous frame image, that is, the image region to be transmitted, is determined. The specific steps are as follows:
- Step S403 Determine whether the integration result of the current line, that is, F_yj, is greater than F0. If yes, go to step S405; otherwise go to step S404.
- Step S404 Add 1 to the ordinate j of the current line (i.e., move to the next line), and clear K1 to 0, and return to step S403 until the integral operation of all the lines included in the image is completed.
- K1 represents the number of rows in which the grayscale deviation occurs continuously.
- If the integration result of the current line is less than F0, the gray-level deviation between the current frame image and the previous frame image at the pixels of that line is too small to be recognized by the naked eye, so the two frames are regarded as having the same gray levels in that row; the ordinate j of the current line is incremented by 1, the integration result of the next line is compared with F0, and K1 is cleared.
- Step S405 When the integration result is greater than the preset F0, K1 is incremented by 1. In this step, if the integration result of the current line is greater than F0, it means that the grayscale deviation of each pixel point of the current frame image and the previous frame image can be recognized by the naked eye, and K1 is incremented by one.
- Step S406 It is judged whether K1 is greater than a preset number threshold K. If yes, go to step S408; otherwise go to step S407.
- This improves the accuracy of grayscale deviation region recognition and avoids declaring a deviation region whenever there is only a slight difference.
- Step S407 Add 1 to the ordinate j of the current line and return to step S403, until the integration results of all lines in the image have been compared with F0.
- If K1 is less than K, the grayscale deviations have not yet formed a deviation region, so the ordinate j of the current row is incremented by 1, and the integration of the next row is computed and compared with F0.
- Step S408 Record the ordinate yj1 (i.e., the row number) of the current line, and clear the K1 value.
- When K1 is greater than K, a grayscale deviation region exists between the current frame image and the previous frame image, so the ordinate yj1 of the starting line of the grayscale deviation is recorded, and the K1 value is cleared.
- Step S409 It is judged whether the integration result is smaller than a preset F0. If yes, go to step S411, otherwise go to step S410.
- Step S410 Add 1 to the ordinate j of the current line and clear K2 (K2 is the count of consecutive lines in which no grayscale deviation occurs), then return to step S409 until the integration of all lines in the image is completed.
- Step S411 If the integration result is less than the preset integration threshold, K2 is incremented by 1.
- Step S412 It is judged whether K2 is greater than a preset number threshold K'. If yes, the current grayscale deviation region ends and the process proceeds to step S414; otherwise, it proceeds to step S413.
- K' may take the same value as K.
- Step S413 Increment the ordinate j of the current line by 1 and return to step S409, until all lines in the image have been compared with F0.
- Step S414 Record the ordinate yj2 of the current line, and clear the K2 value.
- When K2 is greater than K', the grayscale deviation region between the current frame image and the previous frame image has ended over all lines, so the ordinate yj2 of the ending line of the region is recorded, and the K2 value is cleared.
- The area between yj1 and yj2 is the interval of the grayscale deviation on the ordinate.
- Step S415 (not shown): Return to step S403 and repeat the judgment on the X-axis projections to obtain xi1 and xi2.
- the calculation method is the same as the steps S403 to S414, and will not be described again.
- The rectangular area delimited by xi1, xi2, yj1, and yj2 is the grayscale deviation region.
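The run-based scan of steps S403 to S414 can be sketched as follows, applied to a list of per-row integrals F_yj. This is a simplified illustration: a deviation region starts after K consecutive entries exceed F0 and ends after K consecutive entries fall below it, and the start index is reported at the first row of the qualifying run (the patent records the ordinate of the current row). The same routine applied to the per-column integrals F_xi yields xi1 and xi2.

```python
# Simplified sketch of steps S403-S414: find the first deviation run in a
# list of row integrals, using threshold F0 and consecutive-count K.

def find_region(F, F0, K):
    """Return (start, end) indices of the first deviation run, or None."""
    start = end = None
    run = 0
    for j, f in enumerate(F):
        if start is None:
            run = run + 1 if f > F0 else 0
            if run >= K:
                start = j - K + 1          # yj1: first row of the run
                run = 0
        else:
            run = run + 1 if f < F0 else 0
            if run >= K:
                end = j                    # yj2: row where the region ends
                break
    if start is not None and end is None:
        end = len(F) - 1                   # region extends to the last row
    return None if start is None else (start, end)

F = [0, 0, 50, 60, 70, 0, 0, 0]
assert find_region(F, F0=10, K=2) == (2, 6)
```

Running the same function over the column integrals gives the abscissa interval, and the two intervals together delimit the rectangular deviation region.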
- The present invention determines the grayscale deviation region in the above manner; compared with judging pixel points one by one, the processing speed is faster.
- Step S50 Encode the area image corresponding to the grayscale deviation region of the current frame image and the previous frame image calculated in step S40, that is, the rectangular region enclosed by xi1, xi2, yj1, and yj2 (the grayscale deviation region on the screen).
- Step S60 The area image encoded in step S50 is sent to the video receiving end wirelessly; the transmitted information includes the location information of the area image, such as the above xi1, xi2, yj1, and yj2.
- Step S70 The video receiving end receives the encoded area image, decodes it, and displays it. After decoding the area image data, the image is integrated into the cached previous frame image according to its location information, and the integrated image is cached for integration with the next frame image.
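The integration of step S70 can be sketched as follows, assuming the receiver keeps the previous frame as a nested list of gray values and the transmitted data carries the region's position (xi1, yj1) along with the decoded region pixels; the function name is illustrative.

```python
# Sketch of step S70: paste the received region into the cached previous
# frame at the position given by the transmitted location information.

def integrate(cached, region, xi1, yj1):
    """Write the received region into the cached frame in place."""
    for dy, row in enumerate(region):
        for dx, pixel in enumerate(row):
            cached[yj1 + dy][xi1 + dx] = pixel
    return cached

cached = [[0, 0, 0],
          [0, 0, 0]]
region = [[7, 8]]
assert integrate(cached, region, xi1=1, yj1=1) == [[0, 0, 0], [0, 7, 8]]
```

The integrated frame is then cached so the next received region can be pasted onto it in turn.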
- When a video file is played in a non-full-screen window at the video sending end, a full-screen window capture is needed only when the image outside the playing video window changes (for example, when the user moves or resizes the video playing window with the mouse or a keyboard command, or activates a new window of another application or file).
- If the user inputs no command within a certain period of time, it is sufficient to capture only the window in which the video is playing. The current frame of the video window image is then compared with the previous frame to calculate the grayscale deviation region.
- At most, the grayscale deviation region calculated by comparing two adjacent frames is the video playback window itself (that is, the video picture has changed completely). If the playback content is a lecture or the like (usually the background is unchanged and only the presenter's movements change), the calculated grayscale deviation region is reduced further, saving bandwidth and speeding up encoding.
- In step S20, a further step (not shown) is included: the video sending end monitors the interval at which the user inputs commands, and controls whether the desktop window or the video playback window is captured according to this time interval.
- The position of the video playing window may be confirmed from the difference results of several consecutive desktop window captures (more than 2 frames, preferably 4). After multiple judgments, the part formed by the changing areas is relatively fixed; this is the position of the video playing window. Once the position and size of the video playing window are confirmed in this way, subsequent screen captures and the judgment of step S40 can be limited to the identified video playing window.
- A desktop window screen capture is still performed at intervals (for example, every 5 seconds) to increase the accuracy of image transmission (for example, the playback progress bar of the video playback software lies outside the video picture, so the background image containing the progress bar is updated in time).
- If no instruction is input within the expected time interval, the video sending end considers the user idle and captures only the video playing window; otherwise, if there is an instruction input within that interval, a full desktop window capture is taken.
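The monitoring decision around step S20 can be sketched as follows. The 5-second idle threshold is an assumption for illustration (the patent leaves the interval unspecified), and the mode names are not from the patent.

```python
# Hypothetical sketch of the user-command monitoring logic: if no mouse or
# keyboard input arrived within a preset interval, only the video playing
# window is captured; otherwise the full desktop window is captured.

def capture_mode(now, last_input_time, idle_threshold=5.0):
    """Return which area to capture on the next screenshot."""
    if now - last_input_time >= idle_threshold:
        return "video-window"    # user idle: capture only the play window
    return "full-desktop"        # recent input: capture the whole desktop

assert capture_mode(now=20.0, last_input_time=10.0) == "video-window"
assert capture_mode(now=12.0, last_input_time=10.0) == "full-desktop"
```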
- The screen capture method for the video playing window is the same as that of step S20 and is not described again.
- the system for quickly acquiring and transmitting images is described below. As shown in FIG. 5, the video transmitting end 51 and the video receiving end 52 are included.
- the video sending end 51 is configured to send a video sharing request to establish a wireless shared connection with the video receiving end 52, and to capture and cache the played video.
- the current frame image is compared with the previous frame image to calculate each gray scale deviation region, and each gray scale deviation region of the two frame images is encoded and output in a wireless form.
- the video transmitting end 51 includes a first sharing module 511, a screen capture module 512, a video transmitting end storage module 513, a filtering module 514, a comparison module 515, an encoding module 516, and a transmitting module 517.
- the first sharing module 511 is configured to send a video sharing request, and determine whether to perform a shared connection according to the feedback information.
- the screen capture module 512 is connected to the first sharing module 511 for performing frame-by-frame screen capture according to the shared trigger information.
- the video sending end storage module 513 is connected to the screen capture module 512 and is configured to cache the received window screenshot images.
- the video sending end storage module 513 caches at least two screenshot images of the current frame and the previous frame.
- when the video sending end storage module 513 receives the next frame window image, the earliest cached previous window screenshot image is deleted according to the caching order.
- the filtering module 514 is connected to the video sending end storage module 513 for retrieving the buffered window screen image and performing filtering processing to eliminate image noise.
- the comparison module 515 is connected to the filtering module 514, and is used for comparing the current frame image with the previous frame screenshot image to calculate each grayscale deviation region.
- the comparison module 515 includes:
- a pixel gradation conversion unit for converting a color map of an image RGB numerical arrangement matrix into a grayscale image
- a differential operation unit connected to the pixel gradation conversion unit, for performing a difference operation between the current frame image and the previous frame image, and combining the difference operation results into the same array
- the integral operation unit is connected to the difference operation unit, and is configured to project the array onto the X-axis and Y-axis respectively and obtain the corresponding integrals;
- the integral result comparison unit is connected to the integral operation unit for determining the magnitude of the integration result and the preset integration threshold. If the integration result is greater than the integration threshold, the grayscale deviation is indicated, and vice versa, the grayscale deviation is not present;
- the grayscale deviation region starting-point determining unit is connected to the integral result comparison unit, and is used to compare the number of consecutive lines (or columns) showing grayscale deviation with a preset number threshold; if that number is greater than the threshold, a grayscale deviation region has appeared, and otherwise it has not appeared;
- the grayscale deviation region end-point determining unit is connected to the starting-point determining unit, and is used to compare the number of consecutive lines (or columns) showing no grayscale deviation with a preset number threshold; if that number is greater than the threshold, the grayscale deviation region has ended, and otherwise it has not ended;
- the row and column coordinate accumulating unit is connected to the integral result comparison unit for adding 1 to the coordinates of the image row (or column).
- the encoding module 516 is connected to the comparison module 515 for encoding the image corresponding to each grayscale deviation region calculated by the comparison module 515 to form pixel data.
- the transmitting module 517 is coupled to the encoding module 516 for outputting the encoded pixel data in a wireless form.
- a user command monitoring module (not shown) is used to monitor the frequency with which the user inputs a command signal via a mouse or keyboard. The monitoring result is sent to the screen capture module 512. If the user does not input a command signal within a predetermined time, the screen capture module 512 performs image capture only on the video play window.
- the video receiving end 52 is configured to respond to the video sharing request of the video transmitting end 51 according to the shared identifier bit and, after the wireless shared connection is established, to receive the encoded pixel data, decode and integrate it, and play it.
- the video receiving end 52 includes: a second sharing module 521, a sharing detecting module 522, a receiving module 523, a decoding module 524, an integrating module 525, and a video receiving end storage module 526.
- the second sharing module 521 is coupled to the first sharing module 511 for receiving a video sharing request and outputting detection information of the identification bit.
- the sharing detection module 522 is connected to the second sharing module 521, and is configured to detect the sharing status of the video receiving end 52 according to the detection information of the identification bit (ie, whether video sharing is being performed with other video transmitting ends) and generate feedback information.
- the second sharing module 521 is further configured to receive feedback information and transmit the feedback information to the first sharing module 511.
- the receiving module 523 is coupled to the transmitting module 517 for receiving encoded pixel data.
- the decoding module 524 is coupled to the receiving module 523 for decoding the encoded pixel data.
- the integration module 525 is coupled to the decoding module 524 and the video receiving end storage module 526, and is configured to integrate the image corresponding to the decoded grayscale deviation area into the cached image of the previous frame according to the row and column pixel coordinates. in.
- the video receiving end storage module 526 is connected to the integration module 525 to buffer the image integrated by the integration module 525.
- a display module (not shown) is coupled to the integrated module 525 for displaying the integrated video image.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Description
Method and system for fast transmission of video images
Technical field
The present invention relates to the field of video image processing, and in particular, to a method and system for fast transmission of video images.
Background technique
At present, with the continuous development of network technology, wireless networks have become widespread in smart home appliances. The various terminal devices in smart home appliances can generally access the Internet and browse information through wireless technology, but as smart home appliances keep improving, users have put forward a higher requirement: timely screen sharing. An information page that a user is browsing on one terminal device should be transmitted promptly to the terminal device nearest the user, which greatly satisfies users' needs and is convenient to use. At present, the main sharing methods and technologies for multi-screen sharing are the following: split-screen-device screen sharing, server video-stream screen sharing, and screen fast-capture screen sharing.
However, the above sharing methods respectively have the following deficiencies:
Split-screen-device screen sharing requires a VGA cable for a wired connection; because the cable length is limited, the distance between the two connected devices is restricted;
Although server video-stream screen sharing solves the wiring-length problem, one server device serves only one client device, so the playback content of one server device cannot be displayed on multiple client devices simultaneously;
Screen fast-capture screen sharing processes full-screen data every time; the data volume is huge, and a high machine configuration is required to quickly compress and transmit the data, which takes a long time.
Summary of the invention
In view of this, the object of the present invention is to provide a method and system for rapid video image transmission that saves bandwidth and achieves rapid image transmission during multi-screen sharing among terminals in a smart home.

The method for rapid image transmission provided by the invention comprises the steps of:

A. The video sending end captures a screenshot of its screen and caches it;

B. Determining the region in which the captured screenshot differs, pixel by pixel, from the cached previous-frame screenshot, and sending the image of that region;

C. The video receiving end receives the region image, combines it with its cached image to form the received image, and caches the combined image.

Because only the regions whose pixels differ are transmitted, bandwidth is saved and rapid image transmission during multi-screen sharing among smart-home terminals is achieved.
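Steps A-C can be sketched end to end as follows. This is a minimal illustration in Python using nested lists as grayscale frames; the helper names (diff_region, crop, merge_region) are illustrative, not from the patent, and the region is found here as a simple bounding box over differing rows and columns, a simplification of the thresholded projection method detailed later.

```python
def diff_region(prev, cur, thresh=0):
    """Return (x1, x2, y1, y2) bounding the pixels that changed, or None."""
    rows = [j for j, (pr, cr) in enumerate(zip(prev, cur))
            if any(abs(a - b) > thresh for a, b in zip(pr, cr))]
    cols = [i for i in range(len(cur[0]))
            if any(abs(prev[j][i] - cur[j][i]) > thresh for j in range(len(cur)))]
    if not rows:
        return None  # frames identical: nothing to send
    return cols[0], cols[-1], rows[0], rows[-1]

def crop(frame, region):
    """Sender side: cut out only the changed region for transmission."""
    x1, x2, y1, y2 = region
    return [row[x1:x2 + 1] for row in frame[y1:y2 + 1]]

def merge_region(cached, patch, region):
    """Receiver side: paste the received patch into the cached previous frame."""
    x1, x2, y1, y2 = region
    out = [row[:] for row in cached]
    for j, prow in enumerate(patch):
        out[y1 + j][x1:x2 + 1] = prow
    return out

prev = [[0] * 4 for _ in range(4)]
cur = [row[:] for row in prev]
cur[1][2] = 255                        # one pixel changed on the sender's screen
region = diff_region(prev, cur)        # (2, 2, 1, 1): only this area is sent
rebuilt = merge_region(prev, crop(cur, region), region)
print(rebuilt == cur)                  # True: receiver reproduces the frame
```

Only the cropped patch plus four coordinates cross the network, which is the source of the bandwidth saving claimed above.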
Optionally, the step of determining pixel differences in step B comprises:

A pixel is determined to differ when the deviation between its attribute value and that of the pixel at the same position in the previous frame exceeds a certain value.

Thus, pixel differences are determined by evaluating the deviation of pixel attributes.

Optionally, the step of determining the region in step B comprises:

Determining, row by row and column by column, the rows and columns whose pixels differ from those of the previous frame; the region formed by the intersection of these rows and columns is the differing region.

By detecting differences on a per-row and per-column basis, the region that needs to be captured is determined, avoiding full-screen capture and saving bandwidth.
Optionally, the step of determining the rows and columns that differ from the previous frame comprises:

Performing a difference operation between each pixel of the captured screenshot and the corresponding pixel of the previous frame to obtain grayscale differences, and storing the results in an array;

Projecting the array onto the X axis and the Y axis and computing the sum of the grayscale differences of each column and each row, respectively;

Identifying as the determined rows and columns those whose sums of grayscale differences exceed a threshold.

Thus, by comparing the grayscale of two adjacent frames, the starting row and starting column of the differing pixels are located.
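The projection step just described can be sketched as follows: the grayscale-difference array is summed along each axis and the sums are compared with a threshold. The function name and the threshold value F0 below are illustrative, not specified by the patent.

```python
def projection_sums(diff):
    """diff[j][i] holds the grayscale difference at column i, row j.
    Returns per-column sums (X-axis projection) and per-row sums (Y-axis)."""
    n_rows, n_cols = len(diff), len(diff[0])
    f_y = [sum(diff[j]) for j in range(n_rows)]                        # per-row sums
    f_x = [sum(diff[j][i] for j in range(n_rows)) for i in range(n_cols)]  # per-column
    return f_x, f_y

diff = [[0, 0, 0],
        [0, 9, 8],
        [0, 7, 0]]
f_x, f_y = projection_sums(diff)
F0 = 5                                                   # illustrative threshold
changed_cols = [i for i, s in enumerate(f_x) if s > F0]  # [1, 2]
changed_rows = [j for j, s in enumerate(f_y) if s > F0]  # [1, 2]
print(changed_cols, changed_rows)
```

The intersection of the changed rows and columns (here rows 1-2 with columns 1-2) is the candidate region to transmit.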
Optionally, after determining that the sums of grayscale differences of the columns and rows exceed the threshold, the method further comprises:

A step of determining that the sums of a set number of consecutive columns and rows all exceed the threshold.

This locates the ending row and ending column of the differing pixels, thereby determining the region in which the two adjacent frames differ.
Optionally, the method further comprises:

Collecting statistics, over a certain period and a certain number of captured frames, on the regions whose pixels differ;

The screenshot of step A is then: capturing only the statistically determined region.

By setting the capture interval and the number of screenshots taken within it, the capture rate is reduced, further lowering bandwidth usage.
Preferably, the method further comprises:

When it is determined at the video sending end that an application whose screen image lies outside the captured region has been activated, taking a full-screen image as the screenshot of step A.

Activation of an application includes: determining that mouse or keyboard input has been received, or that another application window has been activated.

By tracking the activation state and skipping screenshots while nothing is activated, the interval between captures is lengthened, further saving bandwidth.
Optionally, the attribute values include the color attribute and the color-depth attribute of the pixel.

The invention also provides a system for rapid video image transmission, comprising a video sending end and a video receiving end,

wherein the video sending end comprises:

a screen-capture module, configured to capture screenshots of the screen of the video sending end;

a video-sending-end storage module, configured to cache the successive frames captured by the screen-capture module;

a comparison module, configured to compare the current screenshot with the previous frame cached in the storage module and determine the region in which the image pixels differ;

a sending module, configured to output the region of differing image pixels wirelessly.
The video receiving end comprises:

a receiving module, configured to receive the region of differing image pixels sent by the sending module of the video sending end;

an integration module, configured to combine the received region of differing image pixels with the cached image to form the received image;

a video-receiving-end storage module, configured to cache the image produced by the integration module.

Because only the regions whose pixels differ are transmitted, bandwidth is saved and rapid image transmission during multi-screen sharing among smart-home terminals is achieved.

DRAWINGS
Figure 1 is a flowchart of the rapid video image transmission method provided by the present invention;

Figure 2 is a flowchart of the method of establishing a wireless sharing connection between the video sending end and the video receiving end in step S10 of the present invention;

Figure 3 is a flowchart of the method by which the video sending end captures the shared video file in step S20 of the present invention;

Figure 4 is a flowchart of the method of comparing the current frame with the previous frame to obtain the grayscale-deviation region in step S40 of the present invention;

Figure 5 is a schematic structural diagram of the system for rapid video image transmission of the present invention.

Detailed description
The main principle of the present invention is as follows: a wireless sharing connection is established between the video sending end and the video receiving end. When the video sending end captures, encodes, and sends a video image to the video receiving end, it simultaneously caches the image and applies filtering to the cached image. The current frame is compared with the cached previous frame to obtain the grayscale-deviation region (that is, the region in which the image pixels differ), and only the portion of the image corresponding to that region is encoded and sent. The video receiving end integrates the grayscale-deviation region into the previous frame and caches the result. This saves bandwidth and encoding time, achieving rapid image transmission during multi-screen sharing among smart-home terminals.

Specific embodiments of the rapid image transmission method and system of the present invention are described in detail below with reference to Figures 1-5.
As shown in Figure 1, the method for rapid image transmission comprises the following steps:

Step S10: A wireless sharing connection is established between the video sending end and the video receiving end. As shown in Figure 2, step S10 comprises:
Step S101: The video sending end sends video-sharing request information to the video receiving end over a wireless communication network, such as a WiFi network.

Step S102: The video receiving end receives the video-sharing request, determines from the value of its stored sharing flag whether sharing can be accepted, and sends back the corresponding response.

When the sharing flag is found to be set to 1, the video receiving end is already sharing video with another video sending end and cannot accept a further share; the flow proceeds to step S103. When the sharing flag is set to 0, the video receiving end is not sharing video with any sending end and is idle; video sharing with the requesting sending end is possible and the flow proceeds to step S104.

Step S103: The video receiving end sends the video sending end a response rejecting the sharing request, and the flow ends.

Step S104: The video receiving end sends back a response accepting the sharing request, and the video-sharing connection is established.

Once the wireless sharing connection between the video sending end and the video receiving end is established, the video receiving end stores the address information of the video sending end and sets its sharing flag to 1.

The video sending end and the video receiving end have thus completed establishing the video-sharing connection.
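The receiver's decision in steps S102-S104 amounts to a check of the sharing flag. A minimal sketch might look like the following; the class, field, and return values are illustrative only, and the addresses are placeholders.

```python
class Receiver:
    """Toy model of the video receiving end's sharing handshake."""

    def __init__(self):
        self.share_flag = 0        # 0: idle, 1: already sharing (per the patent)
        self.sender_addr = None

    def handle_request(self, sender_addr):
        if self.share_flag == 1:           # busy: reject the request (step S103)
            return "reject"
        self.sender_addr = sender_addr     # accept: store sender address (step S104)
        self.share_flag = 1                # and set the sharing flag to 1
        return "accept"

r = Receiver()
print(r.handle_request("192.168.1.10"))    # accept
print(r.handle_request("192.168.1.11"))    # reject: already sharing
```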
Step S20: The video sending end captures its entire screen, i.e., the desktop, frame by frame. Common screen-capture approaches include the GDI functions, DirectX, and the Windows Media API. This embodiment uses the GDI functions to capture the desktop. GDI treats the desktop as just another window with its own window handle (HWND), and a single capture takes only about 4 milliseconds. GDI is an executable component that accepts access requests from Windows applications; because an application cannot access the output device (the screen) directly, operations on the output device go through a device context (DC). A DC is a Windows data structure containing all the fields the GDI functions need to describe the type of output device and the state of the display. An image captured through GDI is not output directly by the output device; instead, it is copied into a DC. Every window on the screen corresponds to a DC, and operations on a DC are reflected in the corresponding screen window.
As shown in Figure 3, the capture procedure of step S20 comprises:

Step S201: Obtain the window of the current screen desktop.

To take a desktop screenshot, the pointer must first point at the current desktop window, so a function is called to obtain the handle of the desktop window. In this embodiment, the window handle of the desktop is obtained by calling the GetDesktopWindow function.

Step S202: Obtain the DC of the current desktop window.

This step calls the GetDC function to obtain the DC of the desktop window, which gives access to the window's contents, including the bitmap image corresponding to the entire desktop and the current width and height of the desktop window.

Step S203: Create a DC and a bitmap compatible with the window DC, and select the bitmap into the compatible DC.
In this step, a compatible DC and a compatible bitmap are first created by calling CreateCompatibleDC and CreateCompatibleBitmap. The compatible DC is used to obtain the pixel values of the entire desktop window. The bitmap is created at the current screen size, matching the entire desktop window; it supports read and write operations and serves as the memory holding the full-screen pixels. The SelectObject function is then called to select the created bitmap into the compatible DC.

Step S204: Copy the contents of the desktop window DC into the compatible DC.

This step calls the BitBlt function to copy the desktop window DC of step S202 onto the bitmap created in step S203; the bitmap in the compatible DC is then the captured image, which completes the screenshot of the current desktop.

Step S205: Release the created DC.

This step calls the Release function to release the created DC, freeing the memory so that other programs continue to run smoothly.
Step S30: Filter the captured image and cache it. During capture, uneven illumination or environmental changes cause non-uniform brightness variations in the captured image, so the image contains considerable noise. Filtering the captured image removes this noise and suppresses image interference.
Step S40: Compare the pixel attributes of the current frame with those of the previous frame to obtain the grayscale-deviation region.

As shown in Figure 4, this embodiment is described taking the case of a single grayscale-deviation region between two adjacent frames. Step S40 comprises:
Step S401: Convert the acquired color image, whose RGB values form a matrix, into a grayscale image; perform a difference operation between the current frame and the previous frame to obtain grayscale differences; and store the results in a single array I(xi, yj).

In this embodiment, to speed up the image comparison, the color attribute of the pixels is changed first: the captured image is converted from a color image to a grayscale image. In the converted grayscale image, each pixel's grayscale value is represented by one byte. The grayscale value ranges from 0 to 255; the larger the value, the whiter (brighter) the pixel, and the smaller the value, the darker. The comparison in this embodiment is made on the pixels' color-depth attribute.

A difference operation is performed between the grayscale values of corresponding pixels of the current frame and the previous frame, and the results are stored in an array I(xi, yj), where x denotes the horizontal coordinate, y the vertical coordinate, and i and j the indices of the pixel along the horizontal and vertical axes, respectively.
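The conversion and per-pixel differencing described here can be sketched as follows. The luminance weights used are the common ITU-R BT.601 coefficients, and the difference is taken as an absolute value; the patent specifies neither, so both are assumptions of this sketch.

```python
def to_gray(rgb_frame):
    """Convert [[(r, g, b), ...], ...] to one-byte gray values.
    BT.601 luma weights are assumed; the patent does not name a formula."""
    return [[int(round(0.299 * r + 0.587 * g + 0.114 * b)) for (r, g, b) in row]
            for row in rgb_frame]

def diff_array(prev_gray, cur_gray):
    """I(xi, yj): absolute grayscale difference between adjacent frames."""
    return [[abs(a - b) for a, b in zip(prow, crow)]
            for prow, crow in zip(prev_gray, cur_gray)]

prev = to_gray([[(0, 0, 0), (255, 255, 255)]])   # [[0, 255]]
cur = to_gray([[(0, 0, 0), (0, 0, 0)]])          # [[0, 0]]
print(diff_array(prev, cur))                      # [[0, 255]]
```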
Step S402: Project the array obtained in step S401 onto the X axis and the Y axis and compute the corresponding sums.
Assuming the captured image is an M×N matrix, the projection of the array onto the X axis gives the sums:

F_xi = Σ_{j=1}^{N} I(xi, yj)

and the projection onto the Y axis gives:

F_yj = Σ_{i=1}^{M} I(xi, yj)

where I(xi, yj) is the grayscale difference of the pixel at (xi, yj), F_xi is the sum of the grayscale differences of all pixels in column i (the projection sum), and F_yj is the sum of the grayscale differences of all pixels in row j.
Steps S403 to S414 constitute the judgment process based on these projection sums. F_yj and F_xi are compared with a preset sum threshold F0, working from bottom to top and from left to right. When the F_yj (or F_xi) of the current row (column) exceeds F0, that row (column) exhibits grayscale deviation; the next row (column) is then checked against F0, and so on, until the F_yj (or F_xi) of a set number of consecutive rows (columns) all exceed F0, indicating that a grayscale-deviation region has begun. The same procedure then continues until the F_yj (or F_xi) of consecutive rows (columns) fall below F0, indicating that the grayscale-deviation region has ended. After the sums of all rows and columns have been checked, the rectangular grayscale-deviation region between the current frame and the previous frame, that is, the region of the image that needs to be transmitted, is determined. The specific steps are as follows:
Step S403: Check whether the projection sum of the current row, i.e., F_yj, exceeds F0. If so, go to step S405; otherwise go to step S404.

Step S404: Increment the row ordinate j by 1 (i.e., move to the next row) and clear K1 to zero; return to step S403 until the sums of all rows of the image have been processed. K1 denotes the number of consecutive rows exhibiting grayscale deviation.

In this step, a row sum below F0 means that the grayscale deviation between the current frame and the previous frame at the pixels of that row is too small to be perceived by the naked eye, so the two frames are treated as identical in that row: the row ordinate j is incremented by 1, the next row's sum is compared with F0, and K1 is cleared.

Step S405: When the sum exceeds the preset F0, increment K1 by 1. A row sum above F0 means that the per-pixel grayscale deviation between the current frame and the previous frame in the current row is perceptible to the naked eye, so K1 is incremented by 1.
Step S406: Check whether K1 exceeds a preset count threshold K. If so, go to step S408; otherwise go to step S407.

For example, if K = 3 is set, then K1 exceeding K means that grayscale deviation between the current frame and the previous frame has appeared over three consecutive rows, which constitutes a grayscale-deviation region.

Setting the threshold K improves the accuracy with which the grayscale-deviation region is identified, preventing any isolated small difference from being treated as a deviation region.

Step S407: Increment the row ordinate j by 1 and return to step S403 until the sums of all rows of the image have been compared with F0.

Here K1 does not exceed K, meaning the grayscale deviation does not yet constitute a deviation region, so the row ordinate j is incremented by 1 and the next row's sum is computed and compared with F0.
Step S408: Record the ordinate yj1 of the current row (i.e., the row number) and clear K1. Here K1 exceeds K, indicating that a grayscale-deviation region has appeared between the current frame and the previous frame, so the ordinate yj1 of the row at which the deviation begins is recorded and K1 is cleared.

Step S409: Check whether the sum is below the preset F0. If so, go to step S411; otherwise go to step S410.

Starting from yj1, the row at which the grayscale deviation begins, the procedure now checks whether the sums of consecutive rows all fall below F0, i.e., whether the grayscale-deviation region has ended.

Step S410: Increment the row ordinate j by 1 and clear K2, where K2 denotes the number of consecutive rows without grayscale deviation; return to step S409 until the sums of all rows of the image have been processed.
Step S411: When the sum is below the preset threshold, increment K2 by 1.

Step S412: Check whether K2 exceeds a preset count threshold K'. If so, the current grayscale-deviation region has ended; go to step S414. Otherwise go to step S413. K' may take the same value as K.

Step S413: Increment the row ordinate j by 1 and return to step S409 until the sums of all rows of the image have been compared with F0.

Step S414: Record the ordinate yj2 of the current row and clear K2.

Here K2 exceeds K', indicating that the grayscale-deviation region between the current frame and the previous frame has ended over the rows; the ordinate yj2 of the row at which the deviation region ends is recorded and K2 is cleared.

The interval between yj1 and yj2 is then the extent of the grayscale deviation along the vertical axis.
Step S415 (not shown): Return to step S403 and, in the same way, obtain xi1 and xi2 from the X-axis projection sums.

xi1 and xi2 are computed from the X-axis projection sums F_xi defined above; the computation is the same as in steps S403 to S414 and is not repeated here. The rectangular region bounded by xi1, xi2, yj1, and yj2 is the grayscale-deviation region.

By determining the grayscale-deviation region in this way rather than checking each pixel in turn, the present invention processes the comparison faster.
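Steps S403-S414 amount to a run-length scan of the projection sums. The sketch below illustrates the idea for one axis; the function name and return convention are illustrative, and the patent's bottom-up scan order is simplified here to a forward scan.

```python
def deviation_span(sums, f0, k):
    """Find (start, end) indices of a run in which more than k consecutive
    sums exceed f0 (region begins, cf. S408), ended by more than k
    consecutive sums at or below f0 (region ends, cf. S414)."""
    start = end = None
    k1 = k2 = 0                            # K1 / K2 counters from the patent
    for idx, s in enumerate(sums):
        if start is None:
            if s > f0:
                k1 += 1
                if k1 > k:                 # deviation region confirmed
                    start = idx - k1 + 1   # first row of the run
            else:
                k1 = 0
        else:
            if s <= f0:
                k2 += 1
                if k2 > k:                 # deviation region has ended
                    end = idx - k2         # last row of the run
                    break
            else:
                k2 = 0
    if start is not None and end is None:
        end = len(sums) - 1                # region runs to the image edge
    return start, end

row_sums = [0, 0, 9, 9, 9, 9, 0, 0, 0, 0]
print(deviation_span(row_sums, f0=5, k=3))   # (2, 5): rows 2..5 changed
```

Running the same function over the per-column sums yields (xi1, xi2); together with (yj1, yj2) this bounds the rectangle to encode and send.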
Step S50: Encode the image of the region computed in step S40 as the grayscale-deviation region between the current frame and the previous frame, i.e., the rectangular region bounded by xi1, xi2, yj1, and yj2 (the region of the screen in which the grayscale deviates).

Step S60: Send the region image encoded in step S50 to the video receiving end wirelessly; the transmitted information includes the position information of the region image, e.g., the values xi1, xi2, yj1, and yj2.

Step S70: The video receiving end receives the encoded region image and decodes and displays it. After receiving and decoding the region image data, it integrates the image into the cached previous frame according to the position information and caches the integrated image for integration with the next frame.
When the video sending end plays a video file in a non-full-screen window, a full-window screenshot is needed only when the image outside the video-playback window changes (for example, when the user moves or resizes the playback window with a mouse or keyboard command, or activates a new window of another application or file). Conversely, when the user provides no input for a certain period, only the video-playback window needs to be captured. Further, the current frame of the video window is compared with the previous frame to compute the grayscale-deviation region.

As described above, when the full-screen window is captured, the grayscale-deviation region computed by comparing two adjacent frames is the video-playback window. When only the video window is captured, the computed grayscale-deviation region is at most the playback window (i.e., the video picture has changed completely); if the video content is a program such as a lecture (where the background usually stays fixed and only the presenter moves), the computed grayscale-deviation region can shrink further, saving additional bandwidth and speeding up encoding.
Accordingly, a step (not shown) precedes step S20: the video sending end monitors the interval between user input commands and, based on that interval, captures either the desktop window or the video-playback window. The position of the video-playback window can be confirmed from the difference results of desktop-window screenshots of adjacent consecutive frames (more than 2 frames; preferably 4): after several comparisons, the part formed by the changing regions becomes well defined and is taken as the location of the playback window. Once the position and size of the playback window have been confirmed by this method, subsequent captures can screenshot only the identified playback window and apply the determination of step S40 to it. Better still, while capturing the playback window, a desktop-window screenshot is taken again at intervals (e.g., every 5 seconds) to improve transmission accuracy (for example, a player's playback progress bar lying outside the video picture is treated as background) and to refresh the background image in time.

For example, while the user is browsing a video, if no mouse or keyboard command is detected within the expected interval (e.g., 30 seconds), the video sending end concludes that there is no user input and captures only the video-playback window; if a command is received within the expected interval, a full-window screenshot is taken instead. The capture method for the playback window is the same as in step S20 and is not repeated here.
The system for rapid image acquisition and transmission is described below. As shown in FIG. 5, it comprises a video transmitting end 51 and a video receiving end 52.
The video transmitting end 51 is configured to send a video sharing request to establish a wireless sharing connection with the video receiving end 52, and to capture and cache screenshots of the played video. The current frame image is compared with the previous frame image to compute the grayscale deviation regions, and each grayscale deviation region between the two frames is encoded and output wirelessly.
The video transmitting end 51 comprises a first sharing module 511, a screen capture module 512, a transmitting-end storage module 513, a filtering module 514, a comparison module 515, an encoding module 516, and a transmitting module 517.
The first sharing module 511 is configured to send the video sharing request and to decide, according to the feedback information, whether to establish the sharing connection.
The screen capture module 512 is connected to the first sharing module 511 and captures the window frame by frame according to the sharing trigger information.
The transmitting-end storage module 513 is connected to the screen capture module 512 and receives and caches the window screenshots. It caches at most two screenshots, the current frame and the previous frame; when it receives the next frame's window image, the earlier cached previous-frame screenshot is deleted according to the caching order.
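The two-slot caching behavior of module 513 can be sketched in a few lines. The class and method names are illustrative, not taken from the description; the point is only that pushing a new frame evicts the oldest of the two cached screenshots.

```python
from collections import deque

class FrameCache:
    """Two-slot cache holding only the current and previous screenshots;
    pushing a third frame drops the oldest (a sketch of module 513)."""
    def __init__(self):
        self._frames = deque(maxlen=2)  # deque evicts the oldest entry itself

    def push(self, frame):
        self._frames.append(frame)

    def pair(self):
        # Returns (previous, current) once two frames have been cached.
        return tuple(self._frames) if len(self._frames) == 2 else None
```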
The filtering module 514 is connected to the transmitting-end storage module 513 and retrieves the cached window screenshots and filters them to remove image noise.
The comparison module 515 is connected to the filtering module 514 and compares the current frame image with the previous frame's screenshot to compute the grayscale deviation regions.
The comparison module 515 comprises:
a pixel grayscale conversion unit, for converting the color image, represented as a matrix of RGB values, into a grayscale image;
a differential operation unit, connected to the pixel grayscale conversion unit, for performing a differential operation between the current frame image and the previous frame image and merging the results into a single array;
an integral operation unit, connected to the differential operation unit, for projecting the array onto the X axis and the Y axis respectively and computing the integral of each projection;
an integral result comparison unit, connected to the integral operation unit, for comparing the integral result against a preset integral threshold; if the integral result exceeds the threshold, a grayscale deviation is present, otherwise no grayscale deviation is present;
a deviation-region start judging unit, connected to the integral result comparison unit, for comparing the number of consecutive rows (or columns) showing grayscale deviation against a preset count threshold; if the number exceeds the count threshold, a grayscale deviation region has appeared, otherwise none has;
a deviation-region end judging unit, connected to the deviation-region start judging unit, for comparing the number of consecutive rows (or columns) showing grayscale deviation against the preset count threshold; if the number falls below the count threshold, the grayscale deviation region has ended, otherwise it has not;
a row/column coordinate accumulating unit, connected to the integral result comparison unit, for incrementing the coordinate of the current row (or column) by 1.
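The pipeline formed by the units above (grayscale conversion, differencing, projection, threshold comparison, region start/end scan) can be sketched as follows, shown for the Y axis only; the X-axis pass is symmetric. The function names, thresholds, and luminance weights are assumptions for illustration; the description fixes none of them.

```python
import numpy as np

def grayscale(img):
    # Standard luminance weights (an assumption; the description does
    # not fix a particular RGB-to-grayscale formula)
    return img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114

def deviation_bands(prev, curr, integral_thresh=1.0, run_thresh=2):
    """Difference the grayscale frames, project the difference onto the
    Y axis (sum each row), flag rows whose integral exceeds the threshold,
    and report runs of flagged rows long enough to form a region."""
    diff = np.abs(grayscale(curr) - grayscale(prev))
    flags = diff.sum(axis=1) > integral_thresh  # per-row deviation flag
    bands, start = [], None
    for y, flagged in enumerate(flags):
        if flagged and start is None:
            start = y                           # candidate region start
        elif not flagged and start is not None:
            if y - start >= run_thresh:         # long enough to count
                bands.append((start, y - 1))    # region ended on previous row
            start = None
    if start is not None and len(flags) - start >= run_thresh:
        bands.append((start, len(flags) - 1))   # region runs to the last row
    return bands
```

Running the same scan over column projections and intersecting the row bands with the column bands yields the rectangular grayscale deviation regions handed to the encoding module.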
The encoding module 516 is connected to the comparison module 515 and encodes the image corresponding to each grayscale deviation region computed by the comparison module 515 to form pixel data. The transmitting module 517 is connected to the encoding module 516 and outputs the encoded pixel data wirelessly.
A user command monitoring module (not shown) monitors the frequency with which the user inputs command signals via the mouse or keyboard and sends the monitoring result to the screen capture module 512. If the user inputs no command signal within a predetermined time, the screen capture module 512 captures only the video playback window.
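The decision the monitoring module feeds to module 512 reduces to a single comparison. This sketch is illustrative; the function name and return labels are assumptions, and only the 30-second interval comes from the description above.

```python
def capture_mode(seconds_since_last_input, idle_interval=30.0):
    """If no mouse/keyboard command arrived within the interval (the
    description suggests 30 s), capture only the video playback window;
    otherwise capture the whole desktop window."""
    if seconds_since_last_input >= idle_interval:
        return "video_window"
    return "desktop"
```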
The video receiving end 52 is configured to respond to the video sharing request of the video transmitting end 51 according to the sharing flag bit and, after the wireless sharing connection is established, to receive the encoded pixel data, decode it, integrate it, and play it.
The video receiving end 52 comprises a second sharing module 521, a sharing detection module 522, a receiving module 523, a decoding module 524, an integration module 525, and a receiving-end storage module 526.
The second sharing module 521 is coupled to the first sharing module 511 and receives the video sharing request and outputs detection information for the flag bit.
The sharing detection module 522 is connected to the second sharing module 521 and, according to the flag-bit detection information, detects the sharing state of the video receiving end 52 (that is, whether it is already sharing video with another transmitting end) and generates feedback information.
The second sharing module 521 is further configured to receive the feedback information and return it to the first sharing module 511.
The receiving module 523 is coupled to the transmitting module 517 and receives the encoded pixel data. The decoding module 524 is connected to the receiving module 523 and decodes the encoded pixel data.
The integration module 525 is connected to the decoding module 524 and to the receiving-end storage module 526 described below, and integrates the image corresponding to each decoded grayscale deviation region into the cached previous-frame image at its row and column pixel coordinates.
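The integration step can be sketched as a simple patch operation: copy the cached previous frame and overwrite the rectangle given by the region's row/column coordinates. The function name and coordinate convention (top-left corner) are illustrative assumptions.

```python
import numpy as np

def integrate_region(cached_frame, region, top, left):
    """Patch one decoded deviation region back into the cached previous
    frame at its row/column pixel coordinates (a sketch of module 525)."""
    h, w = region.shape[:2]
    out = cached_frame.copy()           # keep the cached frame intact
    out[top:top + h, left:left + w] = region
    return out
```

Applying this once per received deviation region reconstructs the current frame from the previous one plus only the changed rectangles, which is what keeps the transmitted data small.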
The receiving-end storage module 526 is connected to the integration module 525 and caches the images integrated by the integration module 525.
A display module (not shown) is connected to the integration module 525 and displays the integrated video images.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201210024336.2A CN103248946B (en) | 2012-02-03 | 2012-02-03 | The method and system that a kind of video image quickly transmits |
| CN201210024336.2 | 2012-02-03 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2013113191A1 true WO2013113191A1 (en) | 2013-08-08 |
Family
ID=48904392
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2012/073736 Ceased WO2013113191A1 (en) | 2012-02-03 | 2012-04-10 | Method and system for rapid video image transmission |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN103248946B (en) |
| WO (1) | WO2013113191A1 (en) |
Families Citing this family (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104580094A (en) * | 2013-10-21 | 2015-04-29 | 中兴通讯股份有限公司 | Image transmission method and device |
| JP2017529716A (en) * | 2014-07-30 | 2017-10-05 | エントリクス カンパニー、リミテッド | Cloud streaming service system, cloud streaming service method using still image compression technique, and apparatus therefor |
| CN104618676A (en) * | 2015-02-04 | 2015-05-13 | 深圳市迈昂科技有限公司 | Wireless video transmission method |
| CN106354247A (en) * | 2015-07-17 | 2017-01-25 | 上海乐相科技有限公司 | Display control method and device for headset intelligent glasses |
| CN106803863A (en) * | 2016-12-20 | 2017-06-06 | 深圳市金立通信设备有限公司 | A kind of image sharing method and terminal |
| CN109218748B (en) * | 2017-06-30 | 2020-11-27 | 京东方科技集团股份有限公司 | Video transmission method, device, and computer-readable storage medium |
| CN109218731B (en) * | 2017-06-30 | 2021-06-01 | 腾讯科技(深圳)有限公司 | Screen projection method, device and system of mobile equipment |
| CN107613370B (en) * | 2017-10-27 | 2020-08-14 | 烟台北方星空自控科技有限公司 | Method for realizing local area network screen sharing by adopting screen capture image |
| CN108153573A (en) * | 2017-12-26 | 2018-06-12 | 合肥中科云巢科技有限公司 | Cloud desktop picture update method and virtual machine |
| CN111245879A (en) * | 2018-11-29 | 2020-06-05 | 深信服科技股份有限公司 | Desktop content transmission method and system of virtual desktop and related components |
| CN111625311B (en) * | 2020-05-18 | 2023-05-26 | Oppo(重庆)智能科技有限公司 | Control method, control device, electronic equipment and storage medium |
| CN114020486B (en) * | 2021-10-08 | 2024-06-21 | 中国联合网络通信集团有限公司 | Data generation method, device, equipment and storage medium |
| CN114189753A (en) * | 2021-11-26 | 2022-03-15 | 安徽创世科技股份有限公司 | A method and device for sharing video selections |
| CN116405606A (en) * | 2023-04-14 | 2023-07-07 | 深圳市瑞云科技股份有限公司 | A Method of Improving the Efficiency of Image Network Transmission |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101291488A (en) * | 2008-03-24 | 2008-10-22 | 中兴通讯股份有限公司 | Screen printing method on mobile terminal |
| CN101808096A (en) * | 2010-03-22 | 2010-08-18 | 北京大用科技有限责任公司 | Method for sharing and controlling large screen among local area networks in different positions |
| CN101820416A (en) * | 2010-02-24 | 2010-09-01 | 上海引跑信息科技有限公司 | Processing method of high-speed shared desktop in netmeeting system |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20060161624A1 (en) * | 2001-04-13 | 2006-07-20 | Elaine Montgomery | Methods and apparatuses for dynamically sharing a portion of a display for application based screen sampling |
| US20030189980A1 (en) * | 2001-07-02 | 2003-10-09 | Moonlight Cordless Ltd. | Method and apparatus for motion estimation between video frames |
| CN101835043B (en) * | 2010-03-23 | 2013-10-09 | 熔点网讯(北京)科技有限公司 | Adaptive bandwidth desktop sharing method based on block encoding |
- 2012-02-03: CN application CN201210024336.2A, patent CN103248946B (Active)
- 2012-04-10: WO application PCT/CN2012/073736, patent WO2013113191A1 (Ceased)
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20210194994A1 (en) * | 2017-11-20 | 2021-06-24 | ASG Technologies Group, Inc. dba ASG Technologies | Publication of Applications Using Server-Side Virtual Screen Change Capture |
| US11582284B2 (en) * | 2017-11-20 | 2023-02-14 | Asg Technologies Group, Inc. | Optimization of publication of an application to a web browser |
| US12393394B2 (en) | 2017-11-20 | 2025-08-19 | Rocket Software Technologies, Inc. | Systems and method for publication of applications using server-side virtual screen change capture |
Also Published As
| Publication number | Publication date |
|---|---|
| CN103248946A (en) | 2013-08-14 |
| CN103248946B (en) | 2018-01-30 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12867017; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 12867017; Country of ref document: EP; Kind code of ref document: A1 |
| | 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 17/04/2015) |