Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The present application provides a video parsing method, which can be applied to a video parsing server in a video parsing system as shown in fig. 1. Referring to fig. 1, the video parsing system may include a video parsing server, a content management platform, a scheduling system, and a parent node server. In the present application, the number of servers is not limited. For example, the video parsing server may be a single server or a server cluster in actual application. Similarly, the parent node server may be a single server or a server cluster in practical applications.
In this application, the content management platform may store the latest videos corresponding to the prefetching service. Specifically, the client may send a prefetch instruction to the content management platform for a latest video that needs to be prefetched, where the prefetch instruction may carry an identifier of that latest video. After receiving the prefetch instruction, the content management platform may identify the one or more identifiers of latest videos carried therein and download the corresponding latest videos from the source station server of the client.
The scheduling system can store parent node information, which can be used to represent the storage relationship between video files and parent node servers. Specifically, video files other than the latest videos may be stored in the parent node servers. A storage list of video files can be maintained in the scheduling system, in which the identifier of a parent node server serves as the index key and the identifiers of the video files stored in that parent node server serve as the index result. Therefore, the video files stored in a parent node server can be queried through the identifier of that parent node server. Conversely, through the identifier of a video file, it can also be queried which parent node server or servers store that video file.
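The storage list described above can be sketched as a simple forward index with a derived inverse lookup. This is a minimal illustrative sketch; the server and video identifiers are hypothetical and not taken from the application.

```python
# Hypothetical storage list maintained by the scheduling system:
# parent node server identifier -> identifiers of video files it stores.
storage_list = {
    "parent-node-1": {"video-a", "video-b"},
    "parent-node-2": {"video-b", "video-c"},
}

def videos_on(parent_id):
    """Forward query: which video files are stored on a parent node server."""
    return storage_list.get(parent_id, set())

def parents_holding(video_id):
    """Inverse query: which parent node server(s) store a given video file."""
    return {p for p, vids in storage_list.items() if video_id in vids}
```

With this shape, both directions of the query described in the text (server → files, file → servers) are answered from the same storage list.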
Of course, in practical applications, the video parsing system may also include only the video parsing server, the scheduling system, and the parent node server. In that case, the parent node server can centrally store both the video files for which the prefetching service is enabled and those for which it is not. For video files with the prefetching service enabled, the parent node server can obtain the latest videos from the content management platform, so that the latest videos and the other videos are all stored in the parent node server. Subsequently, the video parsing server only needs to acquire the video information of unparsed target videos from the parent node server.
Referring to fig. 2, the video parsing method applied in the video parsing server may include the following steps.
S1: acquiring parent node information, where the parent node information is used to represent a storage relationship between video files and parent node servers; and determining, based on the parent node information, the unparsed target video stored in the parent node server.
In this embodiment, the video parsing server may obtain video information of video files from the parent node server. A video file may be one for which the prefetching service is enabled or one for which it is not. Specifically, the video parsing server may first obtain the parent node information from the scheduling system, where the parent node information may be the storage list in the scheduling system. Through the storage list, the video parsing server can obtain the identifiers of the video files currently stored in each parent node server.
In this embodiment, the video parsing server may generate a file list of the video files stored in the parent node server according to the storage relationship, represented by the parent node information, between the video files and the parent node server, where the file list may include the identifier of each video file. Before parsing, the video parsing server can determine which video files in the file list have not yet been parsed and subsequently parse only those files, thereby avoiding repeated parsing.
Specifically, in the video parsing server, for video files whose parsing has been completed, the identifiers of these video files and the corresponding parsing information may be stored in an associated manner. The identifier of a parsed video file may be used as a key, and the corresponding parsing information as a value, so that the parsed result is stored as a key-value pair. In this way, for each video file in the file list generated by the video parsing server, it may be queried in turn whether parsing information associated with that video file exists in the video parsing server. Finally, the video files without associated parsing information may be taken as the unparsed target videos.
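The lookup just described can be sketched as follows. The stored parsing results and identifiers are hypothetical; the point is only that any file in the list without an associated key-value entry is treated as an unparsed target video.

```python
# Hypothetical key-value store of completed parsing results:
# video identifier (key) -> parsing information (value).
parsed_store = {"video-a": {"bitrate_kbps": 2500}}

def find_unparsed(file_list, parsed_store):
    """Return the identifiers in file_list with no associated parsing information."""
    return [vid for vid in file_list if vid not in parsed_store]
```

Files returned by this query become the target videos to be parsed next, so no file is parsed twice.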
Of course, in practical applications, there are many ways to mark whether a video file has been parsed. For example, in the parent node information stored by the scheduling system, a parsing flag may be added to each video file, where the parsing flag characterizes whether the video file has been parsed. The video parsing server can notify the scheduling system whenever it completes the parsing of a video file, so that the scheduling system can update the parsing flag of that video file. Subsequently, after the video parsing server acquires the parent node information, the unparsed target videos can be determined directly from the parent node information.
Referring to fig. 3, in one embodiment, the latest videos for which the prefetch service is enabled may be stored centrally on the content management platform. The video parsing server can obtain video information of a latest video from the content management platform, where the content management platform can respond to a prefetch instruction of a client and download the latest video pointed to by the prefetch instruction from the source station server of the client.
In this embodiment, the video parsing server may obtain the video information of the latest video from the content management platform at a fixed time period or at a time point agreed upon with the client. The video information acquired by the video parsing server is not the entire data of the video but partial data of it. The video information may include the data size, encoding format, playing duration, and so on of the latest video, or other information calculated from these items. In practical applications, different video information can be obtained from the content management platform through requests of different formats.
In particular, in one embodiment, the video parsing server may send a header request pointing to the latest video to the content management platform; this may be, for example, a HEAD request in HTTP. Unlike a GET request in HTTP, a HEAD request does not retrieve the actual data body, but only some description information about it. Specifically, after receiving the header request sent by the video parsing server, the content management platform may identify the identifier of the latest video carried therein and feed response information of the latest video back to the video parsing server for the header request.
In this embodiment, the response information may carry multiple items of description information of the latest video. For example, the description information may include the compression format, data size, data type, last buffering time, and the like of the latest video. Each item of description information can be assigned a corresponding field. For example, the data size can be identified by the value of the Content-Length field. In this way, by identifying the Content-Length field in the response information, the video parsing server can take its value as the data size of the latest video.
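Reading the data size out of the HEAD response can be sketched as below. The response headers are represented as a plain dictionary (the header values shown are made up); HTTP header names are matched case-insensitively, as the protocol requires.

```python
def data_size_from_head_response(headers):
    """Read the data size of a video from the Content-Length field of the
    response information fed back for a HEAD request."""
    for name, value in headers.items():
        if name.lower() == "content-length":
            return int(value)
    return None  # no Content-Length field present
```

The same function applies later when the header request is sent to a parent node server instead of the content management platform, since the response shape is the same.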
In one embodiment, in addition to obtaining the data size of the latest video, some other information of the latest video (e.g., encoding format, playing duration, etc.) needs to be obtained. In practical applications, such information is usually recorded in the header field and/or the trailer field of the latest video. Therefore, the video parsing server can obtain the header field and/or the trailer field of the latest video from the content management platform. Specifically, the video parsing server may acquire partial data of the latest video by sending a range data acquisition request to the content management platform. The range data acquisition request may be a Range request, in which the position of the partial data to be acquired within the entire data of the latest video is determined by a range parameter. For example, the range parameter may be expressed as Range: bytes=0-10, indicating that the first 11 bytes of data need to be retrieved from the entire data of the latest video.
In practical applications, the ways of acquiring the header field and the trailer field differ slightly. For the header field, the data start position of the latest video may be taken as the start position of the data to be acquired. Generally, the offset of the data start position is 0, although in some application scenarios it may be another known value. After the data start position is determined, the end position of the data to be acquired can be calculated from the length of the header field. For a fixed-format video, the length of the header field is usually also fixed and may be represented by a first preset data length. For example, if the header field is typically 100 KB long, the first preset data length may be 100 KB. The end position of the data to be acquired can thus be determined from the data start position and the first preset data length. In this way, the start position and the end position of the data to be acquired define the range parameter, from which a range data acquisition request representing the data to be acquired can be constructed.
For the trailer field, the data end position of the latest video may be used as the end position of the data to be acquired. The data end position may be determined from the data size of the latest video acquired as described above. For example, if the data size of the latest video is 100000 bytes, the data end position is at offset 99999 (with the data start position at offset 0). After the data end position is determined, the start position of the data to be acquired can be computed from a second preset data length representing the length of the trailer field. Subsequently, a range data acquisition request representing the data to be acquired can be constructed from the range parameter defined by the start and end positions of the data to be acquired.
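The two constructions above can be sketched as follows, assuming the standard HTTP Range syntax with inclusive byte offsets; the parameter names are illustrative, not from the application.

```python
def header_range(start_offset, first_preset_len):
    """Range parameter for the header field: from the data start position,
    spanning the first preset data length (byte offsets are inclusive)."""
    return f"bytes={start_offset}-{start_offset + first_preset_len - 1}"

def trailer_range(data_size, second_preset_len):
    """Range parameter for the trailer field: the last second_preset_len
    bytes, ending at the data end position (data_size - 1)."""
    start = max(data_size - second_preset_len, 0)
    return f"bytes={start}-{data_size - 1}"
```

Note that the trailer construction depends on the data size obtained earlier from the header request, which is why the HEAD request is issued first.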
In practical applications, which fields to obtain depends on the actual situation: only the header field, only the trailer field, or both may be acquired, mainly according to the format of the video.
In this embodiment, after the video parsing server sends the range data acquisition request pointing to the latest video to the content management platform, it may receive the range data fed back by the content management platform in response to the request. By parsing the range data, the other information of the latest video, such as its playing duration and encoding format, can be obtained.
S3: acquiring the video information of the target video from the corresponding parent node server, and parsing the video information of the target video to generate parsing information of the target video.
In this embodiment, after the unparsed target video is determined, the target parent node server storing the target video may be determined according to the storage relationship represented by the parent node information. Subsequently, the video information of the corresponding target video can be acquired from the target parent node server. The way of acquiring this video information is consistent with the way of acquiring the video information of the latest video from the content management platform: different video information can be acquired through the header request and the range data acquisition request. Specifically, the video parsing server may send a header request pointing to the target video to the target parent node server and receive the response information fed back for that request, so that by identifying the Content-Length field in the response information, its value may be taken as the data size of the target video.
In addition, the video parsing server may further send a range data acquisition request pointing to the target video to the target parent node server and receive the range data fed back for that request. The range data is used at least to represent the playing duration of the target video and can also represent the encoding mode of the target video, among other things.
The range parameter carried in the range data acquisition request sent to the target parent node server may also be generated in the manner described above, and will not be described herein again.
In this embodiment, after the video information of the latest video and the video information of the target video are acquired, the parsing information of the latest video and the parsing information of the target video may be generated respectively. The playing bitrate of a video has the most significant influence on resource allocation in the CDN system, so the parsing information at least needs to reflect the playing bitrate of the video.
Specifically, from the video information of the latest video and the video information of the target video, the data size and playing duration of the latest video and of the target video may be determined respectively. The data size can be obtained through the header request, and the playing duration can be obtained from the range data corresponding to the range data acquisition request. Subsequently, the playing bitrate of the latest video can be determined from its data size and playing duration, and likewise the playing bitrate of the target video from its data size and playing duration. The playing bitrate may be the ratio of the data size to the playing duration. In this way, the video parsing server may use the playing bitrate of the latest video as parsing information of the latest video, and the playing bitrate of the target video as parsing information of the target video.
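The bitrate computation is simply the ratio described above; the sketch below expresses it in kilobits per second, which is an illustrative unit choice (the application only specifies the ratio of data size to playing duration).

```python
def playing_bitrate_kbps(data_size_bytes, playing_duration_s):
    """Playing bitrate as the ratio of data size to playing duration,
    expressed in kilobits per second (8 bits per byte, 1000 bits per kilobit)."""
    return data_size_bytes * 8 / 1000 / playing_duration_s
```

For example, a video of 1,000,000 bytes that plays for 8 seconds has a playing bitrate of 1000 kbps under this convention.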
Of course, other parsing information such as the encoding mode and the video version can also be obtained by parsing other aspects of the video information, which is not limited herein.
S5: storing the generated parsing information of the target video in the video parsing server.
In this embodiment, after generating the parsing information of the latest video and of the target video, the video parsing server may identify the video identifiers of the latest video and the target video respectively, and store the identified video identifiers and the corresponding parsing information in the video parsing server in an associated manner. Specifically, the video identifier of the latest video or the target video may be used as a key and the corresponding parsing information as a value, so that the pair can be stored in the video parsing server as a key-value pair. In practical applications, the video identifier may be the URL (Uniform Resource Locator) of the latest video or the target video, or a character string computed from the URL by a hash algorithm. The video identifier uniquely represents the corresponding latest video or target video, so that unique parsing information can be queried according to the video identifier.
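The key-value storage by hashed URL can be sketched as follows. MD5 is an illustrative choice here (the application only requires some hash algorithm over the URL), and the URL and parsing information shown are hypothetical.

```python
import hashlib

def video_key(url):
    """Derive a video identifier from its URL via a hash algorithm.
    The result is deterministic, so the same URL always maps to the same key."""
    return hashlib.md5(url.encode("utf-8")).hexdigest()

# Store parsing information keyed by the derived video identifier.
parsing_store = {}
parsing_store[video_key("http://cdn.example.com/latest.mp4")] = {"bitrate_kbps": 2500}
```

Querying with the same URL reproduces the same key, which is what allows the store to double as the "already parsed?" check described in the next paragraph.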
In this embodiment, the video identifiers and parsing information stored in the video parsing server may serve as the basis for determining whether the parsing of a video file has been completed. If the corresponding parsing information can be found in the video parsing server according to a video identifier, the video corresponding to that identifier has been parsed; if not, the video has not yet been parsed.
In this embodiment, the parsing information stored in the video parsing server may subsequently be accessed by other servers in the CDN system, so that resource allocation in the CDN system can be adjusted and optimized according to the parsing information. For example, the video parsing server may store the playing bitrate of each video, and videos with different bitrates have different requirements for transmission bandwidth. Therefore, when providing video resources to users, the CDN system may access the video bitrates stored in the video parsing server and allocate higher bandwidth to high-bitrate videos, so as to create a good video viewing experience for users.
Referring to fig. 4, the present application further provides a video parsing server, including:
the target video determining unit, configured to acquire parent node information from the scheduling system, where the parent node information is used to represent the storage relationship between video files and parent node servers, and to determine, based on the parent node information, an unparsed target video stored in a parent node server;
the video parsing unit, configured to acquire the video information of the target video from the corresponding parent node server and parse the video information of the target video to generate parsing information of the target video;
and the parsing information storage unit, configured to store the generated parsing information of the target video in the video parsing server.
In one embodiment, the video parsing server further comprises:
the system comprises a latest video synchronization unit, a content management platform and a client, wherein the latest video synchronization unit is used for acquiring video information of a latest video from the content management platform, and the content management platform is used for responding to a prefetching instruction of a client and downloading the latest video pointed by the prefetching instruction from a source station server of the client;
correspondingly, the video parsing unit is further configured to parse the video information of the latest video to generate parsing information of the latest video;
the analysis information storage unit is further configured to store the generated analysis information of the latest video in the video analysis server.
In one embodiment, the latest video synchronization unit includes:
a header request sending module, configured to send a header request pointing to the latest video to the content management platform, and receive response information fed back by the content management platform for the header request;
and the data size identification module, configured to identify the Content-Length field in the response information and take the value of the Content-Length field as the data size of the latest video.
In one embodiment, the latest video synchronization unit further comprises:
the range data acquisition module is used for sending a range data acquisition request pointing to the latest video to the content management platform and receiving range data fed back by the content management platform according to the range data acquisition request; wherein the range data is at least used for representing the playing time length of the latest video.
Referring to fig. 5, the present application further provides a video parsing server, where the video parsing server includes a memory and a processor, where the memory is used to store a computer program, and when the computer program is executed by the processor, the video parsing server can implement the video parsing method described above.
The present application further provides a video parsing system, the system includes a video parsing server, a scheduling system and a parent node server, wherein:
the scheduling system is used for storing parent node information, where the parent node information is used to represent the storage relationship between video files and parent node servers;
the parent node server is used for storing the video files;
the video parsing server is used for acquiring video information of an unparsed target video from the parent node server and parsing the video information of the target video to generate and store parsing information of the target video.
In practical applications, the video parsing system may further include a content management platform as shown in fig. 1, where the content management platform may store the latest videos for which the prefetching service is enabled. Of course, the latest videos for which the prefetching service is enabled may also be obtained in advance from the content management platform by the parent node server, so that the video parsing server only needs to communicate with the parent node server to obtain them.
Referring to fig. 6, in the present application, the technical solution in the above embodiment can be applied to the computer terminal 10 shown in fig. 6. The computer terminal 10 may include one or more (only one shown) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 104 for storing data, and a transmission module 106 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 6 is only an illustration and is not intended to limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in FIG. 6, or have a different configuration than shown in FIG. 6.
The memory 104 may be used to store software programs and modules of application software, and the processor 102 executes various functional applications and data processing by executing the software programs and modules stored in the memory 104. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
Therefore, according to the above technical solution, videos are parsed by an independent video parsing server, which reduces the pressure on the edge node servers and avoids affecting the user experience. The video files in the network can be stored in a parent node server of the CDN system, and may include the latest video files for which the prefetch service is enabled as well as other video files. Subsequently, the video parsing server may obtain the stored video files from the parent node server. Some of the video files stored in the parent node server may already have been parsed, so the video parsing server needs to identify the unparsed target videos stored there. For each target video, parsing can then be performed on the acquired video information to generate its parsing information. The generated parsing information can be stored in the video parsing server and subsequently used as a reference basis for resource allocation and video monitoring analysis in the CDN system. Because the video parsing server first determines which video files have not been parsed before any parsing is performed, repeated parsing is avoided and the efficiency of video parsing is greatly improved.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.