CN112218102A - Video content package making method, client and system - Google Patents
- Publication number
- Publication number: CN112218102A; Application number: CN202010890993.XA
- Authority
- CN
- China
- Prior art keywords
- video
- information
- text
- splitting
- plot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
Abstract
The invention provides a video content package making method, a client and a system, in the technical field of the Internet. The video content package making method comprises the following steps: collecting set video plot information; splitting the video plot to obtain a plurality of pieces of plot segment information; and sending out video segment production invitation information on a network platform for each piece of plot segment information. By sending out video segment production invitation information on the network platform after the video plot is split, the invention realizes contract-based ("package") production of video content and meets users' demand for producing movie and television content with rich plots.
Description
Technical Field
The invention relates to the technical field of Internet.
Background
Among software systems on the market, one kind has always attracted wide attention, and for it people have a unique interest and demand: the social system. According to incomplete statistics, there are thousands of SNS (Social Networking Service) products in China, and the main types of SNS systems are: campus-life systems, whose users are mainly students; professional business systems, whose users are mainly white-collar workers; friend-making (dating) systems, whose users are mainly young adults of marriageable age; and open-entry systems, whose threshold for users is low and which make communication convenient. At present, video-based online social systems and their methods are popular with people of all ages, and video social platforms such as Douyin (TikTok), Xigua Video and Huoshan Video have become common social tools in daily life. Through a video social platform, a user can watch all kinds of objects and events recorded through the lenses of people in different places.
On the other hand, the micro-movies emerging on the network have greatly enriched people's entertainment life, but because of the professional skill that micro-movie production requires and the large amount of manpower and material resources that shooting consumes, it is still difficult for an individual to independently produce a movie with a rich plot. For example, when a user wants to complete a documentary film about a trip through Europe, the user may need to carry shooting equipment to real scenes in several European countries, and some shots may last only tens of seconds, yet those tens of seconds consume a great deal of manpower, material and financial resources.
How to make full use of the camera footage of the large number of users widely distributed across the current network platforms, so as to meet different users' demand for producing movie and television content with rich plots, is a problem that remains to be solved.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a video content package making method, a client and a system. According to the invention, after the video plot is split, video segment production invitation information is sent out on the network platform, so that contract-based ("package") production of video content is realized, users' demand for producing rich movie and television content is met, and the user experience is improved.
In order to achieve the above object, the present invention provides the following technical solutions.
A video content distribution package production method comprises the following steps: collecting set video plot information; splitting the video plots to obtain a plurality of plot fragment information; and sending video segment making invitation information on the network platform according to each episode information.
Further, the method also comprises the step of,
collecting the information of the contracted users receiving the video segment making invitation information;
acquiring the information of the video segments produced by the contract users,
and combining the video segments according to the corresponding plot segments to form a composite video.
Further, the method also comprises the step of,
analyzing the episode information to determine whether the episode contains content corresponding to a future time;
in the case where it is determined that the content corresponding to the future time is included, the future video segment production invitation information is triggered.
Preferably, when the content corresponding to the future time includes geographic position information, a search is performed based on the geographic position information, and the future video segment creation invitation information is transmitted to the target object that meets the condition of the geographic position information.
Further, the step of splitting the video plot comprises,
analyzing and learning the existing videos in the network platform or the associated network platform by using a machine learning model;
obtaining plot splitting rules through analysis and learning;
and splitting the video plot according to the plot splitting rule.
Further, the step of splitting the video plot is as follows,
acquiring the character content of video plot information, and metering the word number of the character content;
dividing the word number into N sections on average, wherein N is an integer greater than or equal to 2, and each section corresponds to an episode;
the segment number corresponding to each episode segment is recorded.
Further, the step of splitting the video plot is as follows,
acquiring text information of video plot information;
performing semantic analysis on the text, and dividing the text into a plurality of text blocks according to semantic information;
each text block corresponds to one episode, and the text block corresponding to each episode and the position in the text are recorded.
Preferably, the dividing of the plurality of text blocks based on the splitting method includes the following steps,
dividing a text into 2 text blocks according to semantic information, acquiring a keyword vocabulary and word frequency information of each text block, and constructing a text feature vector;
comparing the difference degrees of the text feature vectors of the 2 text blocks;
if the difference degree reaches a threshold value, representing that the first-level splitting is successful, performing second-level splitting on the 2 text blocks, and splitting each text block into 2 second-level text blocks;
acquiring a keyword vocabulary and word frequency information of each secondary text block, constructing a secondary text feature vector, and comparing the difference degrees of the text feature vectors of 2 adjacent secondary text blocks;
if the difference degree reaches the threshold value, representing that the second-level splitting is successful, performing third-level splitting on the second-level text block; and analogizing in turn until the difference degree of the corresponding N-level text blocks is smaller than a threshold value, canceling current-level splitting, and terminating splitting.
The invention also provides a video social contact client, which comprises the following structure:
the information acquisition module is used for acquiring the set video plot information;
the information processing module is used for splitting the video plots to obtain a plurality of plot fragment information;
and the video publishing module is used for sending out video segment making invitation information on the network platform according to each episode information.
The invention also provides a video social contact system, which comprises a user client and a server,
the user client comprises an information acquisition module which is used for acquiring set video plot information;
the server side comprises the following structures:
the information processing module is used for splitting the video plots to obtain a plurality of plot fragment information;
and the video publishing module is used for sending out video segment making invitation information on the network platform according to each episode information.
Thanks to the above technical scheme, compared with the prior art, the invention has the following advantages and positive effects (taking the method as an example, without limitation): after the video plot is split, video segment production invitation information is sent out on the network platform, realizing contract-based ("package") production of video content, meeting users' demand for producing rich movie and television content, and improving the user experience.
Drawings
Fig. 1 is a flowchart of a video content package production method according to an embodiment of the present invention.
Fig. 2 to fig. 7 are diagrams illustrating an operation example of video packetization according to an embodiment of the present invention.
Fig. 8 is a block diagram of a client according to an embodiment of the present invention.
Fig. 9 is a schematic structural diagram of a system according to an embodiment of the present invention.
The numbers in the figures are as follows:
a user 100;
an intelligent terminal 200, a display structure 210, document editing windows 220, 230;
the system comprises a client 300, an information acquisition module 310, an information processing module 320 and a video publishing module 330;
system 400, user client 410, server 420.
Detailed Description
The video content package production method, client and system provided by the invention are further described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that technical features or combinations of technical features described in the following embodiments should not be considered as being isolated, and they may be combined with each other to achieve better technical effects. In the drawings of the embodiments described below, the same reference numerals appearing in the respective drawings denote the same features or components, and may be applied to different embodiments.
It should be noted that the structures, proportions and sizes shown in the drawings and described in the specification are only intended to aid understanding and reading of the disclosure; they are not intended to limit the scope of the invention, which is defined by the claims, and any modification of structure, change of proportion or adjustment of size that remains within the claims falls within the scope of the invention. The scope of the preferred embodiments of the present invention also includes implementations in which functions are executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art.
Examples
Referring to fig. 1, a method for making a video content package includes the following steps:
and S100, collecting the set video plot information.
The contracting party sets the video plot information on the network platform, and the platform collects this video plot information. The network platform may be any of various live-broadcast platforms, short-video platforms or video playing platforms.
The contracting party is the party that sends video segment production invitation information to contracted parties, and may be a natural person, a user or an organization. For example, the contracting party may be the individual user Zhang San registered on the network platform; or a registered organization such as a movie studio; or, without limitation, the network platform itself — for example, a video live-broadcast platform that sends plot information of a network movie to be produced to its audience, in which case the live-broadcast platform is the contracting party.
S200, splitting the video plots to obtain a plurality of plot fragment information.
In this embodiment, the video episode may be split based on a machine learning model (algorithm), which may specifically include the following steps:
analyzing and learning the existing videos in the network platform or the associated network platform by using a machine learning model;
obtaining plot splitting rules through analysis and learning;
and splitting the video plot according to the plot splitting rule.
Preferably, the machine learning model is a deep learning model. Deep learning is a machine learning method that performs representation learning on data; it can simulate the mechanism of the human brain to interpret data, learn from and analyze images, sounds and text, and derive rules or patterns from the data. A deep learning model may generally comprise an image acquisition module, a sound acquisition module, an image recognition module, a speech recognition module, a machine translation module, and the like.
The existing large amount of videos are learned through the deep learning model, the plot setting rule of the videos can be obtained, and then the plot splitting rule is set according to the plot setting rule.
For example, suppose it is found through learning that the plot transitions of most videos are associated with scenes, with 80% of plot transitions accompanied by a scene transition; the plot splitting rule may then be set as: segment according to scene information in the video. For example and without limitation, if a video contains 4 scenes — along its time axis, indoor scene one, a sea scene, a city night scene and indoor scene two — the video may be split into 4 plot segments: an indoor-scene-one segment, a sea-scene segment, a city-night-scene segment and an indoor-scene-two segment.
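The scene-based splitting rule above can be sketched as follows. This is a minimal illustration only: the per-shot scene labels and timestamps are assumed to come from an upstream scene-recognition step, which the sketch does not implement.

```python
def split_by_scene(scene_timeline):
    """Group a video's time-ordered scene labels into contiguous plot segments.

    scene_timeline: list of (start_seconds, scene_label) tuples, ordered by time.
    Returns one (scene_label, start_seconds) segment per scene change.
    """
    segments = []
    for start, label in scene_timeline:
        # Open a new segment whenever the scene label changes.
        if not segments or segments[-1][0] != label:
            segments.append((label, start))
    return segments

# The four-scene example from the text: indoor one -> sea -> city night -> indoor two
timeline = [(0, "indoor_1"), (42, "sea"), (95, "city_night"), (130, "indoor_2")]
print(split_by_scene(timeline))
# → [('indoor_1', 0), ('sea', 42), ('city_night', 95), ('indoor_2', 130)]
```

Consecutive shots sharing a label are merged into one segment, matching the rule that a plot segment spans a whole scene.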
In another implementation of this embodiment, the video episode may be further split based on the set word count of the text content of the episode information. Specifically, the method comprises the following steps:
acquiring the character content of video plot information, and metering the word number of the character content;
dividing the word number into N sections on average, wherein N is an integer greater than or equal to 2, and each section corresponds to an episode;
the segment number corresponding to each episode segment is recorded.
This method is particularly suitable when the plot is simple and the text content corresponding to the plot is relatively regular. For example and without limitation, when the user sets the video plot information in the form of poetry (or directly uses an existing classical poem as the video plot), plot splitting in the above manner is appropriate.
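The word-count splitting steps can be sketched as below. Note the assumption: the sketch splits on whitespace-delimited words; for Chinese text, a character count or a word segmenter would be used instead.

```python
def split_by_word_count(text, n):
    """Split plot text into n roughly equal word-count sections,
    recording the segment number for each section."""
    words = text.split()
    size, rem = divmod(len(words), n)
    chunks, start = [], 0
    for i in range(n):
        # Early sections absorb the remainder so lengths differ by at most one word.
        end = start + size + (1 if i < rem else 0)
        chunks.append((i + 1, " ".join(words[start:end])))  # (segment number, text)
        start = end
    return chunks
```

For example, a five-word plot split with n=2 yields a three-word first segment and a two-word second segment, each tagged with its segment number.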
In another implementation of this embodiment, the aforementioned video episodes may be further split based on semantic analysis. The method specifically comprises the following steps:
acquiring text information of video plot information;
performing semantic analysis on the text, and dividing the text into a plurality of text blocks according to semantic information;
each text block corresponds to one episode, and the text block corresponding to each episode and the position in the text are recorded.
Preferably, the dividing of the plurality of text blocks based on the splitting method includes the following steps:
according to the semantic information, dividing the text into 2 text blocks, acquiring a keyword vocabulary and word frequency information of each text block, and constructing a text feature vector. By way of example, 2 text blocks are numbered as text block a and text block B. When text division is performed, random division may be performed based on a period or a paragraph symbol, or division may be performed based on a sentence structure of the text — for example, a period is used as a division point, the first 100 sentences are one text block, and the second text block is located after the first text block.
And comparing the difference degrees of the text feature vectors of the text block A and the text block B.
If the difference degree reaches the threshold value, the first-level splitting is successful, the second-level splitting is carried out on the 2 text blocks, and each text block is split into 2 second-level text blocks. For example, and not by way of limitation, the threshold is set to 60%, when the difference between the text chunk a and the text chunk B is greater than 60%, the primary splitting is successful, and the secondary splitting is continued, where the text chunk a is split into a secondary text chunk a1 and a secondary text chunk a2, and the text chunk B is split into a secondary text chunk B1 and a secondary text chunk B2.
And acquiring a keyword vocabulary and word frequency information of each secondary text block, constructing a secondary text feature vector, and comparing the difference degrees of the text feature vectors of the adjacent 2 secondary text blocks. And respectively comparing the text feature vectors of the secondary text block A1 and the secondary text block A2, and the text feature vectors of the secondary text block B1 and the secondary text block B2 to obtain corresponding difference degrees.
If the difference degree reaches the threshold, the second-level splitting is successful, and third-level splitting is performed on the successfully split second-level text blocks; and so on, until the difference degree of a pair of N-level text blocks is smaller than the threshold, in which case the current-level split is cancelled and splitting terminates. When the difference degree of the text feature vectors of secondary text blocks A1 and A2 is greater than 60%, the splitting is successful and A1 and A2 are each split further — A1 into A11 and A12, A2 into A21 and A22 — after which text feature vectors are constructed and compared for each pair.
When the difference degree of the text feature vectors of secondary text blocks B1 and B2 is less than 60%, the secondary splitting fails and is cancelled; that is, text block B is restored (and called a stable text block) and is not split further.
And so on, until the obtained text blocks can no longer be split successfully and are all stable text blocks, at which point splitting is finished. The number of stable text blocks obtained is the number of split text blocks. For example, if the splitting of A11 and A12 fails, A1 is a stable text block, and if the splitting of A21 and A22 fails, A2 is a stable text block; the video plot splitting result is then text block A1, text block A2 and text block B, in sequence, each corresponding to one plot segment.
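The hierarchical splitting walked through above can be sketched in Python. This is a minimal illustration under stated assumptions: the patent does not fix a particular vector construction or difference metric, so the sketch uses bag-of-words term-frequency vectors and cosine distance as the "difference degree", and represents a text block as a list of sentences.

```python
from collections import Counter
import math

def difference(a_words, b_words):
    """Difference degree between two text blocks: 1 - cosine similarity
    of their word-frequency vectors (an assumed metric)."""
    va, vb = Counter(a_words), Counter(b_words)
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    if na == 0 or nb == 0:
        return 1.0
    return 1.0 - dot / (na * nb)

def split_recursive(sentences, threshold=0.6):
    """Recursively halve the sentence list while the two halves differ enough.

    Returns the stable text blocks in order; each block is a list of sentences.
    """
    if len(sentences) < 2:
        return [sentences]
    mid = len(sentences) // 2
    left, right = sentences[:mid], sentences[mid:]
    words = lambda block: [w for s in block for w in s.split()]
    if difference(words(left), words(right)) < threshold:
        # Halves too similar: cancel this level's split, block is stable.
        return [sentences]
    return split_recursive(left, threshold) + split_recursive(right, threshold)
```

With the 60% threshold from the example, two halves about entirely different topics (difference 1.0) keep splitting until single sentences remain, while near-identical halves are restored as one stable block.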
S300, sending video segment making invitation information on a network platform according to each episode information.
For each plot segment obtained by splitting, video segment production invitation information is sent out on the network platform — that is, the "package" is sent out.
After step S300, the method further includes the steps of:
collecting the information of the contracted users receiving the video segment making invitation information; and acquiring video segment information made by the contract users, and combining the video segments according to corresponding plot segments to form a composite video.
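The combining step can be sketched as follows. This is only an ordering sketch under assumed inputs (segment numbers and hypothetical clip file paths); actual concatenation of the clips into one composite video would be done by a video tool such as ffmpeg, which is not shown.

```python
def assemble_composite(plot_segments, submitted_clips):
    """Order contracted users' clips by plot-segment number to form the
    playlist for the composite video.

    plot_segments: segment numbers in story order, e.g. [1, 2, 3].
    submitted_clips: dict mapping segment number -> clip file path.
    Raises ValueError if any segment has no accepted clip yet.
    """
    missing = [n for n in plot_segments if n not in submitted_clips]
    if missing:
        raise ValueError(f"no clip submitted for segments {missing}")
    return [submitted_clips[n] for n in plot_segments]
```

For example, clips submitted out of order for segments 3, 1, 2 are returned in story order 1, 2, 3, ready for concatenation.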
Taking the individual user Zhang San as an example, the following describes how Zhang San performs video package making on a video social platform, such as a network video/live-broadcast platform, so as to produce a video.
A client of the live-broadcast platform is installed on Zhang San's smart terminal. The smart terminal may be a mobile phone, tablet computer, telephone, notebook computer or wearable smart terminal.
The client may include a user management module for managing user identity information, such as user registration, login and information maintenance. For example and without limitation, when registering, a user may upload identity feature information, such as facial image data, as standard identity feature information through the user management module, and may subsequently log in to the client via a facial recognition function.
After Zhang San enters the client, referring to fig. 2, he can browse, watch, comment on and like the short videos published on the social video platform, and can also upload and produce his own videos.
With continued reference to fig. 2, the client of the video social platform also provides a video package making trigger option, "package making"; after Zhang San triggers this option, the client enters the video package making interface shown in fig. 3.
In the video package making interface, Zhang San is prompted to set the video plot.
By way of example and not limitation, the settings for the video episodes may be based on default templates, or the user's own creative ideas, or set locally based on the user's current geographic location.
Taking the default template as an example: after the user triggers the "available templates" option, the user enters the default video plot interface. In this interface, the user can select various settings for the script, such as theme, characters, scene, style, tone, props, special effects and dubbing. By way of example and not limitation, the theme may include wuxia (martial arts), science fiction, metropolis, children's, anime, etc.; the characters may include superman, alien, monster, etc.; the scene may include kitchen, courtyard, field, city night scene, etc.; the style may include fresh, dynamic, vital, science-fiction, etc.; the tone may include bright, dusk, deep-toned, etc.; the props may include virtual pets, virtual equipment, etc.; the special effects may include photoelectric, cosmic, and wind-and-cloud effects, etc.; and the dubbing may include voice styles such as a girl's voice and a boy's voice.
Or the user triggers "i want to author" to author the video episode, specifically, for example, to edit the text content of the video episode through the online document editing window 220 on the network platform, as shown in fig. 4. And after the editing is finished, triggering the next step to enter a package sending stage.
In the package sending stage, the video plot information set by the user is first collected, and the video plot is then split to obtain a plurality of pieces of plot segment information. Referring to fig. 5, by way of example and not limitation, the video plot set by Zhang San is split into 3 plot segments; the split segments are output as online documents, and a separate document editing window 230 is provided for each plot segment so that the user can edit the document — view, modify, copy, paste, and so on. Of course, the user may also add or remove plot segments as required; specifically, trigger buttons for adding or removing a plot segment may be output in the window.
After the user finishes editing, the user can click the "confirm package sending" button to complete the package making. Preferably, before the contracting information is sent, the user may be prompted to set contracting permissions in order to select which objects may accept the contract; as shown in fig. 6, for example, the user may choose to open the contract to any user (all users) of the network platform.
After the user completes the selection, the package making is finished, as shown in fig. 7. To facilitate recall or tracking of the contracting information, a recall operation button and a tracking operation button may also be provided for the user to use as needed.
In another mode of this embodiment, the method further comprises the step of,
analyzing the plot segment information to determine whether the plot contains content corresponding to a future time; content corresponding to a future time refers to content that has not yet occurred. By way of example and not limitation, the video plot information set by the user may involve a scene of the opening ceremony of an Olympic Games to be held in a certain city.
If content corresponding to a future time is found, future video segment creation invitation information is triggered.
The future video segment creation invitation information invites the creation of video segments at the aforementioned future time.
In particular, when the content corresponding to the future time includes geographic location information, a search is performed based on that information, and the future video segment creation invitation is sent to target objects meeting the geographic condition.
Taking the aforementioned Olympic Games scenario as an example: if the video needs footage of the Games being held in the city, creation invitations can be sent to users whose registered address is in that city, because these users are best placed to capture the relevant footage at the future time (the opening day). Alternatively, users likely to be in the city on the opening day can be found through network searches of, for example, microblogs and friend circles, and sent creation invitations. By way of example and not limitation, if Li Lan posts on her microblog that she has booked a trip to the city for the Olympic opening day, Li Lan can be selected as a target object.
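As a rough sketch only, the future-time analysis and the location-based invitation targeting described above could look like the following (the ISO-date regex, the user-record fields `city` and `posts`, and the simple keyword match are all illustrative assumptions; the patent does not specify an implementation, and a production system would use NLP-based event extraction):

```python
import re
from datetime import date

def find_future_dates(plot_text, today=None):
    """Return ISO-style dates (YYYY-MM-DD) in the plot text that lie
    after `today`, as a stand-in for real future-event detection."""
    today = today or date.today()
    future = []
    for m in re.finditer(r"\b(\d{4})-(\d{2})-(\d{2})\b", plot_text):
        try:
            d = date(int(m.group(1)), int(m.group(2)), int(m.group(3)))
        except ValueError:
            continue  # e.g. "2024-13-40": not a real date
        if d > today:
            future.append(d)
    return future

def target_users(users, city):
    """Pick invitation targets: users registered in the event city, or
    whose public posts mention it (the microblog / friend-circle search)."""
    return [u for u in users
            if u.get("city") == city or city in u.get("posts", "")]
```

With this sketch, both a user registered in the host city and a user whose post mentions the city (like Li Lan's microblog above) are selected as targets.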
Referring to fig. 8, a video social client is provided as another embodiment of the present invention.
The client 300 includes:
the information acquisition module 310 is used for acquiring the set video plot information;
the information processing module 320 is used for splitting the video plots to obtain a plurality of plot fragment information;
the video publishing module 330 is configured to send out video segment creation invitation information on the network platform for each episode information.
The information acquisition module 310 collects the video episode information set in the client 300 by the contracting party. The client 300 may be any of various live-streaming clients, short-video clients, or video playing clients.
The contracting party is the party that sends the video segment creation invitation to contracted users, and may be a natural person, a registered user, or an organization. For example, the contracting party may be the individual user Zhang San registered on the network platform, or a registered organization such as a movie studio. By way of example and not limitation, it may also be a network platform itself: a live-video platform may send the episode information of a network movie to be produced to its audience, in which case the live-video platform is the contracting party.
Preferably, the information processing module splits the video episode based on semantic analysis, specifically comprising the following steps:
acquiring the text information of the video episode;
performing semantic analysis on the text and dividing it into a plurality of text blocks according to the semantic information;
each text block corresponds to one episode segment, and the text block corresponding to each segment, together with its position in the text, is recorded.
Preferably, the division into the plurality of text blocks proceeds as follows:
according to the semantic information, dividing the text into 2 text blocks, acquiring a keyword vocabulary and word frequency information of each text block, and constructing a text feature vector. And comparing the difference degrees of the text feature vectors of the text block A and the text block B.
If the difference degree reaches the threshold value, the first-level splitting is successful, the second-level splitting is carried out on the 2 text blocks, and each text block is split into 2 second-level text blocks. And acquiring a keyword vocabulary and word frequency information of each secondary text block, constructing a secondary text feature vector, and comparing the difference degrees of the text feature vectors of the adjacent 2 secondary text blocks. If the difference degree reaches the threshold value, representing that the second-level splitting is successful, and performing third-level splitting on the successfully split second-level text block; and analogizing in turn until the difference degree of the corresponding N-level text blocks is smaller than a threshold value, canceling current-level splitting, and terminating splitting.
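The threshold-driven recursive splitting described above can be sketched as follows. This is an illustrative simplification, not the patent's actual implementation: bag-of-words term-frequency vectors stand in for the keyword-vocabulary/word-frequency feature vectors, cosine distance stands in for the "difference degree", and each split is taken at the text midpoint rather than at a true semantic boundary.

```python
from collections import Counter
import math

def feature_vector(text):
    """Bag-of-words term-frequency vector (stand-in for the patent's
    keyword vocabulary plus word-frequency feature vector)."""
    return Counter(text.lower().split())

def difference(vec_a, vec_b):
    """Cosine distance between two term-frequency vectors: near 0.0 for
    near-identical vocabularies, 1.0 for fully disjoint ones."""
    dot = sum(vec_a[w] * vec_b[w] for w in set(vec_a) & set(vec_b))
    norm_a = math.sqrt(sum(c * c for c in vec_a.values()))
    norm_b = math.sqrt(sum(c * c for c in vec_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0  # empty block: treat as "no difference", stopping the split
    return 1.0 - dot / (norm_a * norm_b)

def split_plot(text, threshold=0.5):
    """Recursively split the plot text in half; keep a split only while
    the two halves differ by at least `threshold`, otherwise cancel the
    current-level split and stop (the termination condition above)."""
    words = text.split()
    if len(words) < 2:
        return [text]
    mid = len(words) // 2  # midpoint placeholder for a semantic boundary
    left, right = " ".join(words[:mid]), " ".join(words[mid:])
    if difference(feature_vector(left), feature_vector(right)) < threshold:
        return [text]  # halves too similar: this level's split is cancelled
    return split_plot(left, threshold) + split_plot(right, threshold)
```

Calling `split_plot(plot_text)` keeps subdividing only while adjacent blocks remain sufficiently dissimilar, mirroring the "cancel current-level splitting and terminate" stop condition.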
For the obtained episode segments, the video publishing module 330 sends video segment creation invitation information on the network platform, i.e. sends out the package.
The client 300 may further include a video composition module, configured to obtain the video segment information created by the contracted users and combine the video segments according to their corresponding episode segments to form a composite video.
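As a minimal sketch of that composition step (the `(episode_segment_index, video_path)` record format is an assumption; an actual client would hand the ordered file list to a video tool such as ffmpeg for concatenation):

```python
def compose_video(submissions):
    """Order accepted submissions, given as (episode_segment_index,
    video_file_path) pairs, by their episode segment; the resulting
    list is ready for a concatenation tool (e.g. an ffmpeg concat list)."""
    return [path for _, path in sorted(submissions, key=lambda s: s[0])]
```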
Other technical features are referred to in the previous embodiments and are not described herein.
Referring to fig. 9, a video social system 400 is provided for another embodiment of the present invention, which includes a user client 410 and a server 420.
The user client 410 comprises an information collection module for collecting the set video episode information;
the server 420 includes the following structure:
the information processing module is used for splitting the video plots to obtain a plurality of plot fragment information;
and the video publishing module is used for sending out video segment making invitation information on the network platform according to each episode information.
In this embodiment, the user client 410 is preferably a live-streaming client or a short-video client.
The user client 410 and the server 420 are connected via a communication network, typically the Internet, or alternatively an intranet or a local area network.
The server 420 includes a hardware server, which may generally include: one or more processors for computation; storage — internal memory, external storage, and network storage — for the data and executable programs required by the computation; and a network interface for network connection. These hardware units are connected by computer buses or signal lines.
Other technical features are referred to in the previous embodiments and are not described herein.
In the above description, although all components of aspects of the present disclosure may be construed as assembled or operatively connected as one module, the present disclosure is not intended to limit itself to these aspects. Rather, the various components may be selectively and operatively combined in any number within the intended scope of the present disclosure. Each of these components may also be implemented in hardware itself, while the various components may be partially or selectively combined in general and implemented as a computer program having program modules for performing the functions of the hardware equivalents. Codes or code segments to construct such a program can be easily derived by those skilled in the art. Such a computer program may be stored in a computer readable medium, which may be executed to implement aspects of the present disclosure. The computer readable medium may include a magnetic recording medium, an optical recording medium, and a carrier wave medium.
In addition, terms like "comprising," "including," and "having" should by default be interpreted as inclusive or open-ended, rather than exclusive or closed-ended, unless explicitly defined to the contrary. All technical, scientific, or other terms used herein have the meaning commonly understood by one of ordinary skill in the art to which this invention belongs, unless defined otherwise. Common terms found in dictionaries should not be given an overly idealized or overly formal meaning in the context of the related art unless the present disclosure expressly so defines them.
While exemplary aspects of the present disclosure have been described for illustrative purposes, those skilled in the art will appreciate that the foregoing description is by way of description of the preferred embodiments of the present disclosure only, and is not intended to limit the scope of the present disclosure in any way, which includes additional implementations in which functions may be performed out of the order illustrated or discussed. Any changes and modifications of the present invention based on the above disclosure will be within the scope of the appended claims.
Claims (10)
1. A video content distribution package production method is characterized by comprising the following steps:
collecting set video plot information;
splitting the video plots to obtain a plurality of plot fragment information;
and sending video segment making invitation information on the network platform according to each episode information.
2. The method of claim 1, wherein the method further comprises the following steps:
collecting the information of the contracted users receiving the video segment making invitation information;
acquiring the information of the video segments produced by the contract users,
and combining the video segments according to the corresponding plot segments to form a composite video.
3. The method of claim 1, wherein the method further comprises the following steps:
analyzing the episode information to determine whether the episode contains content corresponding to a future time;
in the case where it is determined that the content corresponding to the future time is included, the future video segment production invitation information is triggered.
4. The method of claim 3, wherein: and when the content corresponding to the future time contains the geographic position information, searching based on the geographic position information, and sending future video segment production invitation information to a target object meeting the geographic position information condition.
5. The method of claim 1, wherein the step of splitting the video episode comprises the following steps:
analyzing and learning the existing videos in the network platform or the associated network platform by using a machine learning model;
obtaining plot splitting rules through analysis and learning;
and splitting the video plot according to the plot splitting rule.
6. The method of claim 1, wherein the step of splitting the aforementioned video episode comprises the following steps:
acquiring the text content of the video episode information and counting its number of words;
dividing the words evenly into N sections, N being an integer greater than or equal to 2, each section corresponding to one episode segment;
recording the section number corresponding to each episode segment.
7. The method of claim 1, wherein the step of splitting the aforementioned video episode comprises the following steps:
acquiring text information of video plot information;
performing semantic analysis on the text, and dividing the text into a plurality of text blocks according to semantic information;
each text block corresponds to one episode, and the text block corresponding to each episode and the position in the text are recorded.
8. The method of claim 7, wherein the division into the plurality of text blocks comprises the following steps:
dividing a text into 2 text blocks according to semantic information, acquiring a keyword vocabulary and word frequency information of each text block, and constructing a text feature vector;
comparing the difference degrees of the text feature vectors of the 2 text blocks;
if the difference degree reaches a threshold value, representing that the first-level splitting is successful, performing second-level splitting on the 2 text blocks, and splitting each text block into 2 second-level text blocks;
acquiring a keyword vocabulary and word frequency information of each secondary text block, constructing a secondary text feature vector, and comparing the difference degrees of the text feature vectors of 2 adjacent secondary text blocks;
if the difference degree reaches the threshold, the second-level split succeeds, and third-level splitting is performed on the successfully split second-level text blocks; and so on, until the difference degree of the corresponding N-level text blocks falls below the threshold, at which point the current-level split is cancelled and splitting terminates.
9. A video social client, comprising:
the information acquisition module is used for acquiring the set video plot information;
the information processing module is used for splitting the video plots to obtain a plurality of plot fragment information;
and the video publishing module is used for sending out video segment making invitation information on the network platform according to each episode information.
10. A video social system, characterized in that the system comprises a user client and a server, wherein the user client comprises an information acquisition module for collecting the set video episode information;
the server side comprises the following structures:
the information processing module is used for splitting the video plots to obtain a plurality of plot fragment information;
and the video publishing module is used for sending out video segment making invitation information on the network platform according to each episode information.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010890993.XA CN112218102B (en) | 2020-08-29 | 2020-08-29 | Video content package making method, client and system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112218102A true CN112218102A (en) | 2021-01-12 |
| CN112218102B CN112218102B (en) | 2024-01-26 |
Family
ID=74059211
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010890993.XA Active CN112218102B (en) | 2020-08-29 | 2020-08-29 | Video content package making method, client and system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112218102B (en) |
Citations (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101390032A (en) * | 2006-01-05 | 2009-03-18 | 眼点公司 | System and methods for storing, editing, and sharing digital video |
| US20090087161A1 (en) * | 2007-09-28 | 2009-04-02 | Graceenote, Inc. | Synthesizing a presentation of a multimedia event |
| US20120311448A1 (en) * | 2011-06-03 | 2012-12-06 | Maha Achour | System and methods for collaborative online multimedia production |
| CN103384311A (en) * | 2013-07-18 | 2013-11-06 | 博大龙 | Method for generating interactive videos in batch mode automatically |
| CN103905742A (en) * | 2014-04-10 | 2014-07-02 | 北京数码视讯科技股份有限公司 | Video file segmentation method and device |
| US20140186004A1 (en) * | 2012-12-12 | 2014-07-03 | Crowdflik, Inc. | Collaborative Digital Video Platform That Enables Synchronized Capture, Curation And Editing Of Multiple User-Generated Videos |
| CN105122789A (en) * | 2012-12-12 | 2015-12-02 | 克劳德弗里克公司 | Digital platform for user-generated video synchronized editing |
| CN105794213A (en) * | 2013-11-26 | 2016-07-20 | 谷歌公司 | Collaborative video editing in cloud environment |
| CN105868292A (en) * | 2016-03-23 | 2016-08-17 | 中山大学 | Video visualization processing method and system |
| CN106649713A (en) * | 2016-12-21 | 2017-05-10 | 中山大学 | Movie visualization processing method and system based on content |
| CN108933970A (en) * | 2017-05-27 | 2018-12-04 | 北京搜狗科技发展有限公司 | The generation method and device of video |
| CN109194887A (en) * | 2018-10-26 | 2019-01-11 | 北京亿幕信息技术有限公司 | A kind of cloud cuts video record and clipping method and plug-in unit |
| CN111277905A (en) * | 2020-03-09 | 2020-06-12 | 新华智云科技有限公司 | Online collaborative video editing method and device |
Also Published As
| Publication number | Publication date |
|---|---|
| CN112218102B (en) | 2024-01-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112188117B (en) | Video synthesis method, client and system | |
| CN108305636B (en) | A kind of audio file processing method and processing device | |
| CN108933970B (en) | Video generation method and device | |
| US9881085B2 (en) | Methods, systems, and media for aggregating and presenting multiple videos of an event | |
| Mare | A complicated but symbiotic affair: The relationship between mainstream media and social media in the coverage of social protests in southern Africa | |
| CN107566907A (en) | Video editing method, device, storage medium and terminal | |
| US20210144418A1 (en) | Providing video recommendation | |
| CN112616063A (en) | Live broadcast interaction method, device, equipment and medium | |
| CN110177219A (en) | The template recommended method and device of video | |
| CN104394437B (en) | A kind of online live method and system that start broadcasting | |
| US20230156245A1 (en) | Systems and methods for processing and presenting media data to allow virtual engagement in events | |
| WO2022078167A1 (en) | Interactive video creation method and apparatus, device, and readable storage medium | |
| CN113766299A (en) | Video data playing method, device, equipment and medium | |
| KR20200023013A (en) | Video Service device for supporting search of video clip and Method thereof | |
| CN111581333A (en) | Text-CNN-based audio-video play list pushing method and audio-video play list pushing system | |
| US20220217430A1 (en) | Systems and methods for generating new content segments based on object name identification | |
| CN116665083A (en) | Video classification method and device, electronic equipment and storage medium | |
| US12389047B2 (en) | Live stream processing method and apparatus | |
| CN103942275A (en) | Video identification method and device | |
| CN117953898A (en) | Speech recognition method, server and storage medium for video data | |
| US20120254255A1 (en) | Apparatus and method for generating story according to user information | |
| CN113407779B (en) | Video detection method, device and computer-readable storage medium | |
| CN112218102B (en) | Video content package making method, client and system | |
| CN118568297A (en) | Construction method and application of cognitive war system based on aragonic video | |
| CN117221670A (en) | Automatic generation method and product of trailer based on movie and television drama scenario content |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |