
WO2024091084A1 - Reference scene recommendation method and reference scene recommendation device for automatic video generation - Google Patents

Reference scene recommendation method and reference scene recommendation device for automatic video generation

Info

Publication number
WO2024091084A1
WO2024091084A1 (PCT/KR2023/016939)
Authority
WO
WIPO (PCT)
Prior art keywords
reference scene
scene
tags
tag
scenes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2023/016939
Other languages
English (en)
Korean (ko)
Inventor
권석면
김유석
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
10t1m Inc
Original Assignee
10t1m Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 10t1m Inc filed Critical 10t1m Inc
Publication of WO2024091084A1 publication Critical patent/WO2024091084A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/71Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring

Definitions

  • This disclosure relates to a reference scene recommendation method and reference scene recommendation device for automatic video generation. More specifically, it relates to a reference scene recommendation method and reference scene recommendation device that divide a video into scene units to create multiple reference scenes, assign tags to each reference scene, and, when a keyword is received from an automatic video generation device, recommend the reference scene to which the tag corresponding to the keyword is assigned.
  • the problem that the present disclosure aims to solve is to provide a reference scene recommendation method and reference scene recommendation device for automatic video generation that divide a video into scene units to generate a plurality of reference scenes and assign tags to each reference scene, so that when a keyword is received from an automatic video generation device, the reference scene to which the tag corresponding to the keyword is assigned can be recommended.
  • a reference scene recommendation method includes dividing collected images into scene units to generate a plurality of reference scenes; extracting feature information by analyzing the plurality of reference scenes and assigning different types of tags to the plurality of reference scenes based on this; and storing a plurality of reference scenes to which the tags are assigned in a reference scene database.
  • the step of allocating different types of tags to the plurality of reference scenes includes extracting feature information of an object included in the reference scene, extracting a feature descriptor expressing the feature information of the object as a vector value, and assigning an object attribute tag to the reference scene according to a feature descriptor; applying the reference scene to a scene type analysis model to extract the type of situation expressed in the reference scene, and assigning a situation attribute tag to the reference scene according to the type of situation; and extracting a highlight portion from the collected video and assigning a highlight attribute tag to a reference scene corresponding to the highlight portion among a plurality of reference scenes stored in the reference scene database.
  • a reference scene recommendation device includes one or more processors; and a memory including instructions configured to cause the one or more processors to execute operations, wherein the operations may include dividing collected images into scene units to generate a plurality of reference scenes; analyzing the plurality of reference scenes to extract feature information and assigning different types of tags to the plurality of reference scenes based on this; and storing the plurality of reference scenes to which the tags are assigned in a reference scene database.
  • Assigning different types of tags to each of the plurality of reference scenes includes extracting feature information of an object included in the reference scene, extracting a feature descriptor expressing the feature information of the object as a vector value, and assigning an object attribute tag to the reference scene according to the feature descriptor; applying the reference scene to a scene type analysis model to extract the type of situation expressed in the reference scene, and assigning a situation attribute tag to the reference scene according to the type of situation; and extracting a highlight portion from the collected video and assigning a highlight attribute tag to a reference scene corresponding to the highlight portion among a plurality of reference scenes stored in the reference scene database.
  • the video is divided into scene units to generate a plurality of reference scenes, and tags are assigned to each reference scene so that the video can be automatically generated.
  • FIG. 1 is a diagram illustrating an automatic video generation system according to an embodiment of the present disclosure.
  • Figure 2 is a diagram illustrating an automatic video generation device according to an embodiment of the present disclosure.
  • Figure 3 is a diagram illustrating a reference scene recommendation device according to an embodiment of the present disclosure.
  • FIGS. 4 to 7 are diagrams for explaining the operation of a reference scene recommendation device according to an embodiment of the present disclosure.
  • Figure 8 is a flow chart illustrating a reference scene recommendation method for automatic video generation according to an embodiment of the present disclosure.
  • FIG. 1 is a diagram illustrating an automatic video generation system according to an embodiment of the present disclosure.
  • the automatic video generation system may include an automatic video generation device 200, a reference scene recommendation device 300, one or more customer terminals 400, and one or more user terminals 500.
  • Customer terminal 400 may refer to an electronic device used by customers such as advertisers.
  • the user terminal 500 may refer to an electronic device used by general users other than advertisers.
  • the customer can input the video generation reference information needed to automatically generate the video into the customer terminal 400, and the customer terminal 400 can transmit the video generation reference information input by the customer to the automatic video creation device 200.
  • the image generation reference information may be a keyword in word units.
  • the automatic video generation device 200 can automatically generate videos, such as advertising videos, according to customer requests. Specifically, when video generation reference information is received from the customer terminal 400, the automatic video creation device 200 may generate a script using the received video generation reference information and a pre-generated script database.
  • the script database may store one or more attributes related to a keyword and text matching each attribute.
  • one or more properties related to a keyword include object properties of the object corresponding to the keyword, screen properties of the scene matching the object, situation properties of the scene matching the object, and highlight properties of the scene matching the object.
  • the automatic video generation device 200 may generate a script of a reference scene using text that matches an attribute determined based on user behavior information using customer-related content among one or more attributes related to a keyword.
  • the automatic video generation device 200 may generate a scenario consisting of a reference scene based on the script.
  • the automatic video creation device 200 can extract keywords from the script. More specifically, the automatic video generation device 200 can extract words from the text of the script based on spaces. And, based on a database of frequency values for each word created in advance, the frequency values of the extracted words can be measured.
  • a token may include a pair of words and morpheme values, and may be assigned a label indicating a frequency value.
  • the automatic video generating device 200 can create tokens such as (frequency value: 1000, (word, morpheme value)), (frequency value: 234, (word, morpheme value)), (frequency value: 2541, (word, morpheme value)), and (frequency value: 2516, (word, morpheme value)).
  • the automatic video generating device 200 may assign different weights to each token according to the word of each token and/or the label of each token.
  • the automatic video generation device 200 may assign different weights to each token according to the type of language in which the words of the token are written (e.g., English, Chinese, Korean, etc.), the position of the words within the text of the script, and/or the label assigned to the token.
  • the automatic video generation device 200 may calculate the first weight using the total number of tokens generated from the text of the script and the order of each token.
  • the automatic video generation device 200 may quantify the order of the current token based on the total number of tokens generated from the text of the script and calculate the first weight for the current token by reflecting an importance value predetermined according to the type of language. For example, if the total number of tokens is 12 and the current token is the 4th token, the total of 12 can be normalized to '1' and 1 can be divided by 4 to calculate '0.25'. The first weight can then be calculated by reflecting, in the value calculated in this way, the importance value predetermined according to the type of language.
  • the importance value may change depending on the order of the current token. Specifically, for a language in which important words tend to appear at the end of a sentence, the reflected importance value may increase as the order of the current token increases; for a language in which important words appear at the beginning of a sentence, the reflected importance value may decrease as the order of the current token increases.
  • the automatic video generating device 200 can calculate a second weight for the current token using the frequency value indicated by the label of the current token, the frequency value indicated by the label of the previous token, and the frequency value indicated by the label of the next token.
  • the automatic video generating device 200 may assign a final weight to the current token using the first weight and the second weight. Then, keywords consisting of tokens with final weights can be extracted.
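  • For illustration, a minimal sketch of this token weighting could look as follows, assuming whitespace tokenization in place of a real morphological analyzer, a hypothetical word-frequency database, a multiplicative use of the importance value, and a simple ratio of neighboring frequency labels for the second weight; none of these specific choices are fixed by the description above.

```python
from dataclasses import dataclass

# Hypothetical word-frequency database (assumed to have been created in advance).
FREQ_DB = {"video": 1000, "scene": 234, "tag": 2541, "keyword": 2516}

@dataclass
class Token:
    word: str
    morpheme: str   # placeholder; a real morphological analyzer would supply this
    freq: int       # label indicating the frequency value

def tokenize(script_text: str) -> list:
    # Split on spaces; the morphological analysis step is mocked here.
    return [Token(w, "NOUN", FREQ_DB.get(w, 1)) for w in script_text.split()]

def first_weight(order: int, total: int, importance: float) -> float:
    # Normalize the total token count to 1, divide by the (1-based) token order,
    # then reflect a language-dependent importance value (combination assumed multiplicative).
    return (total / total) / order * importance

def second_weight(tokens, i: int) -> float:
    # Combine the frequency labels of the previous, current, and next tokens
    # (the exact combination rule is not specified; a simple ratio is used here).
    prev_f = tokens[i - 1].freq if i > 0 else 0
    next_f = tokens[i + 1].freq if i < len(tokens) - 1 else 0
    return tokens[i].freq / (prev_f + tokens[i].freq + next_f)

def extract_keywords(script_text: str, importance: float = 1.0, top_k: int = 3):
    tokens = tokenize(script_text)
    scored = [
        (first_weight(i + 1, len(tokens), importance) + second_weight(tokens, i), t)
        for i, t in enumerate(tokens)
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [t.word for _, t in scored[:top_k]]

print(extract_keywords("video scene tag keyword scene video"))
```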
  • the automatic video generation device 200 may provide the reference scene recommendation device 300 with a reference scene recommendation request message including keywords composed of tokens with different weights. Additionally, a reference scene may be received from the reference scene recommendation device 300.
  • the automatic video generation device 200 may generate an image by combining the received reference scene and pre-generated environmental data. To this end, the automatic video generation device 200 may select sound data according to a scenario and convert text data corresponding to the scenario into voice data. And, the automatic video generation device 200 can generate an AI actor according to the above scenario.
  • the reference scene recommendation device 300 can collect images to automatically generate videos according to customer requests and build a reference scene database based on the collected images.
  • when the reference scene recommendation device 300 receives a reference scene recommendation request message from the automatic video generation device 200, it can extract, from the reference scene database, a reference scene to which a tag identical or similar to a keyword included in the reference scene recommendation request message is assigned and provide it to the automatic video generation device 200.
  • the reference scene recommendation device 300 may collect images (eg, videos). Then, the collected video can be decoded to obtain the frames that make up the video, and then the frames can be sampled at playback time intervals.
  • the reference scene recommendation device 300 may list the sampled frames in the order of playback time and calculate the degree of similarity between adjacent frames. When the similarity is calculated for all the listed frames, the reference scene recommendation device 300 groups the frames based on the similarity, thereby generating a plurality of reference scenes divided by scene.
  • the reference scene recommendation device 300 may perform feature matching on adjacent frames to calculate similarity between adjacent frames. Specifically, the reference scene recommendation device 300 compares the keypoints between adjacent frames and, if the similarity is greater than or equal to a reference value, groups the frames into one scene to create one reference scene. If, as a result of comparing keypoints between adjacent frames, the similarity is less than the reference value, it can be determined that the scene has changed, and different reference scenes can be generated by grouping the corresponding frames into different scenes.
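  • A minimal sketch of this segmentation step, assuming OpenCV ORB keypoints and a brute-force matcher (the description above does not name a specific feature detector, matcher, sampling interval, or threshold), might look like this:

```python
import cv2

def split_into_scenes(video_path: str, sample_interval_s: float = 1.0, sim_threshold: float = 0.3):
    """Group sampled frames into reference scenes based on keypoint similarity."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * sample_interval_s))       # sample at playback-time intervals

    orb = cv2.ORB_create()
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    scenes, current_scene, prev_desc = [], [], None
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, desc = orb.detectAndCompute(gray, None)
            if prev_desc is not None and desc is not None:
                matches = bf.match(prev_desc, desc)
                similarity = len(matches) / max(len(prev_desc), len(desc))
                if similarity < sim_threshold:        # scene change detected
                    scenes.append(current_scene)
                    current_scene = []
            current_scene.append(frame_idx)
            prev_desc = desc
        frame_idx += 1
    if current_scene:
        scenes.append(current_scene)
    cap.release()
    return scenes   # each entry lists the frame indices of one reference scene
```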
  • the reference scene recommendation device 300 may extract objects for each listed frame and then determine whether to change the scene based on a change in the number of extracted objects. Additionally, a reference scene can be created based on the point in time when the number of extracted objects changes or the point in time when the number of extracted objects changes beyond the standard value.
  • the reference scene recommendation device 300 may determine whether the background has changed based on changes in pixel values between corresponding pixels of adjacent frames, and may determine whether a scene change has occurred based on the determination result. A reference scene can then be created based on the point in time when the background changes.
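  • For this pixel-based variant, a short sketch (the difference measure and threshold below are assumptions) could be:

```python
import numpy as np

def background_changed(prev_frame: np.ndarray, frame: np.ndarray, threshold: float = 30.0) -> bool:
    """Compare pixel values at the same positions in adjacent frames and flag a
    scene change when the mean absolute difference exceeds a threshold."""
    diff = np.abs(prev_frame.astype(np.int16) - frame.astype(np.int16))
    return float(diff.mean()) > threshold
```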
  • the reference scene recommendation device 300 may determine whether to switch scenes based on changes in the content of audio data and/or subtitle data constituting the video. Additionally, a reference scene can be created based on the point in time when new content appears in the audio data and/or subtitle data.
  • the reference scene recommendation device 300 may extract objects for each listed frame and then determine whether a scene change occurs based on a change in the type of the extracted objects. Additionally, a reference scene can be created based on the point in time when a previously extracted object disappears and/or a new object appears.
  • the reference scene recommendation device 300 can analyze the plurality of reference scenes and extract characteristic information of the reference scene. And, depending on the extracted feature information, different types of tags can be assigned to each reference scene. For example, depending on the extracted feature information, one of an object attribute tag, a screen attribute tag, a situation attribute tag, and a highlight attribute tag can be assigned.
  • the reference scene recommendation device 300 may detect a feature area of an object in the reference scene (Interest Point Detection).
  • the feature area refers to the main area from which a feature descriptor that describes the characteristics of an object is extracted.
  • Feature descriptors may also be referred to as descriptors, feature vectors, or vector values, and may be used to determine whether objects are identical or similar.
  • the feature area may include contours of the object, corner points such as vertices among the contours, blobs that are distinct from the surrounding area, areas that are invariant or covariant under transformations of the reference scene data, and/or extremal points that are darker or brighter than their surroundings.
  • the feature area may target a patch (piece) of the reference scene or the entire reference scene.
  • the reference scene recommendation device 300 may extract feature information of the object from the detected feature area. Additionally, a feature descriptor expressing the extracted feature information as a vector value can be extracted. And object attribute tags can be assigned to the reference scene according to the feature descriptor.
  • the reference scene recommendation device 300 may detect a feature area of the reference scene. And the feature information of the reference scene can be extracted from the feature area of the detected reference scene. Additionally, a feature descriptor expressing the extracted feature information as a vector value can be extracted. And screen attribute tags can be assigned to the reference scene according to the feature descriptor.
  • the above-mentioned feature descriptor may be calculated using the location of the feature area, brightness, color, sharpness, gradient, scale and/or pattern information of the feature area in the reference scene.
  • the feature descriptor may calculate the brightness value, brightness change value, and/or distribution value of the feature area by converting them into vectors.
  • the feature descriptor can be expressed not only as a local descriptor based on the feature area as described above, but also as a global descriptor, frequency descriptor, binary descriptor, or neural network descriptor.
  • the global descriptor can convert the brightness, color, sharpness, gradient, scale, and/or pattern information of the entire reference scene, of each area into which the reference scene is divided by an arbitrary criterion, or of each feature area into vector values.
  • the frequency descriptor can convert the number of times pre-classified feature descriptors are included in a reference scene and/or the number of times they include global features such as a conventionally defined color table into a vector value.
  • a binary descriptor can be used by extracting in bits whether each descriptor is included and/or whether the size of each element value constituting the descriptor is larger or smaller than a specific value, and then converting it to an integer type.
  • a neural network descriptor can extract image information used for learning or classification from the layers of a neural network.
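  • As a simplified illustration of expressing a feature area as a vector value and reducing it to a binary descriptor, the following sketch uses only brightness statistics and gradients; the actual choice of elements (color, sharpness, scale, pattern information, etc.) is left open by the description above.

```python
import numpy as np

def local_descriptor(patch: np.ndarray) -> np.ndarray:
    """Express a feature area as a vector of brightness statistics."""
    patch = patch.astype(np.float32)
    gy, gx = np.gradient(patch)                    # brightness change (gradient)
    return np.array([
        patch.mean(),                              # mean brightness
        patch.std(),                               # distribution (spread) of brightness
        np.abs(gx).mean(), np.abs(gy).mean(),      # average gradient magnitude per axis
    ])

def binary_descriptor(desc: np.ndarray, threshold: float) -> int:
    """Encode whether each element exceeds a threshold as bits, then as an integer."""
    bits = (desc > threshold).astype(np.uint8)
    return int("".join(map(str, bits)), 2)

patch = np.random.randint(0, 256, (16, 16))        # a 16x16 patch of a reference scene
vec = local_descriptor(patch)
print(vec, binary_descriptor(vec, threshold=vec.mean()))
```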
  • the reference scene recommendation device 300 may apply the reference scene to a scene type analysis model.
  • a scene type analysis model may refer to a model learned to receive a scene as input and output the scene type. Additionally, the scene type may refer to the type of situation being expressed in the scene.
  • the reference scene recommendation device 300 may assign a situation attribute tag to the reference scene according to the type of the extracted situation.
  • the reference scene recommendation device 300 may build a scene type analysis model as a CNN (Convolution Neural Network) model, which is one of the deep learning models, and learn the above-described data set.
  • the CNN model can be designed to include two convolutional layers, a ReLU layer, a max pooling layer, and one fully connected layer.
  • the reference scene recommendation device 300 may use an RCNN technique to construct a feature sequence in the map order of the convolution feature maps calculated from the CNN model, and then learn by applying each feature sequence to a Long Short-Term Memory network (LSTM).
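  • A sketch of such a scene type analysis model in PyTorch, with two convolutional layers, ReLU, max pooling, one fully connected layer, and an LSTM over the flattened feature-map sequence, might be structured as follows; the layer sizes and the number of situation types are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SceneTypeModel(nn.Module):
    """CNN whose convolution feature maps are arranged into a sequence and fed to an LSTM."""

    def __init__(self, num_scene_types: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # first convolutional layer
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # second convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, num_scene_types)         # one fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) reference scene frames
        fmap = self.features(x)                            # (batch, 64, H/2, W/2)
        b, c, h, w = fmap.shape
        seq = fmap.view(b, c, h * w).permute(0, 2, 1)      # feature sequence: (batch, h*w, 64)
        out, _ = self.lstm(seq)
        return self.fc(out[:, -1])                         # logits over situation types

model = SceneTypeModel()
logits = model(torch.randn(2, 3, 64, 64))
print(logits.shape)   # torch.Size([2, 10])
```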
  • the reference scene recommendation device 300 may extract a highlight portion from the image.
  • the highlight portion may refer to the section containing the most important information in the video. For example, if the content of the video consists of four sections corresponding to introduction, development, turn, and conclusion, the section corresponding to the 'turn' may be considered the highlight section. Highlights can be extracted manually or automatically.
  • the reference scene recommendation device 300 may assign a highlight attribute tag to the reference scene corresponding to the highlight portion.
  • after assigning tags to each of the plurality of reference scenes according to the above-described method, upon receiving a reference scene recommendation request message from the automatic video generation device 200, the reference scene recommendation device 300 can extract, from the reference scene database, a reference scene to which a tag identical or similar to the keyword included in the reference scene recommendation request message is assigned and provide it to the automatic video generation device 200.
  • the reference scene recommendation device 300 may extract a keyword from the reference scene recommendation request message and extract the tokens constituting the keyword. Next, a tag that matches the morpheme value of the token can be selected from among the plurality of tags assigned to a reference scene. And if the selected tag and the word in the token match, the reference scene to which the tag is assigned can be extracted from the reference scene database.
  • the reference scene recommendation device 300 may select an object attribute tag from a plurality of tags assigned to the reference scene. And if the object attribute tag and the word in the token match, the reference scene to which the corresponding tag is assigned can be extracted from the reference scene database and provided to the automatic video generation device 200.
  • the reference scene recommendation device 300 may select a screen attribute tag and a situation attribute tag from a plurality of tags assigned to the reference scene. And if the screen attribute tag and the word in the token match, and the situation attribute tag and the word in the token match, the reference scene to which the corresponding tags are assigned can be extracted from the reference scene database and provided to the automatic video generation device 200.
  • for a reference scene to which a tag that does not match the morpheme value of the token is assigned, the reference scene recommendation device 300 may calculate the similarity ratio between each of the plurality of tags assigned to the reference scene and the word of the token. Reference scenes to which tags with a similarity ratio greater than or equal to a certain ratio are assigned can then be extracted from the reference scene database and provided to the automatic video generation device 200.
  • the reference scene recommendation device 300 may compare the characters constituting the tag assigned to the reference scene and the characters constituting the word of the token to calculate the number of matching characters. Then, by comparing the string length of the tag and the string length of the token word, the longer length can be selected. A similarity ratio representing the ratio of the number of matching characters to the selected string length can then be calculated. Reference scenes to which tags with a similarity ratio greater than or equal to a certain ratio are assigned can be extracted from the reference scene database and provided to the automatic video generation device 200.
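  • A small sketch of this similarity-ratio computation, assuming a position-wise character comparison and an arbitrary threshold (neither the alignment rule nor the threshold value is fixed above), could be:

```python
def similarity_ratio(tag: str, word: str) -> float:
    """Ratio of matching characters to the length of the longer string."""
    matching = sum(1 for a, b in zip(tag, word) if a == b)
    longer = max(len(tag), len(word))
    return matching / longer if longer else 0.0

def recommend(reference_scenes, token_word: str, min_ratio: float = 0.5):
    """Return scenes having any tag whose similarity ratio to the token word
    meets the threshold (the threshold value is an assumption)."""
    return [
        scene for scene, tags in reference_scenes
        if any(similarity_ratio(tag, token_word) >= min_ratio for tag in tags)
    ]

scenes = [("scene_1", ["sneaker", "running"]), ("scene_2", ["beach", "sunset"])]
print(recommend(scenes, "sneakers"))   # ['scene_1']
```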
  • the automatic video generation device 200 and/or the reference scene recommendation device 300 as described above may be implemented as included in, for example, a web service providing server.
  • the web service providing server can provide various contents to the user terminal 500.
  • the type of content provided to the user terminal 500 may vary depending on the type of application used by the user terminal 500 to access the web service providing server.
  • This web service providing server may be implemented as an online shopping mall server or a search engine server.
  • the customer terminal 400 may include an application for accessing a web service providing server. Accordingly, when the application is selected and executed by the customer, the customer terminal 400 can access the automatic video generating device 200 through the application. Thereafter, when the customer inputs video generation reference information into the customer terminal 400, the customer terminal 400 may request automatic video generation by providing the input video generation reference information to the automatic video generation device 200.
  • the user terminal 500 may include an application for accessing a web service providing server. Accordingly, when the application is selected and executed by the user, the user terminal 500 can access the web service providing server through the application.
  • the user terminal 500 can display a web page provided from a web service providing server through an application.
  • a web page may include a screen loaded on an electronic device and/or content within the screen so that it can be immediately displayed on the screen according to a user's scroll input.
  • the entire application execution screen that extends horizontally or vertically and is displayed as the user scrolls may be included in the concept of a web page.
  • the camera roll screen can also be included in the concept of a web page.
  • the user terminal 500 may include an application (eg, software, neural network model, etc.) for analyzing user interests. Accordingly, the user terminal 500 may collect and store log records and/or engagement records and determine the user's interests by analyzing the log records and/or engagement records through an application for user interest analysis.
  • the user terminal 500 may extract content by analyzing log records and/or engagement records stored in the user terminal 500, and may extract a label indicating the type of the extracted content.
  • Log records may be created by recording events that occur while the operating system or software of the user terminal 500 is running.
  • Engagement records can be created by recording a set of committed actions that result in a user becoming interested, participating, and engaging.
  • User behavior information can include not only actions such as the user viewing content through a web browser, the user creating a 'like' tag on content through social networks, and the user viewing images or text on a homepage, but also the objects of these actions, the times at which these actions occurred, and how long these actions were maintained.
  • a label indicating the type of extracted content may indicate, for example, whether the extracted content corresponds to the user's interests or not.
  • a label indicating the type of extracted content may be extracted by analyzing log records and/or engagement records, or may be extracted from labels stored in advance.
  • the user terminal 500 may be equipped with a crawler, a parser, and an indexer, through which web pages viewed by the user and item information (e.g., image, item name, and item price) included in those pages may be collected.
  • the crawler can collect data related to item information by collecting a list of web addresses that users browse, checking websites, and tracking links.
  • the parser can interpret web pages collected during the crawling process and extract item information such as images, item prices, and item names included in the page.
  • the indexer can index the location and meaning of the extracted item information.
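  • A bare-bones sketch of such a crawler/parser/indexer pipeline, assuming the requests and BeautifulSoup libraries and placeholder CSS selectors that depend entirely on the structure of the actual pages, might be:

```python
import requests
from bs4 import BeautifulSoup

def crawl(url: str) -> str:
    """Crawler: fetch a web page the user browsed (link tracking omitted)."""
    return requests.get(url, timeout=10).text

def parse(html: str) -> dict:
    """Parser: extract item information; the selectors below are placeholders."""
    soup = BeautifulSoup(html, "html.parser")
    name = soup.select_one(".item-name")
    price = soup.select_one(".item-price")
    image = soup.select_one("img.item-image")
    return {
        "item_name": name.get_text(strip=True) if name else None,
        "item_price": price.get_text(strip=True) if price else None,
        "image_url": image.get("src") if image else None,
    }

def index(store: dict, url: str, item: dict) -> None:
    """Indexer: record the location and meaning of the extracted item information."""
    store[url] = item
```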
  • Figure 2 is a diagram illustrating an automatic video generation device according to an embodiment of the present disclosure.
  • the automatic video generation device 200 may include a script creation unit 210, a scenario creation unit 220, a keyword extraction unit 230, a reference scene transmission/reception unit 240, an environment data creation unit 250, and an image synthesis unit 260.
  • the script generator 210 may generate a script using the received image generation reference information and a pre-generated script database.
  • the script generator 210 searches the script database for keywords included in the image generation reference information, and then generates a script using text that matches the attribute determined based on the user's behavior information using customer-related content, among the object properties of the object corresponding to the searched keyword, the screen properties of the scene matching the object, the situation properties of the scene matching the object, and the highlight properties of the scene matching the object.
  • the scenario generator 220 may generate a scenario composed of reference scenes based on the script generated by the script generator 210. According to embodiments, the scenario may further include sound effects and/or atmosphere in addition to the reference scenes.
  • the keyword extraction unit 230 may extract keywords from the script generated by the script creation unit 210. More specifically, the keyword extractor 230 may extract words from the text of the script based on spaces. And, based on a database of frequency values for each word created in advance, the frequency values of the extracted words can be measured.
  • the keyword extraction unit 230 may generate a token by performing morphological analysis on each of the extracted words.
  • a token may include a pair of words and morpheme values, and may be assigned a label indicating a frequency value.
  • the keyword extraction unit 230 can generate tokens such as (frequency value: 1000, (word, morpheme value)), (frequency value: 234, (word, morpheme value)), (frequency value: 2541, (word, morpheme value)), and (frequency value: 2516, (word, morpheme value)).
  • the keyword extractor 230 may assign different weights to each token according to the word and/or label of each token.
  • the keyword extraction unit 230 may assign different weights to each token according to the type of language in which the word of the token is written (e.g., English, Chinese, Korean, etc.), the position of the word within the text of the script, and/or the label assigned to the token.
  • the keyword extractor 230 may calculate the first weight using the total number of tokens generated from the text of the script and the order of each token.
  • the keyword extraction unit 230 may quantify the order of the current token based on the total number of tokens generated from the text of the script and calculate the first weight for the current token by reflecting an importance value predetermined according to the type of language. For example, if the total number of tokens is 12 and the current token is the 4th token, the keyword extractor 230 may normalize the total of 12 to '1' and divide 1 by 4 to calculate '0.25'. The first weight can then be calculated by reflecting, in this value, the importance value predetermined according to the type of language. According to an embodiment, the importance value may change depending on the order of the current token. Specifically, for a language in which important words appear at the end of a sentence, the reflected importance value may increase as the order of the current token increases; for a language in which important words appear at the beginning of a sentence, the reflected importance value may decrease as the order of the current token increases.
  • the keyword extractor 230 may calculate the second weight using the frequency value indicated by the label of the current token, the frequency value indicated by the label of the previous token, and the frequency value indicated by the label of the next token.
  • the keyword extractor 230 may assign a final weight to the current token using the first weight and the second weight. Then, keywords consisting of tokens with final weights can be extracted.
  • the reference scene transceiver 240 can provide a reference scene recommendation request message containing keywords composed of tokens with different weights to the reference scene recommendation device 300, and can receive a reference scene from the reference scene recommendation device 300.
  • the environmental data generator 250 may select sound data according to the scenario. And text data corresponding to the above scenario can be converted into voice data. Furthermore, an AI actor can be created according to the above scenario.
  • the image synthesis unit 260 may generate an image by combining the reference scene received by the reference scene transmission/reception unit 240 and the environment data generated by the environment data generation unit 250.
  • Figure 3 is a diagram of a reference scene recommendation device according to an embodiment of the present disclosure.
  • the reference scene recommendation device 300 may build a reference scene database based on the collected images. Additionally, when receiving a reference scene recommendation request message from the automatic video generation device 200, the reference scene recommendation device 300 can extract, from the reference scene database, a reference scene to which a tag identical or similar to a keyword included in the reference scene recommendation request message is assigned and provide it to the automatic video generation device 200. To this end, the reference scene recommendation device 300 may include an image segmentation unit 310, a tag allocation unit 320, a reference scene database 330, and a reference scene recommendation unit 340.
  • the image segmentation unit 310 may collect images (eg, videos). Then, the collected video can be decoded to obtain the frames that make up the video, and then the frames can be sampled at playback time intervals.
  • the image segmentation unit 310 may arrange the sampled frames in the order of playback time and calculate the degree of similarity between adjacent frames. When the similarity has been calculated for all the listed frames, the image segmentation unit 310 can group the frames based on the similarity, thereby generating a plurality of reference scenes divided by scene.
  • the image segmentation unit 310 may perform feature matching on adjacent frames to calculate the degree of similarity between adjacent frames. Specifically, the image segmentation unit 310 may compare keypoints between adjacent frames and, if the similarity is greater than or equal to a reference value, generate one reference scene by grouping the corresponding frames into one scene. If, as a result of comparing keypoints between adjacent frames, the similarity is less than the reference value, it can be determined that the scene has changed, and different reference scenes can be generated by grouping the corresponding frames into different scenes.
  • the image segmentation unit 310 may extract objects for each listed frame and then determine whether to change the scene based on a change in the number of extracted objects. Additionally, a reference scene can be created based on the point in time when the number of extracted objects changes or the point in time when the number of extracted objects changes beyond the standard value.
  • the image segmentation unit 310 may determine whether the background changes based on changes in pixel values between pixels at the same position in adjacent frames, and may determine whether a scene change has occurred based on the determination result. A reference scene can then be created based on the point in time when the background changes.
  • the video segmentation unit 310 may determine whether to change the scene based on a change in the content of the audio data and/or subtitle data constituting the video. Additionally, a reference scene can be created based on the point in time when new content appears in the audio data and/or subtitle data.
  • the image segmentation unit 310 may extract objects for each listed frame and then determine whether a scene change occurs based on a change in the type of the extracted objects. Additionally, a reference scene can be created based on the point in time when a previously extracted object disappears and/or a new object appears.
  • the tag allocator 320 may analyze a plurality of reference scenes and extract characteristic information of the reference scenes. And, depending on the extracted feature information, different types of tags can be assigned to each reference scene. For example, depending on the extracted feature information, one of an object attribute tag, a screen attribute tag, a situation attribute tag, and a highlight attribute tag can be assigned.
  • the tag allocator 320 may detect a characteristic area of an object in a reference scene (Interest Point Detection).
  • the feature area refers to the main area from which a feature descriptor that describes the characteristics of an object is extracted.
  • Feature descriptors may also be referred to as descriptors, feature vectors, or vector values, and may be used to determine whether objects are identical or similar.
  • feature areas may include the contours of the object, corner points such as vertices among the contours, blobs that are distinct from the surrounding area, areas that are invariant or covariant under deformations of the reference scene, and/or extremal points that are darker or brighter than their surroundings.
  • the feature area may target a patch (piece) of the reference scene or the entire reference scene.
  • the tag allocator 320 may extract feature information of the object from the detected feature area. Additionally, a feature descriptor expressing the extracted feature information as a vector value can be extracted. And object attribute tags can be assigned to the reference scene according to the feature descriptor.
  • the tag allocator 320 may detect a feature area of a reference scene. And the feature information of the reference scene can be extracted from the feature area of the detected reference scene. Additionally, a feature descriptor expressing the extracted feature information as a vector value can be extracted. And screen attribute tags can be assigned to the reference scene according to the feature descriptor.
  • the above-described feature descriptor may be calculated using the location of the feature area, brightness, color, sharpness, gradient, scale and/or pattern information of the feature area in the reference scene.
  • the feature descriptor may calculate the brightness value, brightness change value, and/or distribution value of the feature area by converting them into vectors.
  • the tag allocation unit 320 may apply the reference scene to the scene type analysis model.
  • a scene type analysis model may refer to a model learned to receive a scene as input and output the scene type. Additionally, the scene type may refer to the type of situation being expressed in the scene.
  • the tag allocation unit 320 may assign a situation attribute tag to the reference scene according to the type of the extracted situation.
  • the tag allocator 320 may build a scene type analysis model as a CNN (Convolution Neural Network) model, which is one of the deep learning models, and learn the above-described data set.
  • the CNN model can be designed to include two convolutional layers, a ReLU layer, a max pooling layer, and one fully connected layer.
  • the tag allocation unit 320 may use an RCNN technique to construct a feature sequence in the map order of the convolution feature maps calculated from the CNN, and then learn by applying each feature sequence to a Long Short-Term Memory network (LSTM).
  • the tag allocation unit 320 may extract the highlight portion from the video.
  • the highlight portion may refer to the section containing the most important information in the video. For example, if the content of the video consists of four sections corresponding to introduction, development, turn, and conclusion, the section corresponding to the 'turn' may be considered the highlight section. Highlights can be extracted manually or automatically.
  • the tag allocation unit 320 may assign a highlight attribute tag to the reference scene corresponding to the highlight portion.
  • Reference scenes to which tags are assigned by the tag allocation unit 320 may be stored in the reference scene database 330.
  • the reference scene database 330 may store the start time of the reference scene, the end time of the reference scene, and one or more tags assigned to the reference scene in a table format.
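  • A minimal sketch of such a table, assuming an SQLite store and a comma-separated tag encoding (the description above does not fix a particular database engine or column layout), could be:

```python
import sqlite3

conn = sqlite3.connect("reference_scenes.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS reference_scene (
        scene_id   INTEGER PRIMARY KEY,
        start_time REAL NOT NULL,   -- start time of the reference scene (seconds)
        end_time   REAL NOT NULL,   -- end time of the reference scene (seconds)
        tags       TEXT NOT NULL    -- tags assigned to the scene, e.g. "object:shoe,situation:outdoor"
    )
""")
conn.execute(
    "INSERT INTO reference_scene (start_time, end_time, tags) VALUES (?, ?, ?)",
    (12.0, 18.5, "object:shoe,situation:outdoor,highlight"),
)
conn.commit()

# Look up scenes whose tags contain a given keyword.
for row in conn.execute("SELECT * FROM reference_scene WHERE tags LIKE ?", ("%shoe%",)):
    print(row)
conn.close()
```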
  • the reference scene recommendation unit 340 may extract a keyword from the reference scene recommendation request message. And the tokens that make up the keyword can be extracted. Next, a tag that matches the morpheme value of the token can be selected from among a plurality of tags assigned to the reference scene. And if the selected tag and the word in the token match, the reference scene to which the tag is assigned can be extracted from the reference scene database.
  • the reference scene recommendation device 300 may select an object attribute tag from a plurality of tags assigned to the reference scene. And if the object attribute tag and the word in the token match, the reference scene to which the corresponding tag is assigned can be extracted from the reference scene database and provided to the automatic video generation device 200.
  • the reference scene recommendation device 300 may select a screen attribute tag and a situation attribute tag from a plurality of tags assigned to the reference scene. And if the screen attribute tag and the word in the token match, and the situation attribute tag and the word in the token match, the reference scene to which the corresponding tags are assigned can be extracted from the reference scene database and provided to the automatic video generation device 200.
  • for a reference scene to which a tag that does not match the morpheme value of the token is assigned, the reference scene recommendation unit 340 may calculate the similarity ratio between each of the plurality of tags assigned to the reference scene and the word of the token. Reference scenes to which tags with a similarity ratio greater than or equal to a certain ratio are assigned can then be extracted from the reference scene database and provided to the automatic video generation device 200.
  • the reference scene recommendation unit 340 may compare the characters constituting the tag assigned to the reference scene and the characters constituting the word of the token to calculate the number of matching characters. Then, by comparing the string length of the tag and the string length of the token word, the longer length can be selected. A similarity ratio representing the ratio of the number of matching characters to the selected string length can then be calculated. A reference scene to which a tag with a similarity ratio greater than or equal to a certain ratio is assigned can be extracted from the reference scene database and provided to the automatic video generation device 200.
  • FIGS. 4 to 7 are diagrams for explaining the operation of a reference scene recommendation device according to an embodiment of the present disclosure.
  • the reference scene recommendation device 300 may collect an image 410.
  • the collected image 410 may be provided to the image segmentation unit.
  • the image division unit may divide the input image into scenes to create a plurality of reference scenes (420_1 to 420_4).
  • a plurality of reference scenes may be input to the tag allocation unit.
  • the tag allocation unit may assign tags to each reference scene (420_1 to 420_4).
  • Reference scenes 420_1 to 420_4 to which tags are assigned may be stored in the reference scene database 430.
  • the image segmentation unit may decode the input image 410 to obtain frames constituting the image, and then sample the frames at playback time intervals.
  • the image segmentation unit may calculate the similarity between adjacent frames among the sampled frames and group the frames based on the similarity, thereby generating a plurality of reference scenes divided on a scene basis.
  • the tag allocation unit can analyze the plurality of reference scenes (420_1 to 420_4) to extract characteristic information of each reference scene and assign different types of tags to each reference scene (420_1 to 420_4) according to the extracted feature information.
  • the tag allocation unit may allocate one of an object attribute tag, a screen attribute tag, a situation attribute tag, and a highlight attribute tag, according to the extracted feature information.
  • the tag allocation unit may detect the feature area of the object in the reference scene and extract feature information of the object from the detected feature area. Additionally, a feature descriptor expressing the extracted feature information as a vector value can be extracted. And object attribute tags can be assigned to the reference scene according to the feature descriptor.
  • the tag allocator may analyze the reference scene 420_3 and detect the feature area of the object (Interest Point Detection). And as shown in FIG. 6(b), the object and its characteristic information can be extracted from the detected feature area. Afterwards, the tag allocation unit can extract a feature descriptor by expressing the feature information of the object as a vector value. Next, the tag allocation unit may assign an object attribute tag to the reference scene 420_3 according to the characteristic information of the object, as shown in FIG. 6(c).
  • Figure 8 is a flow chart illustrating a reference scene recommendation method for automatic video generation according to an embodiment of the present disclosure.
  • the reference scene recommendation device 300 may collect an image and then divide the collected image into scenes to generate a plurality of reference scenes (S810).
  • the reference scene recommendation device 300 may extract feature information by analyzing a plurality of reference scenes and then assign different types of tags to the plurality of reference scenes based on this (S820).
  • the reference scene recommendation device 300 may store a reference scene to which a tag is assigned in a reference scene database (S830).
  • when the reference scene recommendation device 300 receives a reference scene recommendation request message from the automatic video generation device 200, the reference scene recommendation device 300 can extract a reference scene from the reference scene database based on the received reference scene recommendation request message and provide it to the automatic video generation device 200 (S840).
  • step S840 may include the reference scene recommendation device 300 receiving a reference scene recommendation request message from the automatic video generation device 200, extracting a keyword included in the reference scene recommendation request message, extracting the tokens constituting the keyword, selecting a tag that matches the morpheme value of the token from among a plurality of tags assigned to a reference scene, extracting, if the selected tag matches the word of the token, the reference scene to which the corresponding tag is assigned from the reference scene database, and providing the extracted reference scene to the automatic video generation device 200.
  • according to an embodiment, step S840 may include the reference scene recommendation device 300 receiving a reference scene recommendation request message from the automatic video generation device 200, extracting a keyword included in the reference scene recommendation request message, extracting the tokens constituting the keyword, calculating, for a reference scene to which a tag that does not match the morpheme value of the token is assigned, a similarity ratio between each of the plurality of tags assigned to the reference scene and the word of the token, extracting a reference scene to which a tag with a similarity ratio greater than or equal to a certain ratio is assigned from the reference scene database, and providing the extracted reference scene to the automatic video generation device 200.
  • calculating the similarity ratio may include calculating the number of matching characters by comparing the characters constituting the tag assigned to the reference scene with the characters constituting the word of the token, comparing the string length of the tag with the string length of the token word and selecting the longer length, calculating a similarity ratio indicating the ratio of the number of matching characters to the selected string length, extracting a reference scene to which a tag with a similarity ratio greater than or equal to a certain ratio is assigned from the reference scene database, and providing the extracted reference scene to the automatic video generation device 200.
  • With reference to FIGS. 1 to 8, a reference scene recommendation method and a reference scene recommendation device for automatically generating a video according to an embodiment of the present disclosure have been described.
  • programs for various operations of the reference scene recommendation device 300 may be stored in the memory of the reference scene recommendation device 300.
  • the processor of the reference scene recommendation device 300 may load and execute a program stored in the memory.
  • the processor may be implemented as an application processor (AP), central processing unit (CPU), microcontroller unit (MCU), or similar devices, depending on hardware, software, or a combination thereof.
  • hardware may be provided in the form of an electronic circuit that processes electrical signals to perform a control function
  • software may be provided in the form of a program or code that drives the hardware circuit.
  • the disclosed embodiments may be implemented in the form of a recording medium that stores instructions executable by a computer. Instructions may be stored in the form of program code, and when executed by a processor, may create program modules to perform operations of the disclosed embodiments.
  • the recording medium may be implemented as a computer-readable recording medium.
  • Computer-readable recording media include all types of recording media storing instructions that can be decoded by a computer. For example, there may be read only memory (ROM), random access memory (RAM), magnetic tape, magnetic disk, flash memory, optical data storage, etc.
  • computer-readable recording media may be provided in the form of non-transitory storage media.
  • 'non-transitory storage medium' only means that the medium is a tangible device and does not contain signals (e.g., electromagnetic waves); this term does not distinguish between cases where data is stored semi-permanently in a storage medium and cases where it is stored temporarily.
  • a 'non-transitory storage medium' may include a buffer where data is temporarily stored.
  • methods according to various embodiments disclosed in this document may be included and provided in a computer program product.
  • Computer program products are commodities and can be traded between sellers and buyers.
  • the computer program product may be distributed in the form of a machine-readable recording medium (e.g., compact disc read only memory (CD-ROM)), distributed through an application store (e.g., Play Store™), or distributed directly between two user devices (e.g., smartphones) or online (e.g., downloaded or uploaded).
  • in the case of online distribution, at least part of the computer program product (e.g., a downloadable app) may be temporarily stored or created in a machine-readable recording medium such as the memory of a manufacturer's server, an application store's server, or a relay server.
  • the reference scene recommendation method and reference scene recommendation device described above can be applied to the video production field.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)

Abstract

A reference scene recommendation method for automatic video generation, according to one embodiment of the present disclosure, may comprise the steps of: dividing a collected video into scene units so as to generate a plurality of reference scenes; analyzing the plurality of reference scenes so as to extract characteristic information, and assigning different types of tags to the plurality of reference scenes on the basis of the characteristic information; and storing, in a reference scene database, the plurality of reference scenes to which the tags have been assigned.
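For readers who want a concrete picture of the pipeline summarized above, a minimal sketch follows. It is only an illustration under assumed details: the scene boundaries, the feature extractor, the tag types, and every function and class name (ReferenceScene, split_into_scenes, extract_features, assign_tags, store_scenes) are hypothetical and are not taken from the disclosure.

```python
# Illustrative sketch only: names and the scene-splitting, feature-extraction,
# and tagging logic below are assumptions for this example, not the disclosed implementation.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class ReferenceScene:
    scene_id: int
    start_sec: float
    end_sec: float
    tags: Dict[str, List[str]] = field(default_factory=dict)


def split_into_scenes(video_path: str) -> List[ReferenceScene]:
    """Divide a collected video into scene units (a real system would use a shot-boundary detector)."""
    boundaries: List[Tuple[float, float]] = [(0.0, 4.2), (4.2, 9.8), (9.8, 15.0)]  # placeholder output
    return [ReferenceScene(i, start, end) for i, (start, end) in enumerate(boundaries)]


def extract_features(scene: ReferenceScene) -> Dict[str, object]:
    """Analyze a reference scene and return characteristic information (stubbed here)."""
    return {"objects": ["person"], "place": "kitchen", "mood": "bright"}


def assign_tags(scene: ReferenceScene, features: Dict[str, object]) -> None:
    """Assign different types of tags to a reference scene based on its characteristic information."""
    scene.tags = {
        "object": list(features["objects"]),
        "place": [features["place"]],
        "mood": [features["mood"]],
    }


def store_scenes(scenes: List[ReferenceScene], database: Dict[int, ReferenceScene]) -> None:
    """Store the tagged reference scenes in a reference scene database (a dict stands in for it)."""
    for scene in scenes:
        database[scene.scene_id] = scene


if __name__ == "__main__":
    reference_db: Dict[int, ReferenceScene] = {}
    scenes = split_into_scenes("collected_video.mp4")
    for scene in scenes:
        assign_tags(scene, extract_features(scene))
    store_scenes(scenes, reference_db)
    print(f"stored {len(reference_db)} tagged reference scenes")
```

In a real system the placeholder boundary list would come from an actual shot-boundary detector, and the tag vocabulary would follow the different tag types assigned from the extracted characteristic information.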
PCT/KR2023/016939 2022-10-27 2023-10-27 Procédé de recommandation de scène de référence et dispositif de recommandation de scène de référence destinés à la génération de vidéo automatique Ceased WO2024091084A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020220140180A KR102560610B1 (ko) 2022-10-27 2022-10-27 동영상 자동 생성을 위한 참조 영상 데이터 추천 방법 및 이를 실행하는 장치
KR10-2022-0140180 2022-10-27

Publications (1)

Publication Number Publication Date
WO2024091084A1 true WO2024091084A1 (fr) 2024-05-02

Family

ID=87433164

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/016939 Ceased WO2024091084A1 (fr) 2022-10-27 2023-10-27 Procédé de recommandation de scène de référence et dispositif de recommandation de scène de référence destinés à la génération de vidéo automatique

Country Status (2)

Country Link
KR (1) KR102560610B1 (fr)
WO (1) WO2024091084A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102560610B1 (ko) * 2022-10-27 2023-07-27 주식회사 일만백만 동영상 자동 생성을 위한 참조 영상 데이터 추천 방법 및 이를 실행하는 장치
KR102780102B1 (ko) * 2025-01-03 2025-03-12 주식회사 루크레이티브 촬영 원본 영상에 광고 문구를 타이포그래피로 합성하는 방법 및 장치

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110070386A (ko) * 2009-12-18 2011-06-24 주식회사 케이티 영상 ars 자동 제작 시스템 및 그 방법
KR20130032653A (ko) * 2011-09-23 2013-04-02 브로드밴드미디어주식회사 동영상 자막을 키워드로 이용한 영상 검색 시스템 및 방법
KR20160087222A (ko) * 2015-01-13 2016-07-21 삼성전자주식회사 디지털 컨텐츠의 시각적 내용 분석을 통해 포토 스토리를 생성하는 방법 및 장치
KR20200120493A (ko) * 2019-04-11 2020-10-21 주식회사 인덴트코퍼레이션 제휴 쇼핑몰 연동 기반 인공지능 챗봇을 이용한 리뷰 관리 서비스 제공 방법 및 시스템
KR20220134084A (ko) * 2021-03-26 2022-10-05 이광호 사용자 맞춤형 영상 콘텐츠를 제공하는 시스템
KR102560610B1 (ko) * 2022-10-27 2023-07-27 주식회사 일만백만 동영상 자동 생성을 위한 참조 영상 데이터 추천 방법 및 이를 실행하는 장치

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120343356A (zh) * 2024-09-12 2025-07-18 北京联世传奇网络技术有限公司 广告视频集锦生成方法、装置、设备及存储介质

Also Published As

Publication number Publication date
KR102560610B1 (ko) 2023-07-27

Similar Documents

Publication Publication Date Title
WO2024091080A1 Automatic video generation method, and automatic video generation server
WO2024091084A1 Reference scene recommendation method and reference scene recommendation device for automatic video generation
WO2020141961A1 Method and apparatus for retrieving intelligent information from an electronic device
WO2018135881A1 Vision intelligence management for electronic devices
WO2016013914A1 Method, apparatus, system and computer program for providing and displaying product information
WO2021141419A1 Method and apparatus for generating customized content based on user intent
CN108509611B Method and apparatus for pushing information
WO2018174637A1 Real-time purchasing method using video recognition in a broadcast, and smart device in which an application for implementing the same is installed
WO2010119996A1 Method and device for providing moving-image advertisements
WO2024106993A1 Commerce video generation method and server using review data
WO2016013915A1 Method, apparatus and computer program for displaying search information
WO2012118259A1 System and method for providing a video-related service on the basis of an image
WO2020251174A1 Method for advertising a user-customized fashion item, and server executing the same
WO2020190103A1 Method and system for providing personalized multimodal objects in real time
Jin et al. Network video summarization based on key frame extraction via superpixel segmentation
WO2024107000A1 Method and server for generating a personalized review image using feedback data
WO2014178498A1 Method for producing an advertising image and production system therefor, and system for producing a movie file including an advertising image and method for obtaining the movie file
WO2024091085A1 Image-based reference scene generation method and reference scene generation device
WO2022119326A1 Method for providing a multimedia conversion content production service using image resource adaptation, and apparatus therefor
WO2022145946A1 Language learning system and method based on training images and example sentences recommended by artificial intelligence
WO2024091086A1 Image skip function providing method and image skip function providing apparatus
WO2024019226A1 Method for detecting malicious URLs
WO2017222226A1 Method for registering an advertised product on image content, and server executing the method
WO2021149923A1 Method and apparatus for providing image search
WO2025023529A1 Method, computing device and computer program for selective background removal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23883188

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE