CN108737859A - Video recommendation method based on barrage and device - Google Patents
Video recommendation method based on barrage and device
- Publication number
- CN108737859A (application CN201810426715.1A)
- Authority
- CN
- China
- Prior art keywords
- user
- video
- vector
- similarity
- barrage
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25866—Management of end-user data
- H04N21/25891—Management of end-user data being end-user preferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/251—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/252—Processing of multiple end-users' preferences to derive collaborative data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/466—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/4662—Learning process for intelligent management, e.g. learning user preferences for recommending movies characterized by learning algorithms
- H04N21/4666—Learning process for intelligent management, e.g. learning user preferences for recommending movies characterized by learning algorithms using neural networks, e.g. processing the feedback provided by the user
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/466—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/4667—Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/466—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/4668—Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Computer Graphics (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Computer Networks & Wireless Communication (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present invention discloses a barrage-based video recommendation method and device. The method includes: training a model on the character vectors and sentiment labels of a barrage corpus through a neural network, and predicting the sentiment of barrage comments; constructing, from the barrage comments a user posts on videos, a user vector that represents the user's degree of preference for each category of video, where each dimension of the user vector represents the user's preference for one category; obtaining a first vector for the video's text introduction through the PV-DM model and a second vector for the distribution of the latent topics of the video's barrage through a topic model, the first vector and the second vector jointly forming the video vector; and predicting the user's score for a video from user similarity and video similarity, and recommending to the user the videos whose predicted score is not less than a predetermined threshold, where scores computed separately from the similarity between users and the similarity between videos are weighted to obtain the predicted score.
Description
Technical Field
The present invention relates to the technical field of video services, and in particular to a barrage-based video recommendation method and device.
Background Art
With the development of online video websites, barrage (bullet-chat) comments have become a popular form of communication. Online video websites host a very large number of videos, and recommender systems are already used on such sites to improve the user experience. Current recommendation approaches generally fall into two categories: one matches keywords entered by the user against video titles or introductions and returns the most relevant videos; the other analyzes the user's viewing history, ratings, comments and other data to infer the user's preferences and recommend videos the user may like.
The first approach relies on matching the keywords entered by the user; when the user cannot describe his or her needs accurately or enters the wrong keywords, it cannot provide accurate recommendations. The second approach mainly relies on a user rating matrix and on keywords extracted from user barrage comments. However, on mainstream video websites users cannot rate videos directly, so a user rating matrix cannot be obtained. The keyword-extraction method also has the following problems: (1) the user's sentiment toward a keyword is not necessarily positive, and if the user dislikes the keyword, videos recommended on that basis do not improve the user experience; (2) a keyword does not fully represent the user's interests and may be a word the user mentioned only in passing, so the recommended videos again fail to improve the user experience.
Summary of the Invention
In view of the above problems, the present invention builds a user model by analyzing the content of the user's barrage comments, predicts the user's sentiment, and recommends videos the user may like.
In a first aspect, an embodiment of the present invention provides a barrage-based video recommendation method, including: training a model on the character vectors and sentiment labels of a barrage corpus through a neural network, and predicting the sentiment of barrage comments; constructing, from the barrage comments the user posts on videos, a user vector that represents the user's degree of preference for each category of video, where each dimension of the user vector represents the user's preference for one category; obtaining a first vector for the video's text introduction through the PV-DM model and a second vector for the distribution of the latent topics of the video's barrage through a topic model, the first vector and the second vector jointly forming the video vector; and predicting the user's score for a video from user similarity and video similarity, and recommending to the user the videos whose predicted score is not less than a predetermined threshold, where scores computed separately from the similarity between users and the similarity between videos are weighted to obtain the predicted score.
Optionally, the skip-gram model is used to train the character vectors, and a long short-term memory network is used to predict the sentiment polarity of the barrage comments the user sends on a video.
Optionally, each dimension of the user vector includes a score derived from the sentiment of the barrage comments, and the score indicates the degree of preference.
Optionally, the similarity between users and the similarity between videos are each obtained by cosine similarity.
Optionally, in the recommending step, the tags that appear most frequently among the videos the user watched within a predetermined period are selected, and the set of videos whose scores need to be predicted is obtained from these tags.
In a second aspect, the present invention discloses a barrage-based video recommendation device, including: a sentiment analysis model training module, configured to train a model on the character vectors and sentiment labels of a barrage corpus through a neural network and to predict the sentiment of barrage comments; a user model construction module, configured to construct, from the barrage comments the user posts on videos, a user vector that represents the user's degree of preference for each category of video, where each dimension of the user vector represents the user's preference for one category; a video model construction module, configured to obtain a first vector for the video's text introduction through the PV-DM model and a second vector for the distribution of the latent topics of the video's barrage through a topic model, the first vector and the second vector jointly forming the video vector; and a recommendation module, configured to predict the user's score for a video from user similarity and video similarity and to recommend to the user the videos whose predicted score is not less than a predetermined threshold, where scores computed separately from the similarity between users and the similarity between videos are weighted to obtain the predicted score.
Optionally, the skip-gram model is used to train the character vectors, and a long short-term memory network is used to predict the sentiment polarity of the barrage comments the user sends on a video.
Optionally, each dimension of the user vector includes a score derived from the sentiment of the barrage comments, and the score indicates the degree of preference.
Optionally, the similarity between users and the similarity between videos are each obtained by cosine similarity.
Optionally, in the recommending step, the tags that appear most frequently among the videos the user watched within a predetermined period are selected, and the set of videos whose scores need to be predicted is obtained from these tags.
In a third aspect, an embodiment of the present invention provides a computing device, including a memory and one or more processors; the computing device further includes one or more units, the one or more units being stored in the memory and configured to be executed by the one or more processors, and the one or more units including instructions for performing the following steps: training a model on the character vectors and sentiment labels of a barrage corpus through a neural network, and predicting the sentiment of barrage comments; constructing, from the barrage comments the user posts on videos, a user vector that represents the user's degree of preference for each category of video, where each dimension of the user vector represents the user's preference for one category; obtaining a first vector for the video's text introduction through the PV-DM model and a second vector for the distribution of the latent topics of the video's barrage through a topic model, the first vector and the second vector jointly forming the video vector; and predicting the user's score for a video from user similarity and video similarity, and recommending to the user the videos whose predicted score is not less than a predetermined threshold, where scores computed separately from the similarity between users and the similarity between videos are weighted to obtain the predicted score.
Further, an embodiment of the present invention provides a computer program product for use in combination with a computing device, including a computer-readable storage medium and a computer program mechanism embedded therein; the computer program mechanism includes instructions for performing the following steps: training a model on the character vectors and sentiment labels of a barrage corpus through a neural network, and predicting the sentiment of barrage comments; constructing, from the barrage comments the user posts on videos, a user vector that represents the user's degree of preference for each category of video, where each dimension of the user vector represents the user's preference for one category; obtaining a first vector for the video's text introduction through the PV-DM model and a second vector for the distribution of the latent topics of the video's barrage through a topic model, the first vector and the second vector jointly forming the video vector; and predicting the user's score for a video from user similarity and video similarity, and recommending to the user the videos whose predicted score is not less than a predetermined threshold, where scores computed separately from the similarity between users and the similarity between videos are weighted to obtain the predicted score.
Compared with the prior art, the main distinction and effect of the embodiments of the present invention are that the technical solution of the present invention can predict the user's true emotional expression from the user's barrage comments and recommend the videos the user likes.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a barrage-based recommendation method 100 according to a first embodiment of the present invention.
Fig. 2 is a schematic diagram of an LSTM model predicting the sentiment polarity of a barrage comment according to an embodiment of the present invention.
Fig. 3 is a structural block diagram of a barrage-based recommendation device 300 according to an embodiment of the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings. The illustrative system and method embodiments described herein are not intended to be limiting.
First Embodiment
Fig. 1 is a flowchart of a barrage-based recommendation method 100 according to an embodiment of the present invention. As shown in Fig. 1, the method proceeds as follows:
S110: According to the barrage corpus, train a model on the character vectors and sentiment labels of the corpus through a neural network, and predict the sentiment of the barrage comments.
According to the embodiments of the present invention, it can be understood that various video-related data need to be collected, for example video information, user information, user barrage comments and sentiment labels. The video information includes but is not limited to the video ID, video category tags and the video's text introduction; the user information includes but is not limited to the user ID; a user barrage record may include the user ID, the video ID, the time the comment was sent, the playback position of the comment, the comment text, and so on. Sentiment labels are positive, negative and neutral labels annotated on the user's barrage content to indicate sentiment orientation.
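For concreteness, the collected records can be organized as simple structured objects. The following minimal sketch is illustrative only; the field names are assumptions chosen to mirror the fields listed above and do not come from the patent itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VideoInfo:
    video_id: str
    tags: List[str] = field(default_factory=list)  # video category tags
    introduction: str = ""                         # text introduction of the video

@dataclass
class BarrageRecord:
    user_id: str
    video_id: str
    send_time: str       # wall-clock time at which the comment was sent
    play_time: float     # playback position (seconds) where the comment appears
    content: str         # the barrage text itself
    sentiment: str = ""  # annotated label: "positive" / "negative" / "neutral"
```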
Because barrage comments are usually short and highly informal, existing word segmentation algorithms rarely produce satisfactory results on them. The embodiments of the present invention therefore learn vector representations of the individual characters of the barrage and use them for sentiment analysis. As an example, the existing skip-gram model can be used to train the character vectors.
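As a rough illustration of this step, the sketch below trains character-level skip-gram vectors with gensim (assuming gensim 4.x); treating each comment as a sequence of single characters stands in for the word segmentation that the text notes is unreliable for barrage. The hyperparameter values are placeholders, not values taken from the patent.

```python
from gensim.models import Word2Vec

# Each barrage comment is treated as a "sentence" of single characters.
comments = ["这个视频太好笑了", "前方高能", "看不下去了"]
char_sentences = [list(c) for c in comments]

# sg=1 selects the skip-gram architecture; vector_size is the character-vector dimension.
w2v = Word2Vec(sentences=char_sentences, vector_size=100, window=5,
               min_count=1, sg=1, epochs=20)

char_vec = w2v.wv["好"]  # 100-dimensional vector for the character "好"
```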
According to the embodiments of the present invention, an LSTM (Long Short-Term Memory) model is used to predict the sentiment polarity of a barrage comment. As shown in Fig. 2, the user's barrage comment is extracted as a sequence of characters, and the character vectors w_i (i = 1, 2, ..., n) are fed into the LSTM cells in order. Each LSTM cell outputs a hidden state h_i (i = 1, 2, ..., n), and the last hidden-layer output h_n is taken as the sentence representation and fed to a softmax layer. The softmax layer classifies the vector into three labels: positive, negative and neutral. The method uses cross entropy as the loss function:

L = -Σ_{d∈D_T} Σ_{c=1..C} g_c(d) · log P_c(d)

where D_T denotes all training barrage comments and C is the number of sentiment labels. g_c(d) takes the value 1 or 0, indicating that comment d does or does not belong to the current sentiment label. P_c(d) is the probability with which the softmax layer assigns comment d to class c. It can be understood that the model is trained with back-propagation and stochastic gradient descent for the parameter updates.
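A minimal PyTorch sketch of the classifier described above might look as follows: pre-computed character vectors are fed through an LSTM, the last hidden state goes to a linear layer whose softmax is applied inside the cross-entropy loss, and parameters are updated with back-propagation and stochastic gradient descent. The dimensions, the toy batch and the label coding are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BarrageSentimentLSTM(nn.Module):
    def __init__(self, emb_dim=100, hidden_dim=128, num_labels=3):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_labels)  # positive / negative / neutral

    def forward(self, char_vectors):           # (batch, seq_len, emb_dim)
        _, (h_n, _) = self.lstm(char_vectors)  # h_n: (1, batch, hidden_dim)
        return self.fc(h_n.squeeze(0))         # logits; softmax is applied by the loss

model = BarrageSentimentLSTM()
criterion = nn.CrossEntropyLoss()              # cross entropy over the three labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Toy batch: 4 comments of 12 characters each, already mapped to 100-d character vectors.
x = torch.randn(4, 12, 100)
y = torch.tensor([0, 2, 1, 0])                 # assumed coding: 0=positive, 1=negative, 2=neutral

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()                                # back-propagation
optimizer.step()                               # stochastic gradient descent update
```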
S120: From the barrage comments the user posts on videos, construct the user as a user vector that represents the user's degree of preference for each category of video, where each dimension of the user vector represents the user's preference for one category.
To measure the similarity between users, the embodiments of the present invention compute each user's degree of preference for every video tag. Suppose the set of video tags of the whole video website is L; the features of user i are represented as an |L|-dimensional vector u_i, where dimension j (j = 1, ..., |L|) represents the user's preference for tag l_j. The user vector can be constructed, and the similarity between users measured, in the following way (a code sketch follows step (3) below):
(1) Obtain the videos in which the user has sent barrage comments, denoted V_i;
(2) For each video v ∈ V_i, obtain its tag set L_v, and for each barrage comment d sent by user i in this video, update the user vector u_i:
u_i[L_v] = u_i[L_v] + score_sentiment(d)
This means that the sentiment of the barrage comment d sent by the user in video v is treated as the user's sentiment toward the tags of video v: the dimensions of u_i corresponding to the tags of video v are increased by the score score_sentiment(d) derived from the sentiment of d. In the embodiments of the present invention the barrage sentiment is classified as positive, negative or neutral, so score_sentiment(d) maps each of these three labels to a numeric score, with positive comments raising the preference, negative comments lowering it, and neutral comments leaving it essentially unchanged.
In this way the user vector u_i represents user i's degree of preference for each type of video.
(3) After the user vectors have been constructed, cosine similarity can be used to measure the similarity between users:
s(u_i, u_j) = (u_i · u_j) / (‖u_i‖ ‖u_j‖)
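Steps (1) to (3) can be sketched in plain Python/NumPy as below. The numeric mapping of the three sentiment labels to +1 / 0 / -1 is an assumption made for illustration; the patent only states that the score is derived from the positive, neutral or negative label.

```python
import numpy as np

ALL_TAGS = ["funny", "music", "gaming", "movie"]  # the site-wide tag set L (toy example)
TAG_INDEX = {t: i for i, t in enumerate(ALL_TAGS)}

# Assumed mapping from sentiment label to score_sentiment(d).
SENTIMENT_SCORE = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}

def build_user_vector(barrages, video_tags):
    """barrages: list of (video_id, sentiment) pairs sent by one user;
       video_tags: dict video_id -> list of tags (the set L_v)."""
    u = np.zeros(len(ALL_TAGS))
    for video_id, sentiment in barrages:
        for tag in video_tags[video_id]:  # u_i[L_v] += score_sentiment(d)
            u[TAG_INDEX[tag]] += SENTIMENT_SCORE[sentiment]
    return u

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

video_tags = {"v1": ["funny", "movie"], "v2": ["music"]}
u1 = build_user_vector([("v1", "positive"), ("v2", "negative")], video_tags)
u2 = build_user_vector([("v1", "positive")], video_tags)
print(cosine_similarity(u1, u2))
```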
S130: Obtain a first vector for the video's text introduction through the PV-DM model, and a second vector for the distribution of the latent topics of the video's barrage through a topic model; the first vector and the second vector jointly form the video vector.
Further, the embodiments of the present invention vectorize a video from two aspects: the video's text introduction and the video's barrage. As an example, the introductions and barrage comments of all videos are collected, and a PV-DM (Distributed Memory Model of Paragraph Vectors) model is trained to obtain a vector for the text-introduction part. At the same time, the barrage of each video is treated as a document, and the LDA (Latent Dirichlet Allocation) topic model is applied to obtain the distribution of the latent topics of that barrage document, so the barrage of video v_i can be represented as a T-dimensional vector, where T is the number of latent topics. After normalization, the two vectors are concatenated to obtain the vector v_i of video i.
It can be understood that, similarly to the measurement of user similarity, cosine similarity can also be used to measure the similarity between videos:
s(v_i, v_j) = (v_i · v_j) / (‖v_i‖ ‖v_j‖)
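The video-vector construction can be sketched with gensim's Doc2Vec in PV-DM mode (dm=1) for the introductions and LdaModel for the barrage topics (assuming gensim 4.x). The character-level tokenization, vector sizes and topic count are illustrative assumptions rather than values from the patent.

```python
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

intros = {"v1": "一部关于友情的喜剧电影", "v2": "紧张刺激的悬疑剧"}
barrages = {"v1": ["太好笑了", "演技在线"], "v2": ["前方高能", "吓死我了"]}

# 1) PV-DM (dm=1) paragraph vectors for the text introductions.
docs = [TaggedDocument(words=list(text), tags=[vid]) for vid, text in intros.items()]
d2v = Doc2Vec(docs, vector_size=50, dm=1, min_count=1, epochs=40)

# 2) LDA topic distribution for each video's barrage, treated as a single document.
tokens = {vid: [ch for comment in cs for ch in comment] for vid, cs in barrages.items()}
dictionary = Dictionary(tokens.values())
bows = {vid: dictionary.doc2bow(toks) for vid, toks in tokens.items()}
T = 10  # number of latent topics
lda = LdaModel(list(bows.values()), num_topics=T, id2word=dictionary)

def video_vector(vid):
    intro_vec = np.asarray(d2v.dv[vid], dtype=float)
    topic_vec = np.zeros(T)
    for topic_id, prob in lda.get_document_topics(bows[vid], minimum_probability=0.0):
        topic_vec[topic_id] = prob
    # Normalize each part, then concatenate them into the final video vector v_i.
    intro_vec /= np.linalg.norm(intro_vec) or 1.0
    topic_vec /= np.linalg.norm(topic_vec) or 1.0
    return np.concatenate([intro_vec, topic_vec])

v1, v2 = video_vector("v1"), video_vector("v2")
```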
Further, S140: Predict the user's score for a video from user similarity and video similarity, and recommend to the user the videos whose predicted score is not less than a predetermined threshold, where scores computed separately from the similarity between users and the similarity between videos are weighted to obtain the predicted score.
Predicting a score for every video would be relatively inefficient, so in the embodiments of the present invention a set of candidate videos can be selected first. As an example, among the m videos that user i watched within a recent period, for example a few days or a week, the tags of these videos are collected, the k tags with the most occurrences are selected, and all videos under these tags are taken as the candidate set, denoted V_can. According to the embodiments of the present invention, the user's score is predicted from two aspects: the similarity between users and the similarity between videos. For each video v ∈ V_can, the possible score of user i is:
score(i, v) = k_1 · score_user + k_2 · score_video
Here score_user is the score predicted from user similarity and score_video is the score predicted from video similarity; k_1 and k_2 are parameters with k_1 + k_2 = 1, representing the relative emphasis placed on the two aspects. score_user and score_video are computed from the following quantities:
D_v denotes the set of barrage comments in video v and |D_v| the size of that set, and u_d denotes the vector of the user who sent comment d. L_v denotes the tag set of video v, V_l denotes the set of videos that share tag l with video v, and |V_l| is the size of that set. s(u_d, u_i) and s(v_j, v) denote the similarity between users and the similarity between videos, respectively.
sp(v_j) represents the overall evaluation of video v_j by the users who watched it.
From this, user i's scores can be predicted, and the n videos in V_can with the highest predicted scores are returned to user i.
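The final weighting and top-n selection can be sketched as follows. Because score_user and score_video are defined above only in terms of the listed quantities, the sketch takes them as already-computed numbers and only illustrates the weighted combination, the threshold check and the ranking; k1, k2, n and the threshold are placeholder values.

```python
def predict_score(score_user, score_video, k1=0.5, k2=0.5):
    """Weighted combination of the user-similarity-based and
       video-similarity-based scores, with k1 + k2 = 1."""
    return k1 * score_user + k2 * score_video

def recommend(candidate_scores, n=10, threshold=0.0):
    """candidate_scores: dict video_id -> (score_user, score_video) over V_can.
       Returns up to n videos whose predicted score is not less than the threshold,
       ordered by predicted score."""
    scored = {vid: predict_score(su, sv) for vid, (su, sv) in candidate_scores.items()}
    ranked = sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
    return [vid for vid, s in ranked if s >= threshold][:n]

print(recommend({"v1": (0.8, 0.6), "v2": (0.2, 0.9), "v3": (-0.1, 0.3)}, n=2))
```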
Therefore, the barrage-based recommendation method of the embodiments of the present invention can predict the user's true emotional expression from the user's barrage comments and recommend the videos the user likes.
Each method embodiment of the present invention can be implemented in software, hardware, firmware, and so on. Regardless of whether the present invention is implemented in software, hardware or firmware, the instruction code can be stored in any type of computer-accessible memory (for example permanent or modifiable, volatile or non-volatile, solid-state or non-solid-state, fixed or replaceable media, and so on). Likewise, the memory may be, for example, Programmable Array Logic ("PAL"), Random Access Memory ("RAM"), Programmable Read-Only Memory ("PROM"), Read-Only Memory ("ROM"), Electrically Erasable Programmable Read-Only Memory ("EEPROM"), a magnetic disk, an optical disc, a Digital Versatile Disc ("DVD"), and so on.
Second Embodiment
Fig. 3 is a schematic block diagram of a barrage-based recommendation device 300 according to an embodiment of the present invention. The device is configured to perform the above method flow and includes:
a sentiment analysis model training module 310, configured to train a model on the character vectors and sentiment labels of a barrage corpus through a neural network and to predict the sentiment of the barrage comments;
a user model construction module 320, configured to construct, from the barrage comments the user posts on videos, a user vector that represents the user's degree of preference for each category of video, where each dimension of the user vector represents the user's preference for one category;
a video model construction module 330, configured to obtain a first vector for the video's text introduction through the PV-DM model and a second vector for the distribution of the latent topics of the video's barrage through a topic model, the first vector and the second vector jointly forming the video vector; and
a recommendation module 340, configured to predict the user's score for a video from user similarity and video similarity, and to recommend to the user the videos whose predicted score is not less than a predetermined threshold, where scores computed separately from the similarity between users and the similarity between videos are weighted to obtain the predicted score.
Optionally, the skip-gram model is used to train the character vectors, and a long short-term memory network is used to predict the sentiment polarity of the barrage comments the user sends on a video.
Optionally, each dimension of the user vector includes a score derived from the sentiment of the barrage comments, and the score indicates the degree of preference.
Optionally, the similarity between users and the similarity between videos are each obtained by cosine similarity.
Optionally, in the recommending step, the tags that appear most frequently among the videos the user watched within a predetermined period are selected, and the set of videos whose scores need to be predicted is obtained from these tags.
The first embodiment is the method embodiment corresponding to this embodiment, and this embodiment can be implemented in cooperation with the first embodiment. The relevant technical details mentioned in the first embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here. Correspondingly, the relevant technical details mentioned in this embodiment can also be applied to the first embodiment.
Therefore, the barrage-based recommendation device of the embodiments of the present invention can predict the user's true emotional expression from the user's barrage comments and recommend the videos the user likes.
Further, based on the same technical concept, an embodiment of the present invention provides a computing device, including a memory and one or more processors; the computing device further includes one or more units, the one or more units being stored in the memory and configured to be executed by the one or more processors, and the one or more units including instructions for performing the following steps:
training a model on the character vectors and sentiment labels of a barrage corpus through a neural network, and predicting the sentiment of the barrage comments;
constructing, from the barrage comments the user posts on videos, a user vector that represents the user's degree of preference for each category of video, where each dimension of the user vector represents the user's preference for one category;
obtaining a first vector for the video's text introduction through the PV-DM model and a second vector for the distribution of the latent topics of the video's barrage through a topic model, the first vector and the second vector jointly forming the video vector; and
predicting the user's score for a video from user similarity and video similarity, and recommending to the user the videos whose predicted score is not less than a predetermined threshold, where scores computed separately from the similarity between users and the similarity between videos are weighted to obtain the predicted score.
Based on the same technical concept, an embodiment of the present invention provides a computer program product for use in combination with a computing device, including a computer-readable storage medium and a computer program mechanism embedded therein; the computer program mechanism includes instructions for performing the following steps:
training a model on the character vectors and sentiment labels of a barrage corpus through a neural network, and predicting the sentiment of the barrage comments;
constructing, from the barrage comments the user posts on videos, a user vector that represents the user's degree of preference for each category of video, where each dimension of the user vector represents the user's preference for one category;
obtaining a first vector for the video's text introduction through the PV-DM model and a second vector for the distribution of the latent topics of the video's barrage through a topic model, the first vector and the second vector jointly forming the video vector; and
predicting the user's score for a video from user similarity and video similarity, and recommending to the user the videos whose predicted score is not less than a predetermined threshold, where scores computed separately from the similarity between users and the similarity between videos are weighted to obtain the predicted score.
It should be understood that, in the above description of exemplary embodiments of the present invention, in order to streamline the disclosure and to aid the understanding of one or more of the various inventive aspects, various features of the present invention are sometimes grouped together in a single embodiment, figure or description thereof. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, the inventive aspects lie in less than all features of a single foregoing disclosed embodiment. The claims following the detailed description are therefore hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Furthermore, those skilled in the art will understand that although some embodiments described herein include certain features that are included in other embodiments but not in others, combinations of features of different embodiments are meant to fall within the scope of the present invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments can be used in any combination.
Although various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, the true scope being indicated by the appended claims together with the full scope of equivalents to which such claims are entitled. It should also be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting.
Claims (12)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810426715.1A CN108737859A (en) | 2018-05-07 | 2018-05-07 | Video recommendation method based on barrage and device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810426715.1A CN108737859A (en) | 2018-05-07 | 2018-05-07 | Video recommendation method based on barrage and device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN108737859A true CN108737859A (en) | 2018-11-02 |
Family
ID=63937000
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810426715.1A Pending CN108737859A (en) | 2018-05-07 | 2018-05-07 | Video recommendation method based on barrage and device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN108737859A (en) |
Cited By (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109361932A (en) * | 2018-11-23 | 2019-02-19 | 武汉斗鱼网络科技有限公司 | The method that temperature prediction is broadcast live, device, equipment and medium |
| CN109743617A (en) * | 2018-12-03 | 2019-05-10 | 清华大学 | A jump navigation method and device for video playback |
| CN109862397A (en) * | 2019-02-02 | 2019-06-07 | 广州虎牙信息科技有限公司 | A kind of video analysis method, apparatus, equipment and storage medium |
| CN110113673A (en) * | 2019-04-30 | 2019-08-09 | 北京奇艺世纪科技有限公司 | A kind of barrage display methods, device and electronic equipment |
| CN110263188A (en) * | 2019-05-29 | 2019-09-20 | 深圳市元征科技股份有限公司 | Media data methods of marking, device and equipment |
| CN110267111A (en) * | 2019-05-24 | 2019-09-20 | 平安科技(深圳)有限公司 | Video barrage analysis method, device and storage medium, computer equipment |
| CN111050193A (en) * | 2019-11-12 | 2020-04-21 | 汉口北进出口服务有限公司 | User portrait construction method and device, computer equipment and storage medium |
| CN111368204A (en) * | 2020-03-09 | 2020-07-03 | 北京字节跳动网络技术有限公司 | Content pushing method and device, electronic equipment and computer readable medium |
| CN111708901A (en) * | 2020-06-19 | 2020-09-25 | 腾讯科技(深圳)有限公司 | Multimedia resource recommendation method and device, electronic equipment and storage medium |
| CN111708941A (en) * | 2020-06-12 | 2020-09-25 | 腾讯科技(深圳)有限公司 | Content recommendation method, apparatus, computer equipment and storage medium |
| CN111860237A (en) * | 2020-07-07 | 2020-10-30 | 中国科学技术大学 | A method and device for identifying emotional clips in video |
| CN112231579A (en) * | 2019-12-30 | 2021-01-15 | 北京邮电大学 | A social video recommendation system and method based on implicit community discovery |
| CN112632277A (en) * | 2020-12-15 | 2021-04-09 | 五八同城信息技术有限公司 | Resource processing method and device for target content object |
| CN112749296A (en) * | 2019-10-31 | 2021-05-04 | 北京达佳互联信息技术有限公司 | Video recommendation method and device, server and storage medium |
| CN113766281A (en) * | 2021-09-10 | 2021-12-07 | 北京快来文化传播集团有限公司 | Short video recommendation method, electronic device and computer-readable storage medium |
| CN114064974A (en) * | 2021-11-15 | 2022-02-18 | 腾讯科技(深圳)有限公司 | Information processing method, information processing apparatus, electronic device, storage medium, and program product |
| CN115510269A (en) * | 2022-09-29 | 2022-12-23 | 中国银行股份有限公司 | Video recommendation method, device, equipment and storage medium |
| CN116781949A (en) * | 2023-07-26 | 2023-09-19 | 新励成教育科技股份有限公司 | Recommendation method, system, equipment and storage medium for talent lecture live broadcast |
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7685022B1 (en) * | 2007-06-15 | 2010-03-23 | Amazon Technologies, Inc. | System and method of offering media content |
| CN104504059A (en) * | 2014-12-22 | 2015-04-08 | 合一网络技术(北京)有限公司 | Multimedia resource recommending method |
| CN105095508A (en) * | 2015-08-31 | 2015-11-25 | 北京奇艺世纪科技有限公司 | Multimedia content recommendation method and multimedia content recommendation apparatus |
| CN106454422A (en) * | 2016-07-01 | 2017-02-22 | 江苏省公用信息有限公司 | Fingerprint recognition-based IPTV program recommending method and apparatus |
| CN106952111A (en) * | 2017-02-27 | 2017-07-14 | 东软集团股份有限公司 | Personalized recommendation method and device |
| CN107038480A (en) * | 2017-05-12 | 2017-08-11 | 东华大学 | A kind of text sentiment classification method based on convolutional neural networks |
Non-Patent Citations (2)
| Title |
|---|
| DENG Yang et al.: "Video clip recommendation model based on barrage sentiment analysis", Journal of Computer Applications (《计算机应用》) * |
| LEI Ming et al.: "Application of sentiment analysis in movie recommendation system", Computer Engineering and Applications (《计算机工程与应用》) * |
Cited By (28)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109361932B (en) * | 2018-11-23 | 2021-01-01 | 武汉斗鱼网络科技有限公司 | Live broadcast heat prediction method, device, equipment and medium |
| CN109361932A (en) * | 2018-11-23 | 2019-02-19 | 武汉斗鱼网络科技有限公司 | The method that temperature prediction is broadcast live, device, equipment and medium |
| CN109743617B (en) * | 2018-12-03 | 2020-11-24 | 清华大学 | A jump navigation method and device for video playback |
| CN109743617A (en) * | 2018-12-03 | 2019-05-10 | 清华大学 | A jump navigation method and device for video playback |
| CN109862397A (en) * | 2019-02-02 | 2019-06-07 | 广州虎牙信息科技有限公司 | A kind of video analysis method, apparatus, equipment and storage medium |
| CN109862397B (en) * | 2019-02-02 | 2021-11-09 | 广州虎牙信息科技有限公司 | Video analysis method, device, equipment and storage medium |
| CN110113673A (en) * | 2019-04-30 | 2019-08-09 | 北京奇艺世纪科技有限公司 | A kind of barrage display methods, device and electronic equipment |
| CN110267111A (en) * | 2019-05-24 | 2019-09-20 | 平安科技(深圳)有限公司 | Video barrage analysis method, device and storage medium, computer equipment |
| CN110263188A (en) * | 2019-05-29 | 2019-09-20 | 深圳市元征科技股份有限公司 | Media data methods of marking, device and equipment |
| CN110263188B (en) * | 2019-05-29 | 2023-03-28 | 深圳市元征科技股份有限公司 | Media data scoring method, device and equipment |
| CN112749296B (en) * | 2019-10-31 | 2024-01-26 | 北京达佳互联信息技术有限公司 | Video recommendation method, device, server and storage medium |
| CN112749296A (en) * | 2019-10-31 | 2021-05-04 | 北京达佳互联信息技术有限公司 | Video recommendation method and device, server and storage medium |
| CN111050193A (en) * | 2019-11-12 | 2020-04-21 | 汉口北进出口服务有限公司 | User portrait construction method and device, computer equipment and storage medium |
| CN111050193B (en) * | 2019-11-12 | 2022-06-10 | 汉口北进出口服务有限公司 | User portrait construction method and device, computer equipment and storage medium |
| CN112231579B (en) * | 2019-12-30 | 2022-10-28 | 北京邮电大学 | A social video recommendation system and method based on implicit community discovery |
| CN112231579A (en) * | 2019-12-30 | 2021-01-15 | 北京邮电大学 | A social video recommendation system and method based on implicit community discovery |
| CN111368204A (en) * | 2020-03-09 | 2020-07-03 | 北京字节跳动网络技术有限公司 | Content pushing method and device, electronic equipment and computer readable medium |
| CN111708941A (en) * | 2020-06-12 | 2020-09-25 | 腾讯科技(深圳)有限公司 | Content recommendation method, apparatus, computer equipment and storage medium |
| CN111708941B (en) * | 2020-06-12 | 2025-05-09 | 深圳市雅阅科技有限公司 | Content recommendation method, device, computer equipment and storage medium |
| CN111708901B (en) * | 2020-06-19 | 2023-10-13 | 腾讯科技(深圳)有限公司 | Multimedia resource recommendation method and device, electronic equipment and storage medium |
| CN111708901A (en) * | 2020-06-19 | 2020-09-25 | 腾讯科技(深圳)有限公司 | Multimedia resource recommendation method and device, electronic equipment and storage medium |
| CN111860237B (en) * | 2020-07-07 | 2022-09-06 | 中国科学技术大学 | A method and device for identifying emotional clips in video |
| CN111860237A (en) * | 2020-07-07 | 2020-10-30 | 中国科学技术大学 | A method and device for identifying emotional clips in video |
| CN112632277A (en) * | 2020-12-15 | 2021-04-09 | 五八同城信息技术有限公司 | Resource processing method and device for target content object |
| CN113766281A (en) * | 2021-09-10 | 2021-12-07 | 北京快来文化传播集团有限公司 | Short video recommendation method, electronic device and computer-readable storage medium |
| CN114064974A (en) * | 2021-11-15 | 2022-02-18 | 腾讯科技(深圳)有限公司 | Information processing method, information processing apparatus, electronic device, storage medium, and program product |
| CN115510269A (en) * | 2022-09-29 | 2022-12-23 | 中国银行股份有限公司 | Video recommendation method, device, equipment and storage medium |
| CN116781949A (en) * | 2023-07-26 | 2023-09-19 | 新励成教育科技股份有限公司 | Recommendation method, system, equipment and storage medium for talent lecture live broadcast |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN108737859A (en) | Video recommendation method based on barrage and device | |
| CN112533051B (en) | Barrage information display method, barrage information display device, computer equipment and storage medium | |
| CN109547814B (en) | Video recommendation method and device, server and storage medium | |
| CN108509465B (en) | Video data recommendation method and device and server | |
| CN111708941B (en) | Content recommendation method, device, computer equipment and storage medium | |
| Ertugrul et al. | Movie genre classification from plot summaries using bidirectional LSTM | |
| US11868738B2 (en) | Method and apparatus for generating natural language description information | |
| CN114036398B (en) | Content recommendation and ranking model training method, device, equipment and storage medium | |
| CN111831924A (en) | Content recommendation method, apparatus, device and readable storage medium | |
| US20150347905A1 (en) | Modeling user attitudes toward a target from social media | |
| CN111858969B (en) | Multimedia data recommendation method, device, computer equipment and storage medium | |
| CN111738807B (en) | Method, computing device, and computer storage medium for recommending target objects | |
| CN113761271A (en) | Method and device for video recommendation and refrigerator with display screen | |
| Pentland et al. | Does accuracy matter? Methodological considerations when using automated speech-to-text for social science research | |
| Deng et al. | ContentCTR: Frame-level live streaming click-through rate prediction with multimodal transformer | |
| CN113688281B (en) | Video recommendation method and system based on deep learning behavior sequence | |
| CN116628202A (en) | Intention recognition method, electronic device, and storage medium | |
| CN119578427B (en) | Attribute-level emotion classification method and equipment | |
| CN116610872B (en) | Training method and device for news recommendation model | |
| CN111385659A (en) | Video recommendation method, device, equipment and storage medium | |
| CN114254151A (en) | Training method of search term recommendation model, search term recommendation method and device | |
| CN113987262A (en) | Video recommendation information determination method and device, electronic equipment and storage medium | |
| CN119248962A (en) | Video recommendation method, model training method, device, equipment and storage medium | |
| Liu et al. | Cost-effective modality selection for video popularity prediction | |
| CN117909542A (en) | Video recommendation method, device, equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20181102 |
| | WD01 | Invention patent application deemed withdrawn after publication | |