
CN119917747B - Personalized recommendation method, device and medium based on high-order feature interaction - Google Patents


Info

Publication number: CN119917747B
Authority: CN (China)
Prior art keywords: feature, layer, features, vector, personalized recommendation
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202510421791.3A
Other languages: Chinese (zh)
Other versions: CN119917747A (en)
Inventors: 赵震, 袁玉宁, 朱少帅, 佟鑫, 张文函, 李艳丽, 付晨阳, 张舒
Current assignee: Inspur General Software Co Ltd (the listed assignees may be inaccurate)
Original assignee: Inspur General Software Co Ltd
Filing: application filed by Inspur General Software Co Ltd
Priority application: CN202510421791.3A; published as CN119917747A, granted and published as CN119917747B

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)
  • Machine Translation (AREA)

Abstract

The application discloses a personalized recommendation method, device and medium based on high-order feature interaction, relating to the field of deep learning. The method comprises: inputting acquired user information and knowledge in a knowledge base into a pre-trained personalized recommendation model, and outputting a classification variable indicating whether the user has acquired the knowledge; and performing personalized recommendation of knowledge to the user according to the classification variable. Compared with traditional shallow models (such as the factorization machine and the domain factorization machine), the personalized recommendation model is a deep-learning-based model that can effectively learn strongly interacting low-order features by using a domain factorization machine, and can construct high-order features by combining a cross network module and a deep learning module.

Description

Personalized recommendation method, device and medium based on high-order feature interaction
Technical Field
The application relates to the field of deep learning, in particular to a personalized recommendation method, device and medium based on high-order feature interaction.
Background
With the rapid development of internet and big data technologies, recommendation systems have made progress in intelligence and personalization. However, traditional schemes mainly rely on simple statistics over rating data and user behavior, and struggle to meet users' growing demands for personalization, real-time response and diversity.
The application of deep learning and data mining technologies has further promoted the intelligent development of recommendation systems. However, the prior art still faces problems to be solved, such as data sparsity, dynamic change of user interests, insufficient content attributes and poor model interpretability.
Disclosure of Invention
In order to solve the above problems, the present application provides a personalized recommendation method based on high-order feature interaction, comprising:
Inputting the acquired user information and knowledge in a knowledge base into a pre-trained personalized recommendation model, outputting a classification variable used for indicating whether the user has acquired the knowledge, and performing personalized recommendation of the knowledge to the user according to the classification variable;
The personalized recommendation model comprises an input layer, an embedding layer, a characteristic interaction layer and a combined output layer;
the input layer is used for inputting the user information and the knowledge in the form of feature vectors;
the embedding layer converts the high-dimensional sparse feature vector into a low-dimensional dense embedding vector;
The feature interaction layer comprises a deep neural network module, a cross network module, a domain factor decomposition machine module and an attention mechanism module;
The deep neural network module and the cross network module construct high-order feature interaction through a cross network and a deep neural network;
The domain factor decomposition machine module defines a feature domain through a domain factor decomposition machine, groups the features and constructs a linear relation and a second-order feature interaction relation of the features;
the attention mechanism module learns the weight of second-order feature interaction through an attention network;
and the combined output layer predicts according to the output vectors of the deep neural network module, the cross network module and the attention mechanism module.
In one example, the personalized recommendation model includes, during model training:
the input layer acquires a data set and converts information features in the data set into feature vectors through coding;
The information features comprise a feature column and a tag column, wherein the feature column comprises multi-dimensional features corresponding to user information and knowledge, and the tag column comprises whether the user has acquired the knowledge or not;
the feature types of the multi-dimensional features comprise classified features and continuous features, and the classified features and the continuous features are processed through coding respectively.
In one example, the embedding layer, for each feature domain, converts the feature vector into an embedding vector corresponding to the feature domain through an embedding matrix corresponding to the feature domain;
And combining the embedded vectors corresponding to the feature domains to obtain the output vector of the embedded layer.
In one example, the deep neural network module includes a Product layer, a hidden layer, an output layer;
the Product layer includes a linear portion and a nonlinear portion;
The linear part outputs a corresponding linear part output result through a corresponding weight matrix and a linear signal vector, wherein the linear signal vector is obtained through a corresponding embedded vector;
and the nonlinear part outputs a corresponding nonlinear part output result through a corresponding weight matrix and a quadratic signal vector, wherein the quadratic signal vector is obtained through a plurality of embedded vectors.
In one example, the crossover network module includes a plurality of crossover layers;
And each crossing layer performs crossing calculation with the output vector of the previous layer through the characteristic vector input by the current layer, adjusts the contribution of each layer in the crossing calculation through the weight vector of the current layer, and obtains the output of the current layer through the adjusted result, the original input vector and the offset item.
In one example, the domain factorizer module defines features of a single class for each feature domain, each feature domain including a plurality of features;
for each feature, assigning a plurality of hidden vectors to the feature;
and aiming at each feature in the feature interaction, calculating hidden vectors of the feature in a feature domain of the opposite side, and outputting a corresponding linear relation and a second-order feature interaction relation according to calculation results of all feature interactions, feature weights and global bias items corresponding to all features.
In one example, the attention mechanism module processes the weight matrix corresponding to the second-order feature interaction, the calculation result of the hidden vector and the corresponding bias term through the activation function, obtains the corresponding attention score according to the processing result and the attention network parameter, and normalizes the attention score.
In one example, the combined output layer concatenates the output vectors of the deep neural network module, the crossover network module, the attention mechanism module, and the bias term, and predicts the concatenated vectors by an activation function.
On the other hand, the application also provides a personalized recommendation device based on high-order feature interaction, comprising:
At least one processor, and
A memory communicatively coupled to the at least one processor, wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to execute the personalized recommendation method based on high-order feature interaction as described in any of the examples above.
On the other hand, the application also provides a nonvolatile computer storage medium storing computer-executable instructions, wherein the computer-executable instructions are configured to perform the personalized recommendation method based on high-order feature interaction according to any one of the above examples.
The personalized recommendation method based on the high-order feature interaction provided by the application has the following beneficial effects:
Compared with traditional shallow models (such as the factorization machine and the domain factorization machine), the personalized recommendation model is a deep-learning-based model that can effectively learn strongly interacting low-order features by using a domain factorization machine, and can construct high-order features by combining the cross network module and the deep learning module.
Compared with traditional models such as DeepFM, which process feature interactions only implicitly through a deep neural network, the parallel architecture of the personalized recommendation model enables it to explicitly construct high-order feature interactions of arbitrary order through the cross network.
The personalized recommendation model introduces a Product layer in front of the hidden layer of the deep learning module for capturing nonlinear relations among features, so that the capability of high-order feature interaction is further enhanced.
Compared with traditional models such as xDeepFM and EDCN: although those models also combine explicit and implicit high-order feature construction, unlike the personalized recommendation model in the application they neither construct second-order feature interactions with a shallow module nor attend to the weight of each second-order feature interaction, and thus lose the ability to capture key low-order information.
The personalized recommendation model can effectively capture the complex relations between users and knowledge, thereby effectively improving the pertinence and accuracy of knowledge recommendation. The method and device solve problems of the traditional scheme such as data sparsity, dynamic change of user interests, insufficient content attributes and poor model interpretability.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of a personalized recommendation method based on high-order feature interaction in an embodiment of the application;
FIG. 2 is a schematic diagram of a personalized recommendation model according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a process of converting an embedded layer according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a deep neural network module according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a computational visualization of a cross-network module in one embodiment of the present application;
Fig. 6 is a schematic diagram of a personalized recommendation device based on high-order feature interaction in an embodiment of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides a personalized recommendation method based on high-order feature interaction, including:
S101, inputting the acquired user information and knowledge in a knowledge base into a pre-trained personalized recommendation model, and outputting a classification variable used for indicating whether the user has acquired the knowledge.
The initiation of personalized recommendation may be active or passive. For example, after a user posts a query on a search platform, the platform passively performs the corresponding personalized recommendation according to the user's demand; alternatively, an e-commerce platform actively makes personalized recommendations of corresponding commodities according to the user's purchase records.
The user information and the knowledge in the knowledge base are preset and acquired. Knowledge and knowledge bases can also be set to corresponding types based on differences in actual scenarios. For example, knowledge may include web page addresses, commodity knowledge, skill knowledge, business knowledge, and the like.
When personalized recommendation is required to be performed for the user, a corresponding knowledge base is selected according to the current recommendation scene (for example, the current recommendation scene comprises recommended commodities, recommended books, recommended services and the like), and whether the user has acquired the corresponding knowledge is indicated by the classification variable output by the personalized recommendation model.
The classification variable output by the personalized recommendation model (High-order Feature Interaction Model, HIFM) is either 0 or 1, indicating whether the user has acquired this knowledge.
S102, conducting personalized recommendation of knowledge to the user according to the classification variables.
The final personalized recommendation may also be different for different recommendation scenarios. For example, for a user search query, knowledge that the user has not acquired may be recommended preferentially. For merchandise recommendation, the same or similar merchandise may be recommended by the merchandise that the user has purchased (corresponding to the knowledge that has been acquired). For book recommendation, a similar or not yet read book may be recommended based on the books that the user has read (corresponding to the knowledge that has been acquired). For service recommendation, the service to be executed next may be recommended according to the service already executed by the user (corresponding to the acquired knowledge), referring to the service execution flow.
The HIFM model realizes the linear relation and second-order feature interaction between features through a domain factorization machine, introduces an attention network to learn the weight information of the second-order interaction features, explicitly constructs high-order feature interactions through a cross network, and implicitly constructs high-order feature interactions through a deep neural network. Finally, the outputs of the deep neural network, the cross network, and the domain factorization machine (weighted by the attention network) are spliced to form the final prediction result of the model.
As shown in FIG. 2, the personalized recommendation model comprises an input layer, an embedding layer, a feature interaction layer and a combined output layer. And the input layer is used for inputting the processed user information and knowledge in the data set in the form of feature vectors. And the embedding layer is used for converting the high-dimensional sparse feature vectors into low-dimensional dense embedding vectors. The system comprises a feature interaction layer, a deep neural network module, a cross network module, a domain factor decomposition machine module and an attention mechanism module, wherein the deep neural network module and the cross network module construct high-order feature interaction through the cross network and the deep neural network, the domain factor decomposition machine module defines a feature domain through the domain factor decomposition machine and groups the features to construct a linear relation and a second-order feature interaction relation of the features, and the attention mechanism module learns the weight of the second-order feature interaction through the attention network. And the combined output layer predicts according to the output vectors of the deep neural network module, the cross network module and the attention mechanism module.
The input layer, which is the first step of the HIFM model, is to input the information features of the user and the information features of the knowledge into the model in the form of vectors. In the model training process of the personalized recommendation model, an original data set is set as a training sample (after training is finished, the data set corresponds to actual data content to be predicted).
In the training process, a data set is acquired, and information features in the data set are converted into feature vectors through encoding.
The information features comprise a feature column and a tag column, wherein the feature column comprises multi-dimensional features corresponding to user information and knowledge, such as various features of gender, age, hobbies, knowledge type and the like of the user. The tag column includes whether the user has acquired the knowledge (e.g., by determining whether the user has acquired the knowledge by clicking on the knowledge in a web page, platform, etc.).
The feature types of the multidimensional features comprise classified features and continuous features, and the classified features and the continuous features are processed through coding respectively. For the feature processing mode adopted by the classification type feature, the feature data is converted into an One-Hot coding mode, for example, the sex feature is coded into 10 or 01, and the sex feature represents a male if the first bit is 1, and represents a female if the second bit is 1. And discretizing the continuous feature data for continuous feature selection.
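The encoding described above can be sketched as follows. This is an illustrative numpy sketch, not the patent's implementation; the feature names (gender, age) and the bucket boundaries are hypothetical:

```python
import numpy as np

def one_hot(value, vocabulary):
    """One-hot encode a categorical value against a fixed vocabulary."""
    vec = np.zeros(len(vocabulary))
    vec[vocabulary.index(value)] = 1.0
    return vec

def discretize(value, bins):
    """Bucketize a continuous value, then one-hot encode the bucket index."""
    idx = int(np.digitize(value, bins))  # index of the bucket the value falls in
    vec = np.zeros(len(bins) + 1)
    vec[idx] = 1.0
    return vec

# Gender encoded as 10 / 01, as in the example above
gender = one_hot("male", ["male", "female"])

# Age (continuous) discretized into buckets (-inf,18), [18,30), [30,50), [50,inf)
age = discretize(27, bins=[18, 30, 50])

# The sample's input feature vector is the concatenation of all encoded fields
x = np.concatenate([gender, age])
```

The concatenated vector `x` is the kind of high-dimensional sparse input the embedding layer then compresses.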
And the embedding layer is used for converting the feature vector of each feature domain into an embedding vector corresponding to the feature domain through an embedding matrix corresponding to the feature domain. And combining the embedded vectors corresponding to the feature domains to obtain an output vector of the embedded layer.
While One-Hot encoding standardizes the model inputs to some extent, it can lead to a dramatic increase in data dimensionality when dealing with features that have many categories, such as knowledge types, resulting in high-dimensional sparse data. In this case, the effective expression of the features may be affected; if computation were performed directly on such data, the computational cost of the model would be huge and its performance would suffer. Therefore, the HIFM model introduces an embedding layer after the input layer to transform the high-dimensional sparse features into low-dimensional dense vectors. The transformation of the embedding process is shown in formula one:

$$e_i = E_i x_i \qquad \text{(formula one)}$$

where $x_i$ represents the feature vector of the $i$-th feature field, $E_i$ represents the embedding matrix of the $i$-th feature field, and $e_i$ represents the embedding vector of the $i$-th feature field.
As shown in fig. 3, feature vectors of different dimensions $x_1, x_2, \ldots, x_m$ yield, after processing by the embedding layer, a group of embedding vectors with the same dimension. Each embedding matrix $E_i$ used in the conversion is initialized with floating-point values, and the optimal parameters are learned through back propagation of the model. The output vector $E$ of the embedding layer is defined as follows:

$$E = [e_1, e_2, \ldots, e_m] \in \mathbb{R}^{m \times d} \qquad \text{(formula two)}$$

where $E$ represents the output vector of the embedding layer, $e_i$ represents the embedding vector of the $i$-th feature field, $m$ represents the total number of feature fields, and $d$ represents the dimension of the embedding vectors.
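A minimal numpy sketch of the per-field embedding lookup in formulas one and two; the field count, vocabulary sizes and embedding dimension are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: m = 3 feature fields with vocabulary sizes 2, 4, 5,
# all mapped into the same embedding dimension d = 3.
vocab_sizes, d = [2, 4, 5], 3

# One embedding matrix E_i per feature field (formula one: e_i = E_i x_i)
E = [rng.normal(size=(d, n)) for n in vocab_sizes]

def embed(one_hot_fields):
    # Each field's one-hot vector selects one column of its embedding matrix
    return np.stack([E[i] @ x_i for i, x_i in enumerate(one_hot_fields)])

x1 = np.array([1.0, 0.0])                   # field 1
x2 = np.array([0.0, 0.0, 1.0, 0.0])         # field 2
x3 = np.array([0.0, 1.0, 0.0, 0.0, 0.0])    # field 3

# Output of the embedding layer (formula two): shape (m, d),
# one dense d-dimensional embedding per feature field
Eout = embed([x1, x2, x3])
```

In training, the matrices `E[i]` would be updated by back propagation rather than kept at their random initialization.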
As shown in fig. 4, the deep neural network module includes a Product layer, a hidden layer, and an output layer. HIFM implicitly builds high-order feature combinations through the deep neural network. In essence this is a feedforward neural network; however, when high-order features are learned only implicitly through fully connected layers, the relations between features are difficult to capture fully. Therefore, a Product layer is added before the first hidden layer to further capture the nonlinear relations between features and realize feature crossing.
The Product layer comprises a linear part and a nonlinear part. The linear part outputs its result through a corresponding weight matrix and the linear signal vector, where the linear signal vector is obtained from the embedding vectors; the nonlinear part outputs its result through a corresponding weight matrix and the quadratic signal vector, where the quadratic signal vector is obtained from pairs of embedding vectors.
The Product layer mainly focuses on the original information of the embedding vectors and on their interaction information, making the information richer and improving the model effect. The layer consists of a linear part $l_z$ and a nonlinear part $l_p$; the specific calculation is shown in formula three and formula four:

$$l_z = W_z \odot z \qquad \text{(formula three)}$$

$$l_p = W_p \odot p, \quad p_{i,j} = \langle e_i, e_j \rangle \qquad \text{(formula four)}$$

where $z$ represents the linear signal vector (formed from the embedding vectors), $p$ represents the quadratic signal vector formed from the pairwise terms $p_{i,j}$, $\odot$ represents the Hadamard product operation, $e_i$ represents an embedding vector, $\langle e_i, e_j \rangle$ represents the inner product of the vectors $e_i$ and $e_j$, and $W_z$, $W_p$ represent weight matrices.
As shown in fig. 4, the left side of the Product layer directly copies the embedded vector, and introduces a constant signal "1" into the embedded layer for multiplication, and the right side performs inner Product cross calculation on the embedded vector, and then the full connection layer converts the dimensions of the two parts into the same dimension, and inputs the same dimension to the following hidden layer. By introducing a Product layer, the interaction between features is further enhanced.
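A numpy sketch of the Product layer's two parts. It assumes the common PNN-style reading of formulas three and four, where each output unit is a weighted sum over the linear signals (the embeddings themselves) and the quadratic signals (pairwise inner products); the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, h = 3, 4, 5              # fields, embedding dim, Product-layer width
e = rng.normal(size=(m, d))    # embedding vectors from the embedding layer

# Linear part (formula three): each unit k sums W_z[k] element-wise with
# the linear signal z, which is just the stack of embedding vectors
W_z = rng.normal(size=(h, m, d))
l_z = np.tensordot(W_z, e, axes=([1, 2], [0, 1]))

# Nonlinear part (formula four): quadratic signal p[i, j] = <e_i, e_j>
p = e @ e.T
W_p = rng.normal(size=(h, m, m))
l_p = np.tensordot(W_p, p, axes=([1, 2], [0, 1]))

# Both parts land in the same dimension h and feed the first hidden layer;
# ReLU is an illustrative choice of activation
hidden_in = np.maximum(0.0, l_z + l_p)
```

The left/right split in fig. 4 corresponds to `l_z` (copied embeddings) and `l_p` (inner-product crosses) here.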
The cross network module comprises a plurality of cross layers, wherein each cross layer carries out cross calculation with the output vector of the previous layer through the characteristic vector input by the current layer, adjusts the contribution of each layer in the cross calculation through the weight vector of the current layer, and obtains the output of the current layer through the adjusted result, the original input vector and the offset item.
Crosses between features can provide more information than the individual features. Although a deep neural network can implicitly construct high-order features through multi-layer nonlinear transformations, it suffers from insufficient interpretability and low efficiency. Therefore, feature crossing is applied at each layer through a cross network (Cross Network), and high-order feature combinations are constructed explicitly. The cross network is formed by stacking multiple cross layers, each of which captures a higher order of crossing of the input features. Specifically, each layer computes the outer product of the original input feature vector and the previous layer's output, adjusts the contribution of the cross through a dot product with the layer's weight vector, and finally adds the result to the original input vector and a bias term to form the layer's output. The calculation is shown in formula five:
$$x_{l+1} = x_0 x_l^{T} w_l + b_l + x_l \qquad \text{(formula five)}$$

where $x_l$ and $x_{l+1}$ represent the outputs of the $l$-th and $(l+1)$-th cross layers respectively, $w_l$ represents the learned parameters of the $l$-th layer, and $b_l$ represents the bias of the $l$-th layer. As shown in FIG. 5, based on the residual mechanism, the output of each cross layer adds the output of the previous layer; e.g., the $(l+1)$-th layer output includes the constructed cross feature $x_0 x_l^{T} w_l$ and the $l$-th layer output $x_l$.
The above describes the cross network's computation at a macroscopic level; the per-layer computation can be illustrated in detail by the first layer. Let the input $x_0$ be the embedding vector output by the embedding layer, with embedding dimension $d$, and let the bias term be 0. The output of the first cross layer is shown in formula six:

$$x_1 = x_0 x_0^{T} w_0 + x_0 \qquad \text{(formula six)}$$

From formula six, in the first layer, $x_0$ and its transpose $x_0^{T}$ undergo an outer product operation to form a matrix, each element of which is the product of two elements of the embedding vector. The matrix is then multiplied by the parameter vector $w_0$ to generate a second-order cross term. Finally, based on the residual mechanism, the second-order cross term is added to the output of the previous layer, $x_0$. By repeating this process, multiple cross layers are stacked to effectively build higher-order features. Furthermore, the time and space complexity of the cross network is linear in the input dimension, so the complexity introduced by the cross network is negligible relative to the deep neural network module.
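The per-layer recurrence of the cross network described above can be sketched in a few lines of numpy; note that $x_0 x_l^{T} w_l$ collapses to $x_0 \cdot (x_l^{T} w_l)$, a vector times a scalar, which is what keeps the layer's cost linear in the input dimension. Dimensions and layer count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n_layers = 4, 3
x0 = rng.normal(size=dim)            # embedding-layer output (flattened)

# One weight vector w_l and bias b_l per cross layer (formula five)
W = [rng.normal(size=dim) for _ in range(n_layers)]
b = [np.zeros(dim) for _ in range(n_layers)]

def cross_network(x0):
    x = x0
    for w_l, b_l in zip(W, b):
        # x_{l+1} = x0 (x_l^T w_l) + b_l + x_l  (residual connection)
        x = x0 * (x @ w_l) + b_l + x
    return x

out = cross_network(x0)
# Each layer raises the maximum polynomial degree of x0 by one, so
# stacking 3 layers explicitly constructs crosses up to 4th order.
```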
The domain factorization machine module defines a feature domain for each single class of features, each feature domain comprising a plurality of features; assigns a plurality of hidden vectors to each feature; and, for each feature in a feature interaction, computes with the hidden vector that the feature holds for the other feature's domain. It then outputs the corresponding linear relation and second-order feature interaction relation according to the calculation results of all feature interactions, the feature weights corresponding to all features, and the global bias term.
The field-aware factorization machine (Field-aware Factorization Machine, FFM) is an improvement over the factorization machine (Factorization Machine, FM), intended to improve the accuracy of predictive models when handling complex interactions between features. The FFM groups features by introducing the concept of fields, which substantially improves the interaction capability among features. The relation between a field and a feature is as follows: each class of features may be regarded as a feature field, and each feature field contains a plurality of features; e.g., the knowledge type may be regarded as a feature field, and a particular knowledge type such as the technology class may be regarded as a feature. In the FM model, each feature uses the same hidden vector when interacting with other features, while the FFM model assigns multiple hidden vectors to each feature, so that features from different fields respond with different hidden vectors when interacting. The expression of FFM is shown in formula seven:

$$\hat{y}_{FFM} = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle v_{i,f_j}, v_{j,f_i} \rangle x_i x_j \qquad \text{(formula seven)}$$

where $w_0$ represents the global bias term, $n$ represents the total number of features, $w_i$ represents the weight of the $i$-th feature, $x_i$ represents the value of the $i$-th feature, $v_{i,f_j}$ and $v_{j,f_i}$ represent the hidden vectors of feature $i$ and feature $j$ in each other's fields, and $\langle \cdot, \cdot \rangle$ represents the hidden-vector dot product. The difference from the FM model is that the hidden vector changes from $v_i$ to $v_{i,f_j}$, i.e., each feature changes from a single hidden vector to a set of hidden vectors, which enables the FFM model to capture interactions between features more finely.
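The FFM scoring of formula seven can be sketched directly. This is a toy example with hypothetical sizes (4 features in 2 fields, hidden dimension 3), not the patent's trained model:

```python
import numpy as np

rng = np.random.default_rng(3)
n, f, k = 4, 2, 3                 # features, fields, hidden-vector dim
field_of = [0, 0, 1, 1]           # which field each feature belongs to
x = np.array([1.0, 0.0, 1.0, 0.5])

w0 = 0.1                          # global bias term
w = rng.normal(size=n)            # per-feature linear weights
V = rng.normal(size=(n, f, k))    # V[i, f_j] = hidden vector feature i uses
                                  # when it meets a feature of field f_j

def ffm(x):
    # Formula seven: linear part + field-aware second-order interactions
    y = w0 + w @ x
    for i in range(n):
        for j in range(i + 1, n):
            v_ij = V[i, field_of[j]]   # hidden vector of i toward j's field
            v_ji = V[j, field_of[i]]   # hidden vector of j toward i's field
            y += (v_ij @ v_ji) * x[i] * x[j]
    return y

score = ffm(x)
```

Setting `f = 1` would collapse `V` to one hidden vector per feature, recovering the plain FM described above.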
And the attention mechanism module is used for processing the weight matrix corresponding to the second-order feature interaction, the calculation result of the hidden vector and the corresponding bias term through the activation function, obtaining the corresponding attention score according to the processing result and the attention network parameter, and carrying out normalization processing on the attention score.
In general, not all feature interactions have a positive effect on model performance; some unimportant feature interactions may even hurt it. Therefore, the HIFM model assigns different weights to different second-order feature interactions by introducing an attention network, so that the model can automatically learn the importance of each feature interaction. The weights are computed by the attention network, as shown in formula eight and formula nine:
$$a'_{ij} = h^{T}\,\mathrm{ReLU}\!\left(W \left(v_{i,f_j} \odot v_{j,f_i}\right) x_i x_j + b\right) \qquad \text{(formula eight)}$$

$$a_{ij} = \frac{\exp(a'_{ij})}{\sum_{(i,j)} \exp(a'_{ij})} \qquad \text{(formula nine)}$$

where $h$ is a parameter of the attention network, $\mathrm{ReLU}$ is the activation function, $W$ represents the weight matrix of the feature interaction, $\left(v_{i,f_j} \odot v_{j,f_i}\right) x_i x_j$ represents the interaction term of the hidden vectors, $b$ represents the bias term, $a'_{ij}$ is the attention score, and $a_{ij}$ represents the normalized attention score.
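The attention weighting of formulas eight and nine amounts to scoring each second-order interaction with a one-layer ReLU network and softmax-normalizing the scores. A numpy sketch with illustrative dimensions (the interaction vectors stand in for the $(v_{i,f_j} \odot v_{j,f_i})\,x_i x_j$ terms):

```python
import numpy as np

rng = np.random.default_rng(4)
k, t = 3, 4                              # interaction dim, attention width
interactions = rng.normal(size=(6, k))   # one row per second-order interaction

W = rng.normal(size=(t, k))   # weight matrix of the feature interaction
b = rng.normal(size=t)        # bias term
h = rng.normal(size=t)        # attention network parameter

def attention_weights(inter):
    # Formula eight: score each interaction through the ReLU attention network
    scores = np.array([h @ np.maximum(0.0, W @ v + b) for v in inter])
    # Formula nine: softmax normalization over all interactions
    # (subtracting the max is a standard numerical-stability trick)
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

a = attention_weights(interactions)
# a sums to 1; each entry weights one second-order feature interaction
```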
The combined output layer splices the output vectors of the deep neural network module, the cross network module and the attention mechanism module together with the bias term, and makes a prediction from the spliced vector through an activation function.
The combined output layer splices the output of the deep neural network module, the output of the cross network module and the output of the domain factor decomposition machine passing through the attention network module, and predicts whether the user obtains the knowledge to browse or not through the activation function. Deep neural networks capture complex patterns by learning nonlinear combinations of input features, while crossover networks focus on learning crossover combinations between features, which is very valuable for understanding interactions between features. The domain factorizer further enhances the processing power of the model for interactions between domains of different features, allows the model to learn interactions between features at finer granularity, and learns weights of different interaction features. The information is combined by connecting the output vectors of the layers end to end, so that a more comprehensive characteristic representation is formed. The fusion strategy not only enhances the capability of capturing complex modes in data by the model, but also provides a rich information basis for final calculation output, thereby improving the prediction accuracy and generalization capability of the whole personalized recommendation model.
The combined output layer applies the sigmoid activation function to the spliced output vector to obtain the final output of the model, as shown in formula ten:
ŷ = σ(W_out [y_DNN ∥ y_Cross ∥ y_Att] + b) (formula ten);
Wherein, σ is the sigmoid activation function, ∥ denotes vector splicing, y_DNN and y_Cross represent the outputs of the deep neural network and the cross network respectively, y_Att represents the output of the domain factorization machine after the attention network, W_out is the weight of the combined output layer, and b represents the bias term.
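The splice-and-activate prediction can be sketched as below. This is a minimal sketch that assumes the spliced vector is mapped to a scalar by an output weight vector w_out before the sigmoid; the text above only fixes the splicing, the bias term and the activation, so w_out is an illustrative assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def combined_output(y_dnn, y_cross, y_att, w_out, b):
    """Splice the three module outputs, map to a scalar, apply sigmoid.
    w_out is an assumed output-layer weight vector (not stated in the
    source); b is the bias term."""
    z = np.concatenate([y_dnn, y_cross, y_att])
    return sigmoid(w_out @ z + b)

# toy outputs from the three modules (illustrative values only)
y_dnn, y_cross, y_att = np.ones(4), np.zeros(3), np.full(2, 0.5)
w_out = np.full(9, 0.1)
p = combined_output(y_dnn, y_cross, y_att, w_out, b=-0.5)  # sigmoid(0.0) = 0.5
```

The scalar p is the predicted probability that the user will browse the knowledge, which is then thresholded into the classification variable.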
The personalized recommendation model is a deep-learning-based model. Compared with traditional shallow models (such as the factorization machine and the domain factorization machine), it can effectively learn strongly interacting low-order features with the domain factorization machine, and can additionally construct high-order features by combining the cross network module and the deep learning module.
Compared with traditional models such as DeepFM, which handle feature interactions only implicitly through a deep neural network, the parallel architecture of the personalized recommendation model explicitly constructs high-order feature interactions of arbitrary order through the cross network.
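A cross layer of the kind used in such explicit cross networks can be sketched as below. The recurrence x_{l+1} = x_0 (x_l · w_l) + b_l + x_l is the standard published DCN form and is assumed here for illustration; the exact parameterization used by the model is not spelled out in this passage.

```python
import numpy as np

def cross_layer(x0, xl, w, b):
    """One cross layer: x_{l+1} = x0 * (xl . w) + b + xl (DCN-style,
    assumed form). Each layer raises the maximum order of explicit
    feature interaction by one while keeping the vector dimension."""
    return x0 * (xl @ w) + b + xl

rng = np.random.default_rng(1)
d, n_layers = 5, 3
x0 = rng.normal(size=d)  # original input vector
x = x0
for _ in range(n_layers):
    # stacking layers yields interactions up to order n_layers + 1
    x = cross_layer(x0, x, rng.normal(size=d), rng.normal(size=d))
```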
The personalized recommendation model introduces a Product layer before the hidden layers of the deep learning module to capture nonlinear relations among features, further strengthening its capacity for high-order feature interaction.
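A Product layer in this style can be sketched as below. This assumes the inner-product form (as in PNN): a linear signal from the field embeddings plus a quadratic signal from pairwise inner products. The projection weights Wz and Wp are hypothetical names introduced for illustration.

```python
import numpy as np

def product_layer(embeds, Wz, Wp):
    """Inner-product Product layer (PNN-style sketch). embeds is the
    (fields, k) matrix of field embeddings; Wz and Wp are assumed
    projection weights for the linear and quadratic signals."""
    f, k = embeds.shape
    lz = Wz @ embeds.reshape(-1)              # linear signal part
    p = np.array([embeds[i] @ embeds[j]       # quadratic signal part:
                  for i in range(f)           # inner products <e_i, e_j>
                  for j in range(i + 1, f)])  # over all embedding pairs
    lp = Wp @ p
    return np.concatenate([lz, lp])           # fed to the hidden layers

rng = np.random.default_rng(2)
f, k, d1 = 4, 3, 5
embeds = rng.normal(size=(f, k))
Wz = rng.normal(size=(d1, f * k))
Wp = rng.normal(size=(d1, f * (f - 1) // 2))
out = product_layer(embeds, Wz, Wp)
```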
Compared with traditional models such as xDeepFM and EDCN, which also combine explicit and implicit high-order feature construction, those models do not, like the personalized recommendation model of the present application, use a shallow module to construct second-order feature interactions, and they pay no attention to the weight of each second-order feature interaction, so they lose the ability to capture low-order key information.
The personalized recommendation model can effectively capture the complex relations between users and knowledge, thereby effectively improving the pertinence and accuracy of the knowledge recommended to each user. The present application thus alleviates the problems of data sparsity, dynamically changing user interests, insufficient content attributes and poor model interpretability found in traditional schemes.
As shown in fig. 6, the embodiment of the present application further provides a personalized recommendation device based on high-order feature interaction, including:
At least one processor, and
A memory communicatively coupled to the at least one processor, wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the personalized recommendation method based on high-order feature interaction described in any of the embodiments above.
The embodiment of the application also provides a non-volatile computer storage medium storing computer-executable instructions, wherein the computer-executable instructions are configured to perform the personalized recommendation method based on high-order feature interaction described above.
The embodiments of the present application are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus and medium embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The apparatus and medium provided in the embodiments of the present application correspond one-to-one with the method, so they share the beneficial technical effects of the corresponding method; since those beneficial technical effects have been described in detail above, they are not repeated here.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (8)

1. A personalized recommendation method based on high-order feature interaction is characterized by comprising the following steps:
inputting the acquired user information and knowledge in a knowledge base into a pre-trained personalized recommendation model, and outputting a classification variable for indicating whether the user has acquired knowledge;
according to the classification variables, carrying out personalized recommendation of knowledge on the user;
The personalized recommendation model comprises an input layer, an embedding layer, a characteristic interaction layer and a combined output layer;
the input layer is used for inputting the user information and the knowledge in the form of feature vectors;
the embedding layer converts the high-dimensional sparse feature vector into a low-dimensional dense embedding vector;
The feature interaction layer comprises a deep neural network module, a cross network module, a domain factor decomposition machine module and an attention mechanism module;
The deep neural network module and the cross network module construct high-order feature interaction through a cross network and a deep neural network;
The domain factor decomposition machine module defines a feature domain through a domain factor decomposition machine, groups the features and constructs a linear relation and a second-order feature interaction relation of the features;
the attention mechanism module learns the weight of second-order feature interaction through an attention network;
The combined output layer predicts according to the output vectors of the deep neural network module, the cross network module and the attention mechanism module;
In the model training process, the personalized recommendation model operates as follows:
the input layer acquires a data set and converts information features in the data set into feature vectors through coding;
The information features comprise a feature column and a tag column, wherein the feature column comprises multi-dimensional features corresponding to user information and knowledge, and the tag column comprises whether the user has acquired the knowledge or not;
The feature types of the multidimensional features comprise classified features and continuous features, and the classified features and the continuous features are processed through coding respectively;
The deep neural network module comprises a Product layer, a hidden layer and an output layer;
the Product layer includes a linear portion and a nonlinear portion;
The linear part outputs a corresponding linear part output result through a corresponding weight matrix and a linear signal vector, wherein the linear signal vector is obtained through a corresponding embedded vector;
and the nonlinear part outputs a corresponding nonlinear part output result through a corresponding weight matrix and a secondary signal vector, wherein the secondary signal vector is obtained through a plurality of embedded vectors.
2. The personalized recommendation method based on high-order feature interaction according to claim 1, wherein the embedding layer converts, for the feature vector of each feature domain, the feature vector into the embedding vector corresponding to the feature domain through an embedding matrix corresponding to the feature domain;
And combining the embedded vectors corresponding to the feature domains to obtain the output vector of the embedded layer.
3. The personalized recommendation method based on high-order feature interactions of claim 1, wherein the crossover network module comprises a plurality of crossover layers;
And each crossing layer performs crossing calculation with the output vector of the previous layer through the characteristic vector input by the current layer, adjusts the contribution of each layer in the crossing calculation through the weight vector of the current layer, and obtains the output of the current layer through the adjusted result, the original input vector and the offset item.
4. The personalized recommendation method based on high-order feature interactions of claim 1, wherein the domain factorizer module defines, for each feature domain, the features of a single category corresponding to that feature domain, each feature domain comprising a plurality of features;
for each feature, assigning a plurality of hidden vectors to the feature;
and for each feature in a feature interaction, computing with the hidden vector of that feature corresponding to the feature domain of the other feature, and outputting the corresponding linear relation and second-order feature interaction relation according to the calculation results of all feature interactions, the feature weights corresponding to all features, and the global bias term.
5. The personalized recommendation method based on high-order feature interactions according to claim 4, wherein the attention mechanism module processes a weight matrix corresponding to second-order feature interactions, a calculation result of hidden vectors and corresponding bias terms through an activation function, obtains corresponding attention scores according to the processing result and attention network parameters, and normalizes the attention scores.
6. The personalized recommendation method based on high-order feature interactions of claim 1, wherein the combined output layer concatenates the output vectors of the deep neural network module, the crossover network module, the attention mechanism module, and bias terms, and predicts the concatenated vectors by an activation function.
7. A personalized recommendation device based on high-order feature interactions, comprising:
At least one processor, and
A memory communicatively coupled to the at least one processor, wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the personalized recommendation method based on high-order feature interaction as set forth in any one of claims 1-6.
8. A non-transitory computer storage medium storing computer-executable instructions configured to perform the personalized recommendation method based on high-order feature interaction of any one of claims 1-6.
CN202510421791.3A 2025-04-07 2025-04-07 Personalized recommendation method, device and medium based on high-order feature interaction Active CN119917747B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510421791.3A CN119917747B (en) 2025-04-07 2025-04-07 Personalized recommendation method, device and medium based on high-order feature interaction

Publications (2)

Publication Number Publication Date
CN119917747A CN119917747A (en) 2025-05-02
CN119917747B true CN119917747B (en) 2025-06-27


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN121120293A (en) * 2025-11-17 2025-12-12 武汉大学 Social network popularity prediction method and system based on high-order nonlinear dependency

Citations (2)

Publication number Priority date Publication date Assignee Title
CN112396099A (en) * 2020-11-16 2021-02-23 哈尔滨工程大学 Click rate estimation method based on deep learning and information fusion
CN115221387A (en) * 2022-07-13 2022-10-21 全拓科技(杭州)股份有限公司 Enterprise information integration method based on deep neural network

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN109902222B (en) * 2018-11-30 2022-05-13 华为技术有限公司 Recommendation method and device
CN112016961A (en) * 2020-08-26 2020-12-01 北京字节跳动网络技术有限公司 Push method, apparatus, electronic device, and computer-readable storage medium
CN111949884B (en) * 2020-08-26 2022-06-21 桂林电子科技大学 A deep fusion recommendation method based on multimodal feature interaction
CN113793175B (en) * 2021-09-07 2024-06-28 广东工业大学 Advertisement click rate estimation method based on bilinear FFM and multi-head attention mechanism
CN113626719B (en) * 2021-10-12 2022-02-08 腾讯科技(深圳)有限公司 Information recommendation method, device, equipment, storage medium and computer program product
CN114169869B (en) * 2022-02-14 2022-06-07 北京大学 A method and device for job recommendation based on attention mechanism
CN116701611A (en) * 2023-05-25 2023-09-05 湖北工业大学 A recommendation method and system for learning knowledge graphs that integrate interactive attention
CN119441581A (en) * 2023-08-03 2025-02-14 腾讯科技(深圳)有限公司 Recommendation index prediction method, device, equipment and storage medium
CN118628195A (en) * 2024-05-13 2024-09-10 欣正实业发展总公司 Personalized learning recommendation system and method based on deep reinforcement learning




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant