Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
With the rapid development of big data and artificial intelligence technology, the data volume and data complexity in various industries are increasing year by year. Tabular data, especially time series data and multidimensional environment data, are widely used in scenarios such as energy consumption prediction, sales prediction, and rent price prediction. However, when processing such data, conventional prediction methods generally face the following problems:
1. Simple models, such as linear regression models, cannot handle complex combinations of multidimensional features, resulting in low prediction accuracy.
2. When existing machine learning algorithms are used for feature selection, features often must be selected manually or by a simple regression-based model, and the selection is difficult to adjust automatically and dynamically according to the data. Such methods cannot automatically identify key features, and prior knowledge cannot be effectively transferred to a new scenario, so the prediction accuracy and adaptability of the model are limited.
3. For different areas or scenarios (e.g., energy consumption or sales forecasting for a new store or a new area), existing models cannot effectively transfer existing knowledge and require retraining with extensive data, resulting in high costs.
Therefore, the application provides a tabular data feature selection and transfer learning method based on a user-defined network structure, which can automatically select features and dynamically set weights, learn commonalities from a nationwide base model, quickly transfer to a new scenario, reduce the training data requirements of the new scenario, and improve the accuracy and efficiency of prediction. The method has adaptive learning capability and can automatically adjust the selection and weighting of features, remarkably improving prediction accuracy and greatly reducing model training time in a new scenario. It is suitable for data scenarios with time series and multidimensional environment variables, such as energy consumption in the business field, apartment rents, the financial field (e.g., risk prediction), and the traffic field (e.g., bus passenger flow prediction).
A method and apparatus for target prediction according to embodiments of the present application will be described in detail with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for target prediction according to an embodiment of the present application. As shown in Fig. 1, the method for target prediction includes:
S101, acquiring original data, and extracting features of different dimensions from the original data to obtain original features of different dimensions.
Specifically, raw data for target prediction may be obtained, where the raw data may be understood as multidimensional tabular data collected from the historical data of the prediction task scenario. For example, in the prediction of real estate targets (e.g., sales, energy consumption, rent), the raw data may include time series data (e.g., date, hour), environmental data (e.g., temperature, humidity, carbon dioxide concentration, etc.), and other relevant features (e.g., store opening and closing times, holidays, etc.).
Further, feature extraction may be performed on the raw data to obtain raw features with different dimensions, where the feature dimensions may be numeric (e.g., continuous values of temperature, price, etc.) or categorical (e.g., discrete categories of region, brand type, etc.).
S102, determining key features of each dimension from the original features of each dimension, and determining a feature matrix based on the key features of each dimension and the original features of each dimension.
Specifically, after the original features of each dimension are obtained, key features can be determined from the original features of each dimension. The key features can be understood as the features with the greatest influence on, or the greatest contribution to, the target prediction task. For example, in a sales prediction task, factors such as weather temperature, promotional activity, and advertising expenditure have a greater influence on sales, so data of these dimensions can be extracted as key features. After the key features are determined, they can be combined with the original features to form a feature matrix containing all the important features, and target prediction is then performed based on the feature matrix.
In this way, key features can be automatically identified and screened from the multidimensional tabular data, so that the model can focus on important features and avoid interference from noise and irrelevant features. Manual feature selection is not needed, avoiding the subjectivity and bias of manual selection. Through dynamic feature selection and weighting based on the characteristics of the current scenario, the model can flexibly adjust the key features for different scenarios and tasks, improving the adaptability of the model in practical applications and remarkably improving the accuracy of target prediction results.
S103, target prediction is carried out according to the feature matrix, and a target prediction result is obtained.
According to the technical scheme provided by this embodiment, feature extraction of different dimensions is performed on the original data to obtain original features of different dimensions; key features of each dimension are determined from the original features of each dimension; a feature matrix is determined based on the key features and original features of each dimension; and finally target prediction is performed based on the feature matrix to obtain a target prediction result. In this way, automatic feature selection can replace the complex steps of manually selecting and processing features, adapt the feature extraction to various multidimensional complex data structures, and improve the inference capability and efficiency of the model, so that target prediction can be performed efficiently and accurately.
In some embodiments, determining the key features of each dimension from the original features of each dimension includes: performing a linear transformation on the original features of each dimension to obtain first features of each dimension; performing nonlinear processing on the first features of each dimension to obtain second features corresponding to each dimension; determining selective weights of the second features of each dimension; obtaining first weight features of each dimension according to the selective weights of the second features of each dimension and the corresponding second features; adding the first weight features of each dimension and the corresponding original features to obtain an addition result corresponding to each dimension; and normalizing the addition result corresponding to each dimension to obtain the key features of each dimension.
Fig. 2 is a flowchart of another method for target prediction according to an embodiment of the present application, and this embodiment is explained below with reference to Fig. 2.
Specifically, after the original features are extracted from the original data, the original features can be subjected to adaptive weighting processing; that is, the different features are weighted, filtered, and selected to obtain a more effective feature representation. The adaptive weighting can be performed through a custom selective residual network (SelectiveResidualNet), which can include a gating mechanism: the original features are selectively transmitted through the gating mechanism. This feature transmission mode not only enhances the model's capability of modeling nonlinear relations among features, but also ensures, through the residual connection, that feature information is not lost. It should be noted that the original features of each dimension may be processed separately, so as to obtain the key features corresponding to each dimension.
Firstly, in each dimension, a linear transformation can be performed on the original features through a fully connected layer (Dense), mapping the original features to a new feature space to obtain a first feature. A nonlinear activation function (such as the ReLU activation function) is then used to process the first feature to obtain a second feature, so that nonlinear factors are introduced and the model can capture more complex data patterns and relations.
Further, the second feature can be processed through a gating mechanism to obtain the selective weight of the second feature.
It should be noted that, before the selective weight of the second feature is generated, a random inactivation (dropout) layer may be added to the model structure; that is, the second feature is passed through another fully connected layer and then through the dropout layer, so as to increase the generalization capability of the model.
Further, the gating mechanism may be embodied as follows: the second feature is processed through a fully connected layer to obtain a processed second feature, and an activation function (e.g., the Sigmoid activation function) connected to the fully connected layer generates the selective weight of the second feature. In the selective weight generation process, the importance of each feature in the target prediction task can be automatically estimated through an attention mechanism, and features with higher weights obtain greater attention. The selective weight of the second feature is then multiplied element by element with the second feature processed by another fully connected layer, obtaining the first weight feature.
Further, after the first weight feature is obtained, the first weight feature corresponding to the dimension can be fused with the corresponding original feature of the input. Before the fusion, the corresponding original feature can be processed through a fully connected layer; the processed original feature and the first weight feature are added to obtain an addition result, and the addition result is then normalized to obtain the key feature of the corresponding dimension. Through the above steps, feature extraction can be performed on the original feature of each dimension to obtain the key features of each dimension.
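By way of illustration, the selective residual processing described above may be sketched as follows. This is a minimal sketch assuming a TensorFlow/Keras implementation; the class name SelectiveResidualNet is taken from the description above, but the constructor arguments, the projection of the original feature, and the use of layer normalization are illustrative assumptions rather than the definitive implementation of the application.

```python
import tensorflow as tf

class SelectiveResidualNet(tf.keras.layers.Layer):
    """Sketch of the selective residual block: gate, weight, residual add, normalize."""

    def __init__(self, units, dropout_rate=0.5, **kwargs):
        super().__init__(**kwargs)
        self.linear = tf.keras.layers.Dense(units)            # linear transformation -> first feature
        self.relu = tf.keras.layers.ReLU()                    # nonlinear processing -> second feature
        self.pre_gate = tf.keras.layers.Dense(units)          # fully connected layer before dropout
        self.dropout = tf.keras.layers.Dropout(dropout_rate)  # random inactivation for generalization
        self.gate = tf.keras.layers.Dense(units, activation="sigmoid")  # selective weight in (0, 1)
        self.value = tf.keras.layers.Dense(units)             # "another fully connected layer" for the second feature
        self.shortcut = tf.keras.layers.Dense(units)          # projects the original feature before the residual add
        self.norm = tf.keras.layers.LayerNormalization()      # normalizes the addition result

    def call(self, x, training=False):
        first = self.linear(x)                                # first feature
        second = self.relu(first)                             # second feature
        gated = self.gate(self.dropout(self.pre_gate(second), training=training))
        weighted = gated * self.value(second)                 # first weight feature (element-wise product)
        return self.norm(weighted + self.shortcut(x))         # residual add + normalization -> key feature
```

The residual connection through the projected original feature ensures that feature information is preserved even when the gate suppresses most of the weighted path.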
According to the technical scheme provided by this embodiment, the key features of each dimension are adaptively determined from the original features of each dimension through the selective residual network, so that the model can automatically extract the most critical features from the original features and manual intervention is reduced. The selective weights are generated dynamically, based on an attention mechanism, in combination with the requirements of the current scenario, so that the model can automatically adjust the contribution of each feature under different environments. Through the weighting and normalization operations, the model can better handle features of different dimensions and scales, avoiding the imbalance caused by scale differences during training and improving the stability and robustness of the model.
In some embodiments, determining the feature matrix based on the key features of each dimension and the original features of each dimension includes: performing fully connected layer processing on the original features of each dimension to obtain output features corresponding to each dimension; performing dimension stitching on the output features corresponding to all dimensions to obtain a stitching feature covering all dimensions; performing adaptive weighting on the stitching feature to obtain a second weight feature; and fusing the second weight feature with the key features of each dimension to obtain the feature matrix.
Fig. 3 is a flow chart of a method for target prediction according to an embodiment of the present application. The present embodiment will be described with reference to Fig. 3.
Fig. 3 may represent a custom dynamic feature selector (i.e., DynamicFeatureSelector) in the model, where each dynamic feature selector includes a selective residual network. The dynamic feature selector may be used to implement dynamic selection of features and to fuse the key features of each dimension with the original features to obtain the feature matrix. It should be understood that, since a selective residual network is used to process the original features of each dimension, the number of selective residual networks in the dynamic feature selector may be determined based on the number of dimensions of the input original features; of course, the same selective residual network may also process the original features of different dimensions sequentially, in which case only one layer of selective residual network is used, which is not limited herein.
Specifically, first, the input original features may be segmented to obtain original features of different dimensions, namely feature 1, feature 2, and so on. While the original features of each dimension are processed through the selective residual networks to obtain the key features, they can also be processed by fully connected layers to obtain the output features corresponding to each dimension. The output features corresponding to all dimensions are then spliced to obtain a splicing feature containing all dimensions, so that multiple feature vectors are combined into one higher-dimensional feature vector representing the comprehensive information of all dimensions.
Further, adaptive weighting is performed on the splicing feature using the selective residual network; a weighting mechanism (such as a Softmax activation function producing a probability distribution) can be introduced in the adaptive weighting process to allocate an importance weight to each part of the splicing feature, so as to obtain the second weight feature, which contains the weight for each dimension.
Further, the second weight feature is fused with the key feature of each dimension, so that a feature matrix can be obtained.
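Continuing the sketch above, and by way of illustration only, the dynamic feature selector may be expressed as follows. The class name DynamicFeatureSelector is taken from the description, while the per-dimension wiring and the Softmax-based generation of the second weight feature are one possible reading of this embodiment, not a definitive implementation.

```python
class DynamicFeatureSelector(tf.keras.layers.Layer):
    """Sketch: per-dimension key features, spliced output features, adaptive weighting, fusion."""

    def __init__(self, units, num_dims, dropout_rate=0.5, **kwargs):
        super().__init__(**kwargs)
        # one selective residual network per input dimension -> key features
        self.residual_nets = [SelectiveResidualNet(units, dropout_rate) for _ in range(num_dims)]
        # one fully connected layer per dimension -> output features
        self.per_dim_dense = [tf.keras.layers.Dense(units) for _ in range(num_dims)]
        # adaptive weighting over the splicing feature: one Softmax weight per dimension
        self.weighting = tf.keras.layers.Dense(num_dims, activation="softmax")

    def call(self, per_dim_inputs, training=False):
        key_feats = [net(x, training=training)
                     for net, x in zip(self.residual_nets, per_dim_inputs)]
        out_feats = [dense(x) for dense, x in zip(self.per_dim_dense, per_dim_inputs)]
        splicing = tf.concat(out_feats, axis=-1)              # splicing feature of all dimensions
        weights = self.weighting(splicing)                    # second weight feature: (batch, num_dims)
        stacked = tf.stack(key_feats, axis=1)                 # stacking feature: (batch, num_dims, units)
        return stacked * weights[:, :, tf.newaxis]            # feature matrix: weighted key features
```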
According to the technical scheme provided by this embodiment, the model's ability to capture the information of each dimension is gradually enhanced by processing the original features of each dimension step by step, realizing efficient feature processing. Adaptive weighting and feature fusion allow the importance of features to be flexibly adjusted, so that the most effective prediction information is extracted from the original features of different dimensions. By combining the original features of all dimensions, no information is lost during feature processing, so that the model can effectively fuse information when processing complex and heterogeneous data, improving prediction accuracy.
In other embodiments, fusing the second weight feature with the key features of each dimension to obtain the feature matrix includes: stacking the key features of all dimensions to obtain a stacking feature covering all dimensions, and multiplying the stacking feature with the second weight feature element by element to obtain the feature matrix.
Specifically, the fusion of the second weight feature with the key features of each dimension can be achieved by multiplying the features element by element.
Specifically, the key features of each dimension may be stacked dimension by dimension to form a higher-dimensional feature vector; that is, the key features of all dimensions are connected in series to form a combined feature set. The feature stacking method is not limited herein.
Further, the stacking feature and the second weight feature are multiplied element by element, which can be understood as the stacking feature being weighted again by the second weight feature to obtain the feature matrix, so that the feature matrix captures the key information of each dimension.
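As a toy illustration of this fusion step, with hypothetical shapes (a batch of 2 samples, 3 dimensions, key features of width 4), continuing the TensorFlow sketches above:

```python
key_feats = [tf.random.normal((2, 4)) for _ in range(3)]      # 3 key features for a batch of 2
second_weight = tf.constant([[0.5, 0.3, 0.2],
                             [0.1, 0.6, 0.3]])                # (batch, num_dims); each row sums to 1
stacked = tf.stack(key_feats, axis=1)                         # stacking feature: (2, 3, 4)
feature_matrix = stacked * second_weight[:, :, tf.newaxis]    # weight broadcast over the feature width
print(feature_matrix.shape)                                   # (2, 3, 4)
```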
According to the technical scheme provided by this embodiment, a stacking feature containing the key information of all dimensions is obtained by stacking the key features of each dimension, and the stacking feature is multiplied by the second weight feature element by element, ensuring that each element in the feature matrix is influenced by its corresponding weight. By weighting the features of different dimensions, the model can generate a more representative feature matrix and accurately predict the target according to the actual contribution of each dimension.
In some embodiments, the original data comprise discrete data and continuous data, and performing feature extraction of different dimensions on the original data to obtain original features of different dimensions includes: mapping the discrete data of different dimensions through an embedding layer to obtain continuous high-dimensional numerical feature representations corresponding to the different dimensions; applying feature scaling to the continuous data of different dimensions to obtain scaled continuous features corresponding to the different dimensions; and splicing the continuous high-dimensional numerical feature representation of each dimension with the corresponding scaled continuous features to obtain the original features of different dimensions.
Specifically, when target prediction is performed, the original data acquired from the historical data may include discrete data and continuous data, and corresponding processing modes can be selected for the different data types, so that complex data are converted into features usable by the model.
Discrete data can be understood as non-numeric, categorical data, such as text and classification labels. The discrete data can be converted by an embedding layer to obtain the continuous high-dimensional numerical feature representation; the embedding layer can map each discrete category (such as a label, word, or number) into a dense, low-dimensional vector space, generating a dense representation of the category.
Continuous data may represent numerical data, such as temperature, price, and age. The value range of continuous data may be very broad, so, in order to make these data suitable for the neural network model, feature scaling may be applied so that they are processed on the same scale: the value range of a feature may be adjusted to [0, 1], or the feature values may be aligned to a distribution with a mean of 0 and a standard deviation of 1. This ensures that each feature is treated consistently and fairly by the model during training, preventing features with overly large value ranges from dominating the model training process.
Further, the continuous high-dimensional numerical feature representations and the scaled continuous features are spliced to obtain the original features of different dimensions containing the corresponding feature information, which are then used as the input of the model for target prediction.
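By way of illustration, this preprocessing may be sketched as follows, assuming TensorFlow/Keras; the column names, vocabulary size, and embedding width are hypothetical, and standardization is used here as one of the two scaling options mentioned above:

```python
import numpy as np
import tensorflow as tf

region_ids = np.array([[0], [2], [1]])                        # discrete column (e.g., region category)
temperature = np.array([[21.5], [18.0], [25.3]])              # continuous column

embed = tf.keras.layers.Embedding(input_dim=10, output_dim=4) # 10 categories -> dense vectors
region_feat = tf.squeeze(embed(region_ids), axis=1)           # (3, 4) dense representation

norm = tf.keras.layers.Normalization(axis=-1)                 # scale to mean 0, standard deviation 1
norm.adapt(temperature)
temp_feat = tf.cast(norm(temperature), tf.float32)            # (3, 1) scaled continuous feature

raw_features = tf.concat([region_feat, temp_feat], axis=-1)   # spliced original features: (3, 5)
```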
According to the technical scheme provided by this embodiment, different types of features can be automatically processed through the embedding layer and feature scaling, without manually designing the features, improving the training efficiency, inference efficiency, and prediction accuracy of the model.
Fig. 4 is a flow chart of another method for target prediction according to an embodiment of the present application, and the scheme of the present application is further described below with reference to Fig. 4.
Firstly, before the continuous high-dimensional numerical feature representations and the scaled continuous features are generated, window sliding processing, feature differencing, moving average features, and the like can be applied to the original data to enhance the model's understanding of the temporal structure of the data and obtain the features input to the prediction model. The discrete features and continuous features contained in the input features are then processed separately: the discrete features are mapped to obtain continuous high-dimensional numerical features, the continuous features are feature-scaled to obtain scaled continuous features, and feature splicing is performed on the continuous high-dimensional numerical features and the scaled continuous features to obtain the original features of different dimensions.
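A minimal sketch of this time-series enrichment, using pandas; the column name and window sizes are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({"energy": [10.0, 12.0, 11.0, 15.0, 14.0, 16.0]})
df["energy_diff"] = df["energy"].diff()                       # feature differencing
df["energy_ma3"] = df["energy"].rolling(window=3).mean()      # moving average feature
df["energy_lag1"] = df["energy"].shift(1)                     # sliding-window (lag) feature
df = df.dropna()                                              # keep only rows with full history
```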
Further, the original features of different dimensions are input into the dynamic feature selector for adaptive dynamic weighting to obtain the feature matrix. In this process, multiple rounds of processing can be performed through a multi-layer dynamic feature selector, and the number of neurons (units) of each dynamic feature selector can be set when the model is built; for example, the first layer may be set to 128, the second layer to 96, and the third layer to 64, so that features of different dimensions are output by the dynamic feature selectors of the different layers and the key features are gradually extracted and screened. A random inactivation (dropout) layer is arranged in each layer of the dynamic feature selector to randomly discard some nodes, avoiding model overfitting (for example, dropout of 0.75 in the first layer, 0.5 in the second layer, and 0.25 in the third layer), enhancing the robustness of the model, and obtaining an accurate feature matrix.
The feature matrix is then input into a fully connected layer of the transfer learning fine-tuning network for processing, further integrating and generating the final feature representation; a linear activation function keeps the output continuous, and a fully connected layer outputs a scalar (units: 1) as the target prediction result.
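Continuing the sketches above, the overall model may be assembled as follows. The layer widths (128/96/64) and dropout rates (0.75/0.5/0.25) follow the examples given above; for readability, the deeper selector stages are approximated here by selective residual blocks over the flattened output of the previous stage, so the exact multi-layer wiring is an assumption, as are the head size and the layer names.

```python
def build_model(dim_sizes):
    inputs = [tf.keras.Input(shape=(d,)) for d in dim_sizes]
    x = DynamicFeatureSelector(units=128, num_dims=len(dim_sizes),
                               dropout_rate=0.75)(inputs)      # first selector layer
    x = tf.keras.layers.Flatten()(x)
    x = SelectiveResidualNet(96, dropout_rate=0.5)(x)          # second stage (approximation)
    x = SelectiveResidualNet(64, dropout_rate=0.25)(x)         # third stage (approximation)
    # transfer learning fine-tuning head: integration plus a linear scalar output
    x = tf.keras.layers.Dense(32, name="fine_tune_dense")(x)
    output = tf.keras.layers.Dense(1, activation="linear", name="prediction")(x)
    return tf.keras.Model(inputs=inputs, outputs=output)

model = build_model([5, 4, 8])                                 # three dimensions, hypothetical widths
```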
In this way, dynamic selection and weighting can be carried out through the custom network structure: weights can be automatically distributed to the input features according to their importance in the data, and the feature information is selectively filtered, ensuring that important features are retained and improving the prediction accuracy and generalization capability of the model.
In some embodiments, before the original data is acquired, the method further includes: acquiring first training data containing data of different scenarios; capturing commonalities of the prediction tasks under the different scenarios through the first training data, and obtaining a prediction model based on the commonalities, where the prediction model includes an embedding layer and a dynamic feature selector, and the dynamic feature selector is used for determining the key features of each dimension from the original features of each dimension and determining the feature matrix based on the key features of each dimension and the original features of each dimension; acquiring second training data corresponding to a target scenario; and fine-tuning the prediction model using the second training data to obtain a fine-tuned prediction model, where the fine-tuned prediction model is used for executing the target prediction task, and the target scenario characterizes the scenario in which the target prediction task is executed.
Specifically, a prediction model with a custom structure may be constructed and trained on first training data containing data of different scenarios, to obtain a prediction model that captures common prediction rules. A common rule represents a consistency pattern that still exists in the target prediction task across different scenarios, even though the environments and conditions differ; for example, in different user groups, some purchasing-behavior patterns may be similar. Finally, the data of the target prediction task in the current target scenario are used as second training data to fine-tune the prediction model so that it adapts to the specific prediction task, obtaining a fine-tuned model. The second training data can represent data collected in a specific scenario, whether historical data or artificially constructed data; on the basis of the prediction model, a small amount of new data can thus further train the model to adapt to new scenarios and requirements. The model structure is not repeated here.
In this way, a fine-tuned prediction model that better adapts to a specific target prediction task can be obtained through training and fine-tuning. By learning the common rules across multiple scenarios, the model has better generalization capability when facing a new scenario, and the transfer learning and fine-tuning method reduces the model's demand for new data, so that a good prediction effect in the specific scenario is achieved rapidly.
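Continuing the sketch, the pretraining step on pooled multi-scenario data might look as follows; the data here are randomly generated stand-ins for the first training data, and the file name is illustrative:

```python
import numpy as np

num_samples = 1000                                             # pooled records from several scenarios
scene_inputs = [np.random.rand(num_samples, d).astype("float32") for d in [5, 4, 8]]
targets = np.random.rand(num_samples, 1).astype("float32")     # stand-in prediction targets

base_model = build_model([5, 4, 8])
base_model.compile(optimizer="adam", loss="mse")
base_model.fit(scene_inputs, targets, epochs=10, batch_size=32)
base_model.save_weights("base_model.weights.h5")               # common base model, reused per scenario
```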
In some embodiments, the prediction model further comprises a transfer learning fine-tuning network layer, and fine-tuning the prediction model using the second training data to obtain the fine-tuned prediction model includes: dividing the second training data into a training set and a verification set; training the transfer learning fine-tuning network layer on the training set; evaluating the prediction accuracy of the trained prediction model according to the verification set and a preselected evaluation index, where the evaluation index includes a root mean square error and a mean absolute percentage error; and obtaining the fine-tuned prediction model when the prediction accuracy meets a preset accuracy.
Specifically, model fine-tuning can be performed through the transfer learning fine-tuning network layer in the prediction model, so that the model can adapt to the specific target prediction task.
Firstly, the second training data can be divided into a training set and a verification set (the division ratio is not limited), and the training set is used to train the transfer learning fine-tuning network layer; that is, the prediction model carrying the common prediction rules is applied to the specific scenario and adapted to the new task using the second training data, so that the parameters of the network layer are adjusted through the training set to improve the prediction accuracy on the new target prediction task. During transfer learning, most of the network structure of the prediction model can be frozen, and only the transfer learning fine-tuning network layer is unfrozen for parameter adjustment. After training is finished, the model can be verified on the verification set to judge whether the preset accuracy requirement is met. For the evaluation, suitable evaluation indexes can be selected to measure the prediction accuracy of the model: the root mean square error measures the error between the predicted values and the actual values, and the mean absolute percentage error measures the percentage error between the predicted values and the actual values; the smaller these errors, the closer the predicted results are to the actual values.
When the prediction accuracy of the model reaches the preset accuracy, that is, when the error reaches the preset error value, the model fine-tuning process is regarded as finished and the fine-tuned prediction model is obtained. The preset accuracy is not limited and can be set according to the actual situation.
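A minimal fine-tuning sketch under the same assumptions: everything except the fine-tuning head is frozen, the scenario-specific second training data are split into training and verification sets, and RMSE and MAPE are used as the evaluation indexes; the 80/20 split and the accuracy threshold are illustrative.

```python
model = build_model([5, 4, 8])
model.load_weights("base_model.weights.h5")                    # start from the common base model

for layer in model.layers:                                     # freeze all but the fine-tuning head
    layer.trainable = layer.name in ("fine_tune_dense", "prediction")

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError(name="rmse"),
                       tf.keras.metrics.MeanAbsolutePercentageError(name="mape")])

n = 200                                                        # stand-in second training data
new_inputs = [np.random.rand(n, d).astype("float32") for d in [5, 4, 8]]
new_targets = np.random.rand(n, 1).astype("float32")
model.fit(new_inputs, new_targets, validation_split=0.2, epochs=20, batch_size=16)

loss, rmse, mape = model.evaluate(new_inputs, new_targets)
if rmse < 0.1:                                                 # preset accuracy, set per application
    model.save("fine_tuned_model.keras")                       # fine-tuned prediction model
```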
Therefore, through fine-tuning, the model can be made more suitable for the specific target prediction task, further improving prediction accuracy. Through transfer learning and automatic fine-tuning, the model can automatically realize efficient knowledge transfer from a small amount of data in a new scenario, reducing the training data requirements in the new scenario, realizing efficient optimization of the model, reducing manual intervention, and improving model training efficiency.
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Fig. 5 is a schematic structural diagram of an apparatus for target prediction according to an embodiment of the present application. As shown in Fig. 5, the apparatus for target prediction includes:
the acquiring module 501 is configured to acquire original data, and extract features of different dimensions from the original data to obtain original features of different dimensions;
a feature extraction module 502 configured to determine key features of each dimension from the original features of each dimension, and determine a feature matrix based on the key features of each dimension and the original features of each dimension;
and the prediction module 503 is configured to perform target prediction according to the feature matrix to obtain a target prediction result.
In some embodiments, the feature extraction module 502 is specifically configured to: perform a linear transformation on the original features of each dimension to obtain first features of each dimension; perform nonlinear processing on the first features of each dimension to obtain second features corresponding to each dimension; determine a selective weight of the second features of each dimension; obtain first weight features of each dimension according to the selective weights of the second features of each dimension and the corresponding second features; add the first weight features of each dimension and the corresponding original features to obtain an addition result corresponding to each dimension; and normalize the addition result corresponding to each dimension to obtain the key features of each dimension.
In some embodiments, the feature extraction module 502 is specifically configured to process the original feature of each dimension through a full connection layer to obtain an output feature corresponding to each dimension, perform dimension stitching on the output features corresponding to all dimensions to obtain stitching features corresponding to all dimensions, perform adaptive weighting on the stitching features to obtain a second weight feature, and fuse the second weight feature with the key feature of each dimension to obtain a feature matrix.
In some embodiments, the feature extraction module 502 is specifically configured to stack key features of all dimensions to obtain stacked features corresponding to all dimensions, and multiply the stacked features with the second weight features element by element to obtain a feature matrix.
In some embodiments, the obtaining module 501 is specifically configured to map discrete data of different dimensions through the embedding layer to obtain continuous high-dimensional value feature representations corresponding to the different dimensions, apply feature scaling to the continuous data of different dimensions to obtain continuous features corresponding to the different dimensions after scaling, and splice the continuous high-dimensional value feature representations of each dimension and the corresponding scaled continuous features to obtain original features of different dimensions.
In some embodiments, the obtaining module 501 is specifically configured to: obtain first training data including data of different scenarios; capture commonalities of the prediction tasks under the different scenarios through the first training data, and obtain a prediction model based on the commonalities, where the prediction model includes an embedding layer and a dynamic feature selector, and the dynamic feature selector is configured to determine the key features of each dimension from the original features of each dimension and determine the feature matrix based on the key features of each dimension and the original features of each dimension; obtain second training data corresponding to a target scenario; and fine-tune the prediction model using the second training data to obtain a fine-tuned prediction model, where the fine-tuned prediction model is used to perform the target prediction task, and the target scenario characterizes the scenario in which the target prediction task is performed.
In some embodiments, the obtaining module 501 is specifically configured to divide the second training data into a training set and a verification set, train the transfer learning fine-tuning network layer on the training set, evaluate the prediction accuracy of the trained prediction model according to the verification set and a preselected evaluation index, where the evaluation index includes a root mean square error and a mean absolute percentage error, and obtain the fine-tuned prediction model if the prediction accuracy meets a preset accuracy.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 6 is a schematic diagram of an electronic device 6 according to an embodiment of the present application. As shown in Fig. 6, the electronic device 6 of this embodiment comprises a processor 601, a memory 602, and a computer program 603 stored in the memory 602 and executable on the processor 601. The steps of the various method embodiments described above are implemented by the processor 601 when executing the computer program 603. Alternatively, when executing the computer program 603, the processor 601 performs the functions of the modules/units of the apparatus embodiments described above.
The electronic device 6 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or the like. The electronic device 6 may include, but is not limited to, a processor 601 and a memory 602. It will be appreciated by those skilled in the art that fig. 6 is merely an example of the electronic device 6 and is not limiting of the electronic device 6 and may include more or fewer components than shown, or different components.
The processor 601 may be a central processing unit (Central Processing Unit, CPU) or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc.
The memory 602 may be an internal storage unit of the electronic device 6, for example, a hard disk or a memory of the electronic device 6. The memory 602 may also be an external storage device of the electronic device 6, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the electronic device 6. The memory 602 may also include both internal and external storage units of the electronic device 6. The memory 602 is used to store computer programs and other programs and data required by the electronic device.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit.
The integrated modules/units may be stored in a readable storage medium if implemented in the form of software functional units and sold or used as stand-alone products. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, and the computer program may be stored in a readable storage medium; when executed by a processor, the computer program can implement the steps of the method embodiments described above. The computer program may comprise computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium can include any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The foregoing embodiments are merely for illustrating the technical solution of the present application, but not for limiting the same, and although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that the technical solution described in the foregoing embodiments may be modified or substituted for some of the technical features thereof, and that these modifications or substitutions should not depart from the spirit and scope of the technical solution of the embodiments of the present application and should be included in the protection scope of the present application.